Paul Bunyan
Paul Bunyan is a giant lumberjack and folk hero in American and Canadian folklore. His exploits revolve around the tall tales of his superhuman labors, and he is customarily accompanied by Babe the Blue Ox. The character originated in the oral tradition of North American loggers, and was later popularized by freelance writer William B. Laughead (1882–1958) in a 1916 promotional pamphlet for the Red River Lumber Company. He has been the subject of various literary compositions, musical pieces, commercial works, and theatrical productions. His likeness is displayed in several oversized statues across North America.
There are many hypotheses about the etymology of the name "Paul Bunyan". Much of the commentary focuses on a Franco-Canadian origin for the name. Phonetically, Bunyan is similar to the Québécois expression "bon yenne!", expressing surprise or astonishment. The English surname Bunyan is derived from the same root as "bunion", the Old French "bugne", referring to a large lump or swelling. Several researchers have attempted to trace Paul Bunyan to the character of Bon Jean of French Canadian folklore.
Michael Edmonds states in his 2009 book "Out of the Northwoods: The Many Lives of Paul Bunyan" that Paul Bunyan stories circulated for at least thirty years before finding their way into print. In contrast to the lengthy narratives abundant in published material, Paul Bunyan "stories" when told in the lumbercamp bunkhouses were presented in short fragments. Some of these stories include motifs from older folktales, such as absurdly severe weather and fearsome critters. Parallels in early printings support the view that at least a handful of Bunyan stories hold a common origin in folklore.
The first known reference to Paul Bunyan in print appeared in the March 17, 1893 issue of the "Gladwin County Record", under the local news section for the area of Beaverton, where it reads, "Paul Bunion ["sic"] is getting ready while the water is high to take his drive out." This line was presumably an inside joke, as it appeared over fifteen years before any commercial use of the Paul Bunyan name. At the time, few within the general public would have known who Paul Bunyan was.
The earliest recorded story of Paul Bunyan is an uncredited 1904 editorial in the "Duluth News Tribune" which recounts:
Each of these elements recurs in later accounts, including logging the Dakotas, a giant camp, the winter of the blue snow, and stove skating. All four anecdotes are mirrored in J. E. Rockwell's "Some Lumberjack Myths" six years later, and James MacGillivray wrote on the subject of stove skating in "Round River" four years before that. MacGillivray's account, somewhat extended, reappeared in "The American Lumberman" in 1910. "The American Lumberman" followed up with a few sporadic editorials, such as "Paul Bunyan's Oxen," "In Paul Bunyan's Cook Shanty," and "Chronicle of Life and Works of Mr. Paul Bunyan." Rockwell's earlier story was one of the few to allude to Paul Bunyan's large stature, "eight feet tall and weighed 300 pounds," and introduce his big blue ox, before Laughead commercialized Paul Bunyan, although W. D. Harrigan referred to a giant pink ox in "Paul Bunyan's Oxen," circa 1914. In all the articles, Paul Bunyan is praised as a logger of great physical strength and unrivaled skill.
K. Bernice Stewart, a student at the University of Wisconsin, was working contemporaneously with Laughead to gather Paul Bunyan stories from woodsmen in the Midwest. Stewart was able to make a scholarly anthology of original anecdotes through a series of interviews. These were published in 1916 as "Legends of Paul Bunyan, Lumberjack" in "Transactions of the Wisconsin Academy of Sciences, Arts and Letters" and coauthored by her English professor Homer A. Watt. The research relates traditional narratives, some in multiple versions, and goes on to conclude that many probably existed in some part before they were set to revolve around Bunyan as a central character. Stewart argued in her analysis that Paul Bunyan belongs to a class of traveler's tales.
Charles E. Brown was the curator of the Museum of the State Historical Society of Wisconsin and secretary of the Wisconsin Archaeological Society. He was another principal researcher who recorded early Paul Bunyan stories from lumberjacks. He published these anecdotes in short pamphlet format for the use of students of folklore. Much of his research was financed through the government-funded Wisconsin Writers' Program.
In 2007, Michael Edmonds of the Wisconsin Historical Society began a thorough reinvestigation of the Paul Bunyan tradition, publishing his findings in "Out of the Northwoods: The Many Lives of Paul Bunyan". Edmonds concluded that Paul Bunyan had origins in the oral traditions of woodsmen working in Wisconsin camps during the turn of the 20th century, but such stories were heavily embellished and popularized by commercial interests.
Laughead, in 1916, devised the original advertising pamphlet for the Red River Lumber Company utilizing the Paul Bunyan folk character. Laughead reworked original folklore while adding some tales of his own. This has led to significant confusion as to Paul Bunyan's legitimacy as a genuine folkloric character. Laughead took many liberties with the original oral source material. While still a lumberjack of gigantic stature and size with extreme power and strength, Paul Bunyan's height was magnified so as to tower over trees and Laughead attributed to him the creation of several American landscapes, landmarks and natural wonders. Laughead noted that Paul Bunyan and Babe are said to have created the 10,000 lakes of Minnesota by their footprints.
Later commentators elaborated in more detail, pointing to bodies of water such as Lake Bemidji. Some observers have noted that Lake Bemidji itself has a shape resembling a giant footprint when viewed from high above. Furthermore, later authors, and possibly tourist agents, would add other geographic features to those Paul Bunyan was supposed to have created. Among others, Paul Bunyan has been credited with creating the Grand Canyon by dragging his ax behind him, and Mount Hood by piling stones on his campfire.
Running at variance to his origins in folklore, the character of Paul Bunyan has become a fixture for juvenile audiences since his debut in print. Typical among such adaptations is the further embellishment of stories pulled directly from William B. Laughead's pamphlet, and with very few elements from oral tradition adapted into them. Nearly all of the literature is presented in long narrative format, exaggerates Paul Bunyan's height to colossal proportions, and follows him from infancy to adulthood.
Some of the more enduring collections of stories include "Paul Bunyan" by James Stevens, "Paul Bunyan Swings His Axe" by Dell J. McCormick, "Paul Bunyan" by Esther Shephard, "Paul Bunyan and His Great Blue Ox" by Wallace Wadsworth, and "The Marvelous Exploits of Paul Bunyan" by William Laughead.
"Legends of Paul Bunyan" (1947) was the first book published by the prolific tall tale writer Harold Felton.
In 1958, Walt Disney Studios produced "Paul Bunyan" as an animated short musical. The feature starred Thurl Ravenscroft, perhaps best known as the voice of Tony the Tiger for The Kellogg Company, and was nominated for an Academy Award for Best Animated Short Film.
In the 1995 Disney film "Tall Tale", Paul Bunyan is played by Oliver Platt. Contrary to the usual image of Bunyan's gigantism, Platt's Paul is depicted as a man of average height, but compensated with a "larger than life" personality consistent with the film's "over the top" nature.
In 2017, an animated film based loosely on the folktale titled "Bunyan and Babe" was released, starring John Goodman as Paul Bunyan.
Commentators such as Carleton C. Ames, Marshall Fishwick, and particularly Richard Dorson cite Paul Bunyan as an example of "fakelore," a literary invention passed off as an older folktale. They point out that the majority of books about Paul Bunyan are composed almost entirely of elements with no basis in folklore, especially those targeted at juvenile audiences. Modern commercial writers are credited with setting Paul Bunyan on his rise to a nationally recognized figure, but this ignores the historical roots of the character in logging camps and forest industries.
At the same time, several authors have come forward to propose that the legend of Paul Bunyan was based on a real person. D. Laurence Rogers and others have suggested a possible connection between Paul Bunyan tales and the exploits of French-Canadian lumberjack Fabian "Saginaw Joe" Fournier (1845–1875). From 1865 to 1875, Fournier worked for the H. M. Loud Company in the Grayling, Michigan area. James Stevens in his 1925 book "Paul Bunyan" makes another unverified claim that Paul Bunyan was a soldier in the Papineau Rebellion named Paul Bon Jean, and this is occasionally repeated in other accounts.
Stewart and Watt acknowledge that they have not yet succeeded in definitively finding out whether Bunyan was based on an actual person or was wholly mythical. They have noted, however, that some of the older lumberjacks whom they interviewed claimed to have known him or members of his crew, and the supposed location of his grave was actually pointed out in northern Minnesota. Bunyan's extreme gigantism was a later invention, and early stories either do not mention it or, as in the Stewart and Watt paper, refer to him as being about seven feet tall.
Included in this section is a comparison chart between early Paul Bunyan references, the Stewart and Watt paper, and the Laughead advertisement.
William B. Laughead, an independent adman, was the first to put Paul Bunyan to commercial use, in a series of campaigns for the Red River Lumber Company. His first endeavor was a pamphlet entitled "Introducing Mr. Paul Bunyan of Westwood, California", but it did not prove effective. It was not until "Tales about Paul Bunyan, Vol. II" appeared that the campaign gained momentum. Embellishing older exploits and adding some of his own, Laughead's revamped Paul Bunyan did not stay faithful to the original folktales. Among other things, Laughead gave the name "Babe" to the blue ox, increased Paul Bunyan's height to impossible proportions, and created the first pictorial representation of Bunyan. This has led to significant confusion regarding the validity of Paul Bunyan as a genuine folkloric character. Nevertheless, the Laughead pamphlets are regarded as one of the most popular collections, often appearing in a single, unabridged volume entitled "The Marvelous Exploits of Paul Bunyan."
The Red River ad campaign ingrained Paul Bunyan as a nationally recognized figure, and it also affirmed his massive marketing appeal. Throughout the better part of the century, Paul Bunyan's name and image continued to be utilized in promoting various products, cities, and services. Across North America, giant statues of Paul Bunyan were erected to promote local businesses and tourism. A significant portion of these were produced from the 1960s through the 1970s by the company International Fiberglass as part of their "Muffler Men" series of giant fiberglass sculptures. Since 2014 a paved biking trail bears the name "Paul Bunyan Trail" and spans 120 miles, from Crow Wing State Park to Lake Bemidji State Park. Many cities through which the trail passes sell trinkets and novelty items from Paul Bunyan folklore.
The Bemidji Blue Ox Marathon (started in 2013) runs along the Paul Bunyan State Trail, around Lake Bemidji and past the Paul Bunyan and Babe the Blue Ox statues.
The statue of Paul Bunyan is regularly mentioned in the novel "It" by Stephen King.
Pear
Pears are trees and shrubs of the genus "Pyrus", in the family Rosaceae, bearing the pomaceous fruit of the same name. Several species of pear are valued for their edible fruit and juices, while others are cultivated as ornamental trees.
The tree is medium-sized and native to coastal as well as mildly temperate regions of Europe, north Africa and Asia. Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture.
About 3000 known varieties of pears are grown worldwide. The fruit is consumed fresh, canned, as juice, and dried. In 2017, world production of pears was 24 million tonnes, with China as the main producer.
The word "pear" is probably from Germanic "pera" as a loanword of Vulgar Latin "pira", the plural of "pirum", akin to Greek "apios" (from Mycenaean "ápisos"), of Semitic origin ("pirâ"), meaning "fruit". The adjective "pyriform" or "piriform" means pear-shaped.
The pear is native to coastal and mildly temperate regions of the Old World, from western Europe and north Africa east right across Asia. It is a medium-sized tree, reaching tall, often with a tall, narrow crown; a few species are shrubby.
The leaves are alternately arranged, simple, long, glossy green on some species, densely silvery-hairy in some others; leaf shape varies from broad oval to narrow lanceolate. Most pears are deciduous, but one or two species in southeast Asia are evergreen. Most are cold-hardy, withstanding temperatures as low as in winter, except for the evergreen species, which only tolerate temperatures down to about .
The flowers are white, rarely tinted yellow or pink, diameter, and have five petals. Like that of the related apple, the pear fruit is a pome, in most wild species diameter, but in some cultivated forms up to long and broad; the shape varies in most species from oblate or globose, to the classic pyriform 'pear-shape' of the European pear with an elongated basal portion and a bulbous end.
The fruit is composed of the receptacle or upper end of the flower-stalk (the so-called calyx tube) greatly dilated. Enclosed within its cellular flesh is the true fruit: five 'cartilaginous' carpels, known colloquially as the "core". From the upper rim of the receptacle are given off the five sepals, the five petals, and the very numerous stamens.
Pears and apples cannot always be distinguished by the form of the fruit; some pears look very much like some apples, e.g. the nashi pear. One major difference is that the flesh of pear fruit contains stone cells.
Pear cultivation in cool temperate climates extends to the remotest antiquity, and there is evidence of its use as a food since prehistoric times. Many traces of it have been found in prehistoric pile dwellings around Lake Zurich. Pears were cultivated in China as early as 2000 BC. The word “pear”, or its equivalent, occurs in all the Celtic languages, while in Slavic and other dialects, differing appellations, still referring to the same thing, are found—a diversity and multiplicity of nomenclature which led Alphonse Pyramus de Candolle to infer a very ancient cultivation of the tree from the shores of the Caspian to those of the Atlantic.
The pear was also cultivated by the Romans, who ate the fruits raw or cooked, just like apples. Pliny's "Natural History" recommended stewing them with honey and noted three dozen varieties. The Roman cookbook "De re coquinaria" has a recipe for a spiced, stewed-pear "patina", or soufflé.
A certain race of pears, with white down on the undersurface of their leaves, is supposed to have originated from "P. nivalis", and their fruit is chiefly used in France in the manufacture of perry (see also cider). Other small-fruited pears, distinguished by their early ripening and apple-like fruit, may be assigned to "P. cordata", a species found wild in western France and southwestern England. Pears have been cultivated in China for approximately 3000 years.
The genus is thought to have originated in present-day Western China in the foothills of the Tian Shan, a mountain range of Central Asia, and to have spread to the north and south along mountain chains, evolving into a diverse group of over 20 widely recognized primary species. The enormous number of varieties of the cultivated European pear ("Pyrus communis" subsp. "communis"), are without doubt derived from one or two wild subspecies ("P. communis" subsp. "pyraster" and "P. communis" subsp. "caucasica"), widely distributed throughout Europe, and sometimes forming part of the natural vegetation of the forests. Court accounts of Henry III of England record pears shipped from La Rochelle-Normande and presented to the King by the Sheriffs of the City of London. The French names of pears grown in English medieval gardens suggest that their reputation, at the least, was French; a favored variety in the accounts was named for Saint Rule or Regul', Bishop of Senlis.
Asian species with medium to large edible fruit include "P. pyrifolia", "P. ussuriensis", "P. × bretschneideri", "P. × sinkiangensis", and "P. pashia." Other small-fruited species are frequently used as rootstocks for the cultivated forms.
According to Pear Bureau Northwest, about 3000 known varieties of pears are grown worldwide.
The pear is normally propagated by grafting a selected variety onto a rootstock, which may be of a pear variety or quince. Quince rootstocks produce smaller trees, which is often desirable in commercial orchards or domestic gardens. For new varieties the flowers can be cross-bred to preserve or combine desirable traits. The fruit of the pear is produced on spurs, which appear on shoots more than one year old.
Three species account for the vast majority of edible fruit production, the European pear "Pyrus communis" subsp. "communis" cultivated mainly in Europe and North America, the Chinese white pear ("bai li") "Pyrus ×bretschneideri", and the Nashi pear "Pyrus pyrifolia" (also known as Asian pear or apple pear), both grown mainly in eastern Asia. There are thousands of cultivars of these three species. A species grown in western China, "P. sinkiangensis", and "P. pashia", grown in southern China and south Asia, are also produced to a lesser degree.
Other species are used as rootstocks for European and Asian pears and as ornamental trees. Pear wood is close-grained and at least in the past was used as a specialized timber for fine furniture and making the blocks for woodcuts. The Manchurian or Ussurian Pear, "Pyrus ussuriensis" (which produces unpalatable fruit) has been crossed with "Pyrus communis" to breed hardier pear cultivars. The Bradford pear ("Pyrus calleryana" 'Bradford') in particular has become widespread in North America, and is used only as an ornamental tree, as well as a blight-resistant rootstock for "Pyrus communis" fruit orchards. The Willow-leaved pear ("Pyrus salicifolia") is grown for its attractive, slender, densely silvery-hairy leaves.
The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
The purely decorative cultivar "P. salicifolia" 'Pendula', with pendulous branches and silvery leaves, has also won the award.
Summer and autumn cultivars of "Pyrus communis", being climacteric fruits, are gathered before they are fully ripe, while they are still green, but snap off when lifted. In the case of the 'Passe Crassane', long the favored winter pear in France, the crop is traditionally gathered at three different times: the first a fortnight or more before it is ripe, the second a week or ten days after that, and the third when fully ripe. The first gathering will come into eating last, and thus the season of the fruit may be considerably prolonged.
In 2017, world production of pears was 24.2 million tonnes, led by China with 68% of the total (table).
Pears may be stored at room temperature until ripe. Pears are ripe when the flesh around the stem gives to gentle pressure. Ripe pears are optimally stored refrigerated, uncovered in a single layer, where they have a shelf life of 2 to 3 days.
Pears are consumed fresh, canned, as juice, and dried. The juice can also be used in jellies and jams, usually in combination with other fruits, including berries. Fermented pear juice is called perry or pear cider and is made in a way that is similar to how cider is made from apples.
Pears ripen at room temperature. Ripening is accelerated by the gas ethylene. If pears are placed next to bananas in a fruit bowl, the ethylene emitted by the banana causes the pears to ripen. Refrigeration will slow further ripening. Pear Bureau Northwest offers tips on ripening and judging ripeness: Although the skin on Bartlett pears changes from green to yellow as they ripen, most varieties show little color change as they ripen. Because pears ripen from the inside out, the best way to judge ripeness is to "check the neck": apply gentle thumb pressure to the neck or stem end of the pear. If it yields to gentle pressure, then the pear is ripe, sweet, and juicy. If it is firm, leave the pear at room temperature and check daily for ripeness.
The culinary or cooking pear is green but dry and hard, and only edible after several hours of cooking. Two Dutch cultivars are "" (a sweet variety) and "" (slightly sour).
Pear wood is one of the preferred materials in the manufacture of high-quality woodwind instruments and furniture, and was used for making the carved blocks for woodcuts. It is also used for wood carving, and as a firewood to produce aromatic smoke for smoking meat or tobacco. Pear wood is valued for kitchen spoons, scoops and stirrers, as it does not contaminate food with color, flavor or smell, and resists warping and splintering despite repeated soaking and drying cycles. Lincoln describes it as "a fairly tough, very stable wood... (used for) carving... brushbacks, umbrella handles, measuring instruments such as set squares and T-squares... recorders... violin and guitar fingerboards and piano keys... decorative veneering." Pearwood is the favored wood for architect's rulers because it does not warp. It is similar to the wood of its relative, the apple tree ("Malus domestica") and used for many of the same purposes.
Raw pear is 84% water, 15% carbohydrates and contains negligible protein and fat (table). In a 100 g reference amount, raw pear supplies 57 calories, a moderate source of dietary fiber, and no other essential nutrients in significant amounts (table).
Pears grow in the sublime orchard of Alcinous, in "Odyssey" vii: "Therein grow trees, tall and luxuriant, pears and pomegranates and apple-trees with their bright fruit, and sweet figs, and luxuriant olives. Of these the fruit perishes not nor fails in winter or in summer, but lasts throughout the year."
'A Partridge in a Pear Tree' is the first gift in "The Twelve Days of Christmas" cumulative song. This verse is repeated twelve times in the song.
The pear tree was an object of particular veneration (as was the Walnut) in the Tree worship of the Nakh peoples of the North Caucasus – see Vainakh mythology and see also Ingushetia – the best-known of the Vainakh peoples today being the Chechens of Chechnya in the Russian Federation.
Pear and walnut trees were held to be the sacred abodes of beneficent spirits in pre-Islamic Chechen religion and, for this reason, it was forbidden to fell them.
PowerPC
PowerPC (with the backronym Performance Optimization With Enhanced RISC – Performance Computing, sometimes abbreviated as PPC) is a reduced instruction set computer (RISC) instruction set architecture (ISA) created by the 1991 Apple–IBM–Motorola alliance, known as "AIM". PowerPC, as an evolving instruction set, has since 2006 been named Power ISA, while the old name lives on as a trademark for some implementations of Power Architecture–based processors.
PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. Originally intended for personal computers, the architecture is well known for being used by Apple's Power Macintosh, PowerBook, iMac, iBook, and Xserve lines from 1994 until 2006, when Apple migrated to Intel's x86. It has since become a niche in personal computers, but remains popular for embedded and high-performance processors. It saw wide use in seventh-generation video game consoles and in embedded applications. In addition, PowerPC CPUs are still used in AmigaOne and third-party AmigaOS 4 personal computers.
PowerPC is largely based on IBM's earlier POWER instruction set architecture, and retains a high level of compatibility with it; the architectures have remained close enough that the same programs and operating systems will run on both if some care is taken in preparation; newer chips in the POWER series use the Power ISA.
The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer, where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, eventually becoming the 16-register IBM ROMP processor used in the IBM RT PC. The RT PC was a rapid design implementing the RISC architecture. Between the years of 1982 and 1984, IBM started a project to build the fastest microprocessor on the market; this new 32-bit architecture became referred to as the "America Project" throughout its development cycle, which lasted for approximately 5–6 years. The result is the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990.
The original POWER microprocessor, one of the first superscalar RISC implementations, is a high performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines. Work began on a one-chip POWER microprocessor, designated the RSC (RISC Single Chip). In early 1991, IBM realized its design could potentially become a high-volume microprocessor used across the industry.
Apple had already realized the limitations and risks of its dependency upon a single CPU vendor at a time when Motorola was falling behind on delivering the 68040 CPU. Furthermore, Apple had conducted its own research and made an experimental quad-core CPU design called Aquarius, which convinced the company's technology leadership that the future of computing was in the RISC methodology. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture. Soon after, Apple, being one of Motorola's largest customers of desktop-class microprocessors, asked Motorola to join the discussions due to their long relationship, Motorola having had more extensive experience with manufacturing high-volume microprocessors than IBM, and to form a second source for the microprocessors. This three-way collaboration between Apple, IBM, and Motorola became known as the AIM alliance.
In 1991, the PowerPC was just one facet of a larger alliance among these three companies. At the time, most of the personal computer industry was shipping systems based on the Intel 80386 and 80486 chips, which have a complex instruction set computer (CISC) architecture, and development of the Pentium processor was well underway. The PowerPC chip was one of several joint ventures involving the three alliance members, in their efforts to counter the growing Microsoft-Intel dominance of personal computing.
For Motorola, POWER looked like an unbelievable deal. It allowed the company to sell a widely tested and powerful RISC CPU for little design cash on its own part. It also maintained ties with an important customer, Apple, and seemed to offer the possibility of adding IBM too, which might buy smaller versions from Motorola instead of making its own.
At this point Motorola already had its own RISC design in the form of the 88000, which was doing poorly in the market. Motorola was doing well with its 68000 family and the majority of the funding was focused on this. The 88000 effort was somewhat starved for resources.
The 88000 was already in production, however; Data General was shipping 88000 machines and Apple already had 88000 prototype machines running. The 88000 had also achieved a number of embedded design wins in telecom applications. If the new POWER one-chip version could be made bus-compatible at a hardware level with the 88000, that would allow both Apple and Motorola to bring machines to market far faster since they would not have to redesign their board architecture.
The result of these various requirements is the PowerPC ("performance computing") specification. The differences between the earlier POWER instruction set and that of PowerPC is outlined in Appendix E of the manual for PowerPC ISA v.2.02.
Since 1991, IBM had a long-standing desire for a unifying operating system that would simultaneously host all existing operating systems as personalities upon one microkernel. From 1991 to 1995, the company designed and aggressively evangelized what would become Workplace OS, primarily targeting PowerPC.
When the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors. Microsoft released Windows NT 3.51 for the architecture, which was used in Motorola's PowerPC servers, and Sun Microsystems offered a version of its Solaris OS. IBM ported its AIX Unix. Workplace OS featured a new port of OS/2 (with Intel emulation for application compatibility), pending a successful launch of the PowerPC 620. Throughout the mid-1990s, PowerPC processors achieved benchmark test scores that matched or exceeded those of the fastest x86 CPUs.
Ultimately, demand for the new architecture on the desktop never truly materialized. Windows, OS/2, and Sun customers, faced with the lack of application software for the PowerPC, almost universally ignored the chip. IBM's Workplace OS platform (and thus, OS/2 for PowerPC) was summarily canceled upon its first developers' release in December 1995 due to the simultaneous buggy launch of the PowerPC 620. The PowerPC versions of Solaris and Windows were discontinued after only a brief period on the market. Only on the Macintosh, due to Apple's persistence, did the PowerPC gain traction. To Apple, the performance of the PowerPC was a bright spot in the face of increased competition from Windows 95 and Windows NT-based PCs.
With the cancellation of Workplace OS, the general PowerPC platform (especially AIM's Common Hardware Reference Platform) was instead seen as a hardware-only compromise to run many operating systems one at a time upon a single unifying vendor-neutral hardware platform.
In parallel with the alliance between IBM and Motorola, both companies had development efforts underway internally. The PowerQUICC line was the result of this work inside Motorola. The 4xx series of embedded processors was underway inside IBM. The IBM embedded processor business grew to nearly US$100 million in revenue and attracted hundreds of customers.
Toward the close of the decade, manufacturing issues began plaguing the AIM alliance in much the same way they did Motorola, which consistently pushed back deployments of new processors for Apple and other vendors: first from Motorola in the 1990s with the PowerPC 7xx and 74xx processors, and IBM with the 64-bit PowerPC 970 processor in 2003. In 2004, Motorola exited the chip manufacturing business by spinning off its semiconductor business as an independent company called Freescale Semiconductor. Around the same time, IBM exited the 32-bit embedded processor market by selling its line of PowerPC products to Applied Micro Circuits Corporation (AMCC) and focusing on 64-bit chip designs, while maintaining its commitment of PowerPC CPUs toward game console makers such as Nintendo's GameCube and Wii, Sony's PlayStation 3 and Microsoft's Xbox 360, of which the latter two both use 64-bit processors. In 2005, Apple announced they would no longer use PowerPC processors in their Apple Macintosh computers, favoring Intel-produced processors instead, citing the performance limitations of the chip for future personal computer hardware specifically related to heat generation and energy usage, as well as the inability of IBM to move the 970 processor to the 3 GHz range. The IBM-Freescale alliance was replaced by an open standards body called Power.org. Power.org operates under the governance of the IEEE with IBM continuing to use and evolve the PowerPC processor on game consoles and Freescale Semiconductor focusing solely on embedded devices.
IBM continues to develop PowerPC microprocessor cores for use in their application-specific integrated circuit (ASIC) offerings. Many high volume applications embed PowerPC cores.
The PowerPC specification is now handled by Power.org where IBM, Freescale, and AMCC are members. PowerPC, Cell and POWER processors are now jointly marketed as the Power Architecture. Power.org released a unified ISA, combining POWER and PowerPC ISAs into the new Power ISA v.2.03 specification and a new reference platform for servers called PAPR (Power Architecture Platform Reference).
IBM's POWER microprocessors, which implement the Power ISA, are used by IBM in their IBM Power Systems, running IBM i, AIX, and Linux.
Many PowerPC designs are named and labeled by their apparent technology generation. That began with the "G3", an internal AIM project name for the development of what would become the PowerPC 750 family. Apple popularized the term "G3" when it introduced the Power Mac G3 and PowerBook G3 at an event on 10 November 1997. Motorola and Apple liked the moniker and used the term "G4" for the 7400 family introduced in 1998 and the Power Mac G4 in 1999.
At the time the G4 was launched, Motorola categorized all their PowerPC models (former, current and future) according to the generation they adhered to, even renaming the older 603e core "G2". Motorola had a G5 project that never came to fruition, but the name stuck, and Apple reused it when the 970 family launched in 2003, even though those chips were designed and built by IBM.
The PowerPC is designed along RISC principles, and allows for a superscalar implementation. Versions of the design exist in both 32-bit and 64-bit implementations. Starting with the basic POWER specification, the PowerPC added:
Some instructions present in the POWER instruction set were deemed too complex and were removed in the PowerPC architecture. Some removed instructions could be emulated by the operating system if necessary. The removed instructions are:
Most PowerPC chips switch endianness via a bit in the MSR (machine state register), with a second bit provided to allow the OS to run with a different endianness. Accesses to the "inverted page table" (a hash table that functions as a TLB with off-chip storage) are always done in big-endian mode. The processor starts in big-endian mode.
In little-endian mode, the three lowest-order bits of the effective address are exclusive-ORed with a three bit value selected by the length of the operand. This is enough to appear fully little-endian to normal software. An operating system will see a warped view of the world when it accesses external chips such as video and network hardware. Fixing this warped view requires that the motherboard perform an unconditional 64-bit byte swap on all data entering or leaving the processor. Endianness thus becomes a property of the motherboard. An OS that operates in little-endian mode on a big-endian motherboard must both swap bytes and undo the exclusive-OR when accessing little-endian chips.
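The address modification described above can be sketched in a few lines of Python. This is a simplified illustrative model of aligned accesses, not vendor documentation; the XOR values 7, 6, 4 and 0 are simply 8 minus the operand length in bytes.

```python
# Simplified model (an assumption for illustration) of PowerPC little-endian
# mode: the low three bits of the effective address are XORed with
# (8 - operand size), i.e. 7, 6, 4 or 0 for 1-, 2-, 4- and 8-byte accesses.
# Memory itself stays big-endian; only the addresses are modified.

def munge(addr: int, size: int) -> int:
    """Physical address used for a little-endian-mode access of `size` bytes."""
    return addr ^ (8 - size)

# One doubleword stored big-endian, as the processor always stores it internally.
mem = bytearray((0x0123456789ABCDEF).to_bytes(8, "big"))

def load(addr: int, size: int) -> int:
    p = munge(addr, size)
    # The bytes within the access are still fetched in big-endian order;
    # the address munge alone produces little-endian-looking results.
    return int.from_bytes(mem[p:p + size], "big")

assert load(0, 1) == 0xEF                 # low byte, as little-endian code expects
assert load(2, 2) == 0x89AB               # halfword at offset 2
assert load(0, 4) == 0x89ABCDEF           # low word
assert load(0, 8) == 0x0123456789ABCDEF   # 8-byte access: XOR value is 0
```

The last assertion also illustrates why a 64-bit value stored in one endian mode reads back unchanged after a mode switch: for the longest operand format the XOR value is zero, so the access is unaffected by the mode.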
AltiVec operations, despite being 128-bit, are treated as if they were 64-bit. This allows for compatibility with little-endian motherboards that were designed prior to AltiVec.
An interesting side effect of this implementation is that a program can store a 64-bit value (the longest operand format) to memory while in one endian mode, switch modes, and read back the same 64-bit value without seeing a change of byte order. This will not be the case if the motherboard is switched at the same time.
Mercury Systems and Matrox ran the PowerPC in little-endian mode. This was done so that PowerPC devices serving as co-processors on PCI boards could share data structures with host computers based on x86. Both PCI and x86 are little-endian. OS/2 and Windows NT for PowerPC ran the processor in little-endian mode, while Solaris, AIX and Linux ran it in big-endian mode.
Some of IBM's embedded PowerPC chips use a per-page endianness bit. None of the previous applies to them.
The first implementation of the architecture was the PowerPC 601, released in 1992, based on the RSC, implementing a hybrid of the POWER1 and PowerPC instructions. This allowed the chip to be used by IBM in their existing POWER1-based platforms, although it also meant some slight pain when switching to the 2nd generation "pure" PowerPC designs. Apple continued work on a new line of Macintosh computers based on the chip, and eventually released them as the 601-based "Power Macintosh" on March 14, 1994.
IBM also had a full line of PowerPC based desktops built and ready to ship; unfortunately, the operating system that IBM had intended to run on these desktops—Microsoft Windows NT—was not complete by early 1993, when the machines were ready for marketing. Accordingly, and further because IBM had developed animosity toward Microsoft, IBM decided to port OS/2 to the PowerPC in the form of Workplace OS. This new software platform spent three years (1992 to 1995) in development and was canceled with the December 1995 developer release, because of the disappointing launch of the PowerPC 620. For this reason, the IBM PowerPC desktops did not ship, although the reference design (codenamed Sandalbow) based on the PowerPC 601 CPU was released as an RS/6000 model ("Byte"s April 1994 issue included an extensive article about the Apple and IBM PowerPC desktops).
Apple, which also lacked a PowerPC based OS, took a different route. Utilizing the portability platform yielded by the secret Star Trek project, the company ported the essential pieces of their Mac OS operating system to the PowerPC architecture, and further wrote a 68k emulator that could run 68k based applications and the parts of the OS that had not been rewritten.
The second generation was "pure" and includes the "low end" PowerPC 603 and "high end" PowerPC 604. The 603 is notable for its very low cost and power consumption. This was a deliberate design goal on Motorola's part, which used the 603 project to build the basic core for all future generations of PPC chips. Apple tried to use the 603 in a new laptop design but was unable to because of the small 8 KiB level 1 cache. The 68000 emulator in the Mac OS could not fit in 8 KiB and thus slowed the computer drastically. The 603e solved this problem by having a 16 KiB L1 cache, which allowed the emulator to run efficiently.
In 1993, developers at IBM's Essex Junction, Burlington, Vermont facility started to work on a version of the PowerPC that would support the Intel x86 instruction set directly on the CPU. While this was just one of several concurrent Power architecture projects that IBM was working on, the chip came to be known inside IBM and in the media as the PowerPC 615. Profitability concerns and rumors of performance issues in switching between the x86 and native PowerPC instruction sets resulted in the project being canceled in 1995, after only a limited number of chips were produced for in-house testing. Rumors aside, the switching process took only five cycles, the amount of time needed for the processor to empty its instruction pipeline. Microsoft also aided the processor's demise by refusing to support the PowerPC mode.
The first 64-bit implementation is the PowerPC 620, but it appears to have seen little use because Apple didn't want to buy it and because, with its large die area, it was too costly for the embedded market. It was later and slower than promised, and IBM used their own POWER3 design instead, offering no 64-bit "small" version until the late-2002 introduction of the PowerPC 970. The 970 is a 64-bit processor derived from the POWER4 server processor. To create it, the POWER4 core was modified to be backward-compatible with 32-bit PowerPC processors, and a vector unit (similar to the AltiVec extensions in Motorola's 74xx series) was added.
IBM's RS64 processors are a family of chips implementing the "Amazon" variant of the PowerPC architecture. These processors are used in the RS/6000 and AS/400 computer families; the Amazon architecture includes proprietary extensions used by AS/400. The POWER4 and later POWER processors implement the Amazon architecture and replaced the RS64 chips in the RS/6000 and AS/400 families.
IBM developed a separate product line called the "4xx" line focused on the embedded market. These designs included the 401, 403, 405, 440, and 460. In 2004, IBM sold their 4xx product line to Applied Micro Circuits Corporation (AMCC). AMCC continues to develop new high performance products, partly based on IBM's technology, along with technology that was developed within AMCC. These products focus on a variety of applications including networking, wireless, storage, printing/imaging and industrial automation.
By unit volume, the PowerPC is mostly found in controllers in cars. For the automotive market, Freescale Semiconductor initially offered many variations called the MPC5xx family, such as the MPC555, built on a variation of the 601 core called the 8xx and designed in Israel by MSIL (Motorola Silicon Israel Limited). The core is single-issue, meaning it can issue only one instruction per clock cycle. To this core various pieces of custom hardware are added to allow for I/O on the one chip. In 2004, the next-generation four-digit 55xx devices were launched for the automotive market. These use the newer e200 series of PowerPC cores.
Networking is another area where embedded PowerPC processors are found in large numbers. MSIL took the QUICC engine from the MC68302 and made the PowerQUICC MPC860, a widely deployed processor used in many Cisco edge routers in the late 1990s. Variants of the PowerQUICC include the MPC850 and the MPC823/MPC823e. All variants include a separate RISC microengine called the CPM that offloads communications processing tasks from the central processor and has functions for DMA. The follow-on chip from this family, the MPC8260, has a 603e-based core and a different CPM.
Honda also uses PowerPC processors for ASIMO.
In 2003, BAE SYSTEMS Platform Solutions delivered the Vehicle-Management Computer for the F-35 fighter jet. This platform consists of dual PowerPCs made by Freescale in a triple redundant setup.
Operating systems that work on the PowerPC architecture are generally divided into those that are oriented toward the general-purpose PowerPC systems, and those oriented toward the embedded PowerPC systems.
Companies that have licensed the 64-bit POWER or 32-bit PowerPC from IBM include:
PowerPC processors were used in a number of now-discontinued video game consoles:
The Power architecture is currently used in the following desktop computers:
The Power architecture is currently used in the following embedded applications:
Pope Urban II
Pope Urban II (; – 29 July 1099), otherwise known as Odo of Châtillon or Otho de Lagery, was the bishop of Rome and ruler of the Papal States from 12 March 1088 to his death. He is best known for initiating the Crusades.
Odo was a native of France, descended from a noble family of Châtillon-sur-Marne. He began his studies at the nearby cathedral school of Reims in 1050.
Before his papacy Odo was the grand prior of Cluny and bishop of Ostia. As pope, he dealt with Antipope Clement III, the infighting of various Christian nations, and the Muslim incursions into Europe. In 1095 he began preaching the First Crusade (1095–99). He promised forgiveness and pardon for all the past sins of those who would fight to reclaim the Holy Land from Muslims and free the eastern churches. This pardon would also apply to those who fought the Muslims in Spain. He also set up the modern-day Roman Curia in the manner of a royal ecclesiastical court to help run the Church.
Pope Leo XIII beatified him on 14 July 1881.
Urban, baptized Eudes (Odo), was born to a family of Châtillon-sur-Marne. He was prior of the abbey of Cluny; later, Pope Gregory VII named him cardinal-bishop of Ostia. He was one of the most prominent and active supporters of the Gregorian reforms, especially as legate in the Holy Roman Empire in 1084. He was among the three whom Gregory VII nominated as "papabile" (possible successors). Desiderius, the abbot of Monte Cassino, was chosen to follow Gregory in 1085 but, after his short reign as Victor III, Odo was elected by acclamation at a small meeting of cardinals and other prelates held in Terracina in March 1088.
From the outset, Urban had to reckon with the presence of Guibert, the former bishop of Ravenna who held Rome as the antipope "Clement III". Gregory had repeatedly clashed with the emperor Henry IV over papal authority. Despite the Walk to Canossa, Gregory had backed the rebel Duke of Swabia and again excommunicated the emperor. Henry finally took Rome in 1084 and installed Clement III in his place.
Urban took up the policies of Pope Gregory VII and, while pursuing them with determination, showed greater flexibility and diplomatic finesse. Usually kept away from Rome, Urban toured northern Italy and France. A series of well-attended synods held in Rome, Amalfi, Benevento, and Troia supported him in renewed declarations against simony, lay investitures, clerical marriages (partly via the "cullagium" tax), and the emperor and his antipope. He facilitated the marriage of Matilda, countess of Tuscany, with Welf II, duke of Bavaria. He supported the rebellion of Prince Conrad against his father and bestowed the office of groom on Conrad at Cremona in 1095. While there, he helped arrange the marriage between Conrad and Maximilla, the daughter of Count Roger of Sicily, which occurred later that year at Pisa; her large dowry helped finance Conrad's continued campaigns. The Empress Adelaide was encouraged in her charges of sexual coercion against her husband, Henry IV. He supported the theological and ecclesiastical work of Anselm, negotiating a solution to the cleric's impasse with King William II of England and finally receiving England's support against the Imperial pope in Rome.
Urban maintained vigorous support for his predecessors' reforms, however, and did not shy from supporting Anselm when the new archbishop of Canterbury fled England. Likewise, despite the importance of French support for his cause, he upheld his legate Hugh of Die's excommunication of King Philip over his doubly bigamous marriage with Bertrade de Montfort, wife of the Count of Anjou. (The ban was repeatedly lifted and reimposed as the king promised to forswear her and then repeatedly returned to her. A public penance in 1104 ended the controversy, although Bertrade remained active in attempting to see her sons succeed Philip instead of Louis.)
Urban II's movement took its first public shape at the Council of Piacenza, where, in March 1095, Urban II received an ambassador from the Byzantine Emperor Alexios I Komnenos asking for help against the Muslim Seljuk Turks, who had taken over most of formerly Byzantine Anatolia. In November of the same year a great council met at Clermont, attended by Italian, Burgundian, and French bishops in such vast numbers that it had to be held in the open air outside the city. Though the Council of Clermont was primarily focused on reforms within the church hierarchy, Urban II gave a speech on 27 November 1095 to a broader audience. The sermon proved highly effective, as he summoned the attending nobility and the people to wrest the Holy Land, and the eastern churches generally, from the control of the Seljuk Turks.
There exists no exact transcription of the speech that Urban delivered at the Council of Clermont. The five extant versions of the speech were written down some time later, and they differ widely from one another. All versions of the speech except that by Fulcher of Chartres were probably influenced by the chronicle account of the First Crusade called the "Gesta Francorum" (written c. 1101), which includes a version of it. Fulcher of Chartres was present at the Council, though he did not start writing his history of the crusade, including a version of the speech, until c. 1101. Robert the Monk may have been present, but his version dates from about 1106. The five versions of Urban's speech likely reflect much more clearly what later authors thought Urban II should have said to launch the First Crusade than what Urban II actually did say.
As a better means of evaluating Urban's true motives in calling for a crusade to the Holy Land, there are four extant letters written by Pope Urban himself: one to the Flemish (dated December 1095); one to the Bolognese (dated September 1096); one to Vallombrosa (dated October 1096); and one to the counts of Catalonia (dated either 1089 or 1096–1099). However, whereas the first three letters were concerned with rallying popular support for the Crusades and establishing their objectives, his letter to the Catalonian lords instead beseeches them to continue the fight against the Moors, assuring them that doing so would offer the same divine rewards as a conflict against the Seljuks. It is Urban II's own letters, rather than the paraphrased versions of his speech at Clermont, that reveal his actual thinking about crusading. Nevertheless, the versions of the speech have had a great influence on popular conceptions and misconceptions about the Crusades, so it is worth comparing the five composed speeches to Urban's actual words. Fulcher of Chartres has Urban saying this:
The chronicler Robert the Monk put this into the mouth of Urban II: ... this land which you inhabit, shut in on all sides by the seas and surrounded by the mountain peaks, is too narrow for your large population; nor does it abound in wealth; and it furnishes scarcely food enough for its cultivators. Hence it is that you murder one another, that you wage war, and that frequently you perish by mutual wounds. Let therefore hatred depart from among you, let your quarrels end, let wars cease, and let all dissensions and controversies slumber. Enter upon the road to the Holy Sepulchre; wrest that land from the wicked race, and subject it to yourselves ... God has conferred upon you above all nations great glory in arms. Accordingly undertake this journey for the remission of your sins, with the assurance of the imperishable glory of the Kingdom of Heaven.
Robert continued:
When Pope Urban had said these ... things in his urbane discourse, he so influenced to one purpose the desires of all who were present, that they cried out "It is the will of God! It is the will of God!". When the venerable Roman pontiff heard that, [he] said: "Most beloved brethren, today is manifest in you what the Lord says in the Gospel, 'Where two or three are gathered together in my name there am I in the midst of them.' Unless the Lord God had been present in your spirits, all of you would not have uttered the same cry. For, although the cry issued from numerous mouths, yet the origin of the cry was one. Therefore I say to you that God, who implanted this in your breasts, has drawn it forth from you. Let this then be your war-cry in combats, because this word is given to you by God. When an armed attack is made upon the enemy, let this one cry be raised by all the soldiers of God: It is the will of God! It is the will of God!"
Fulcher of Chartres's account of Pope Urban's speech includes a promise of remission of sins for whoever took part in the crusade: All who die by the way, whether by land or by sea, or in battle against the pagans, shall have immediate remission of sins. This I grant them through the power of God with which I am invested. O what a disgrace if such a despised and base race, which worships demons, should conquer a people which has the faith of omnipotent God and is made glorious with the name of Christ! With what reproaches will the Lord overwhelm us if you do not aid those who, with us, profess the Christian religion! Let those who have been accustomed unjustly to wage private warfare against the faithful now go against the infidels and end with victory this war which should have been begun long ago. Let those who for a long time, have been robbers, now become knights. Let those who have been fighting against their brothers and relatives now fight in a proper way against the barbarians. Let those who have been serving as mercenaries for small pay now obtain the eternal reward. Let those who have been wearing themselves out in both body and soul now work for a double honor. Behold! on this side will be the sorrowful and poor, on that, the rich; on this side, the enemies of the Lord, on that, his friends. Let those who go not put off the journey, but rent their lands and collect money for their expenses; and as soon as winter is over and spring comes, let them eagerly set out on the way with God as their guide.
It is disputed whether the famous slogan "God wills it" or "It is the will of God" ("deus vult" in Latin, "Dieu le veut" in French) in fact was established as a rallying cry during the Council. While Robert the Monk says so, it is also possible that the slogan was created as a catchy propaganda motto afterwards.
Urban II's own letter to the Flemish confirms that he granted "remission of all their sins" to those undertaking the enterprise to liberate the eastern churches. One notable contrast with the speeches recorded by Robert the Monk, Guibert of Nogent, and Baldric of Dol is the lesser emphasis on Jerusalem itself, which Urban only once mentions as his own focus of concern. In the letter to the Flemish he writes, "they [the Turks] have seized the Holy City of Christ, embellished by his passion and resurrection, and—blasphemy to say—have sold her and her churches into abominable slavery." In the letters to Bologna and Vallombrosa he refers to the crusaders' desire to set out for Jerusalem rather than to his own desire that Jerusalem be freed from Muslim rule. It is believed that Urban originally wanted to send a relatively small force to aid the Byzantines; however, after meeting with two prominent members of the crusade, Adhemar of Le Puy and Raymond of Saint-Gilles, Urban decided to rally a much larger force to retake Jerusalem. Urban II refers to liberating the church as a whole or the eastern churches generally rather than to reconquering Jerusalem itself. The phrases used are "churches of God in the eastern region" and "the eastern churches" (to the Flemish), "liberation of the Church" (to Bologna), "liberating Christianity [Lat. Christianitatis]" (to Vallombrosa), and "the Asian church" (to the Catalan counts). Coincidentally or not, Fulcher of Chartres's version of Urban's speech makes no explicit reference to Jerusalem. Rather it more generally refers to aiding the crusaders' Christian "brothers of the eastern shore," and to their loss of Asia Minor to the Turks.
It is still disputed what Pope Urban's motives were, as evidenced by the different speeches that were recorded, all of which differ from one another. Some historians believe that Urban wished for the reunification of the eastern and western churches, a rift caused by the Great Schism of 1054. Others believe that Urban saw this as an opportunity to gain legitimacy as pope, since at the time he was contending with the antipope Clement III. A third theory is that Urban felt threatened by the Muslim incursions into Europe and saw the crusades as a way to unite the Christian world in a unified defense against them.
The most important effect of the First Crusade for Urban himself was the removal of Clement III from Rome in 1097 by one of the French armies. His restoration there was supported by Matilda of Tuscany.
Urban II died on 29 July 1099, fourteen days after the fall of Jerusalem to the Crusaders, but before news of the event had reached Italy; his successor was Pope Paschal II.
Urban also gave support to the crusades in Spain against the Moors. He was concerned that the focus on the east and Jerusalem would neglect the fight in Spain. Seeing the fighting in the east and in Spain as parts of the same crusade, he offered the same remission of sin to those who fought in Spain and discouraged those who wished to travel east from Spain.
Urban received vital support in his conflict with the Byzantine Empire, the Romans, and the Holy Roman Empire from the Normans of Campania and Sicily. In return he granted Roger I the freedom to appoint bishops as a right of lay investiture, the right to collect Church revenues before forwarding them to the papacy, and the right to sit in judgment on ecclesiastical questions. Roger I virtually became a legate of the Pope within Sicily. In 1098 these were extraordinary prerogatives that popes were withholding from temporal sovereigns elsewhere in Europe, and they later led to bitter confrontations with Roger's Hohenstaufen heirs.
Pope Urban was beatified in 1881 by Pope Leo XIII with his feast day on 29 July.
Pope Urban IV
Pope Urban IV (; c. 1195 – 2 October 1264), born Jacques Pantaléon, was the head of the Catholic Church and ruler of the Papal States from 29 August 1261 to his death. He was not a cardinal; only a few popes since his time have not been cardinals, including Gregory X, Urban V and Urban VI.
Pantaléon was the son of a cobbler of Troyes, France. He studied theology and canon law in Paris and was appointed a canon of Laon and later Archdeacon of Liège. At the First Council of Lyon (1245) he attracted the attention of Pope Innocent IV, who sent him on two missions in Germany. One of the missions was to negotiate the Treaty of Christburg between the pagan Prussians and the Teutonic Knights. He became Bishop of Verdun in 1253. In 1255, Pope Alexander IV made him Latin Patriarch of Jerusalem.
Pantaléon had returned from Jerusalem, which was in dire straits, and was at Viterbo seeking help for the oppressed Christians in the East when Alexander IV died. After a three-month vacancy, Pantaléon was chosen by the eight cardinals of the Sacred College to succeed him in a papal election that concluded on 29 August 1261. He chose the regnal name of Urban IV.
A fortnight before Urban's election, the Latin Empire of Constantinople, founded during the ill-fated Fourth Crusade against the Byzantines, fell to the Byzantines led by Emperor Michael VIII Palaiologos. Urban IV endeavoured without success to stir up a crusade to restore the Latin Empire.
Urban initiated construction of the Basilica of St. Urbain, Troyes, in 1262.
The festival of Corpus Christi ("the Body of Christ") was instituted by Urban on August 11, 1264, with the publication of the papal bull "Transiturus."
Urban asked Thomas Aquinas, the Dominican theologian, to write the texts for the Mass and Office of the feast. This included such famous hymns as the "Pange lingua, Tantum ergo," and "Panis angelicus".
Urban became involved in the affairs of Denmark. Jakob Erlandsen, Archbishop of Lund, wanted to make the Danish Church independent of the Royal power - which put him in direct confrontation with the Dowager Queen Margaret Sambiria, acting as regent for her son, King Eric V of Denmark. The Queen imprisoned the Archbishop, who responded by issuing an interdict. Both sides tried to get the Pope's support. The Pope agreed to several items that the Queen wanted - especially, he issued a dispensation to alter the terms of the Danish succession that would permit women to inherit the Danish throne. However, the main issues remained unsolved by Urban's death, with the case continuing at the papal court in Rome and the exiled Archbishop Erlandsen coming to Italy to pursue it in person.
In fact, the convoluted affairs of distant Denmark were only a minor concern to the Pope. It was Italy which commanded Urban's nearly full attention: the long confrontation with the late Hohenstaufen German Emperor Frederick II had not been pressed during the mild pontificate of Alexander IV, during which it devolved into inter-urban struggles between nominally pro-Imperial Ghibelline and even more nominally pro-papal Guelf factions. Frederick II's heir Manfred was immersed in these struggles.
Urban's military captain was the condottiere Azzo d'Este, nominally at the head of a loose league of cities that included Mantua and Ferrara. Any Hohenstaufen in Sicily was bound to have claims over the cities of Lombardy, and as a check to Manfred, Urban introduced Charles of Anjou into the equation to place the crown of the Kingdom of Sicily in the hands of a monarch amenable to papal control. Charles was Count of Provence by right of his wife, maintaining a rich base for projecting what would be an expensive Italian war.
For two years, Urban negotiated with Manfred over whether Manfred would aid the Latins in regaining Constantinople in return for papal confirmation of the Hohenstaufen rights in the realm. Meanwhile, the papal pact with Charles solidified: a promise of papal ships and men, funded by a crusading tithe, in exchange for Charles's promise not to lay claim to Imperial lands in northern Italy or to the Papal States. Charles promised to restore the annual "census", or feudal tribute, due the Pope as overlord, some 10,000 ounces of gold being agreed upon, while the Pope would work to block Conradin's election as King of the Germans.
Before the arrival in Italy of his candidate Charles, Urban IV died at Perugia on 2 October 1264. His successor was Pope Clement IV, who immediately took up the papal side of the arrangement.
There is a story that the pope's death was related to the Great Comet of 1264: he fell sick around the time of the comet's arrival and died when the comet disappeared.
Tannhäuser, a prominent German Minnesänger and poet, was a contemporary of Urban—the pope died in 1264, and the Minnesänger died shortly after 1265. Two centuries later, the pope became a major character in a legend which grew up about the Minnesänger, which is first attested in 1430 and propagated in ballads from 1450.
The legendary account makes Tannhäuser a knight and poet who found the Venusberg, the subterranean home of Venus, and spent a year there worshipping the goddess. After leaving the Venusberg, Tannhäuser is filled with remorse and travels to Rome to ask Pope Urban IV if it is possible to be absolved of his sins. Urban replies that forgiveness is as impossible as it would be for his papal staff to send forth green leaves. Three days after Tannhäuser's departure Urban's staff begins to grow new leaves; messengers are sent to retrieve the knight, but he has already returned to Venusberg, never to be seen again; while the Pope, for refusing a penitent, is damned eternally. There is, however, no historical evidence for the events in the legend.
Pandora
In Greek mythology, Pandora (Greek: , derived from , "pān", i.e. "all" and , "dōron", i.e. "gift", thus "the all-endowed", "all-gifted" or "all-giving") was the first human woman created by Hephaestus on the instructions of Zeus. As Hesiod related it, each god cooperated by giving her unique gifts. Her other name—inscribed against her figure on a white-ground "kylix" in the British Museum—is Anesidora (), "she who sends up gifts" ("up" implying "from below" within the earth).
The Pandora myth is a kind of theodicy, addressing the question of why there is evil in the world. According to this, Pandora opened a jar ("pithos"), in modern accounts sometimes mistranslated as "Pandora's box", releasing all the evils of humanity. Hesiod's interpretation of Pandora's story went on to influence both Jewish and Christian theology and so perpetuated her bad reputation into the Renaissance. Later poets, dramatists, painters and sculptors made her their subject and over the course of five centuries contributed new insights into her motives and significance.
Hesiod, both in his "Theogony" (briefly, without naming Pandora outright, line 570) and in "Works and Days", gives the earliest version of the Pandora story.
The Pandora myth first appeared in lines 560–612 of Hesiod's poem in epic meter, the "Theogony" (c. 8th–7th centuries BC), without ever giving the woman a name. After humans received the stolen gift of fire from Prometheus, an angry Zeus decides to give humanity a punishing gift to compensate for the boon they had been given. He commands Hephaestus to mold from earth the first woman, a "beautiful evil" whose descendants would torment the human race. After Hephaestus does so, Athena dresses her in a silvery gown, an embroidered veil, garlands and an ornate crown of silver. This woman goes unnamed in the "Theogony", but is presumably Pandora, whose myth Hesiod revisited in "Works and Days". When she first appears before gods and mortals, "wonder seized them" as they looked upon her. But she was "sheer guile, not to be withstood by men." Hesiod elaborates (590–93):
From her is the race of women and female kind:
of her is the deadly race and tribe of women who
live amongst mortal men to their great trouble,
no helpmates in hateful poverty, but only in wealth.
Hesiod goes on to lament that men who try to avoid the evil of women by avoiding marriage will fare no better (604–7):
He reaches deadly old age without anyone to tend his years,
and though he at least has no lack of livelihood while he lives,
yet, when he is dead, his kinsfolk divide his possessions amongst them.
Hesiod concedes that occasionally a man finds a good wife, but still (609) "evil contends with good."
The more famous version of the Pandora myth comes from another of Hesiod's poems, "Works and Days". In this version of the myth (lines 60–105), Hesiod expands upon her origin, and moreover widens the scope of the misery she inflicts on humanity. As before, she is created by Hephaestus, but now more gods contribute to her completion (63–82): Athena taught her needlework and weaving (63–4); Aphrodite "shed grace upon her head and cruel longing and cares that weary the limbs" (65–6); Hermes gave her "a shameful mind and deceitful nature" (67–8); Hermes also gave her the power of speech, putting in her "lies and crafty words" (77–80); Athena then clothed her (72); next Persuasion and the Charites adorned her with necklaces and other finery (72–4); the Horae adorned her with a garland crown (75). Finally, Hermes gives this woman a name: Pandora – "All-gifted" – "because all the Olympians gave her a gift" (81). (In Greek, "Pandora" has an active rather than a passive meaning; hence, Pandora properly means "All-giving." The implications of this mistranslation are explored in "All-giving Pandora: mythic inversion?" below.) In this retelling of her story, Pandora's deceitful feminine nature becomes the least of humanity's worries. For she brings with her a jar (which, due to textual corruption in the sixteenth century, came to be called a box) containing "burdensome toil and sickness that brings death to men" (91–2), diseases (102) and "a myriad other pains" (100). Prometheus had (fearing further reprisals) warned his brother Epimetheus not to accept any gifts from Zeus. But Epimetheus did not listen; he accepted Pandora, who promptly scattered the contents of her jar. As a result, Hesiod tells us, "the earth and sea are full of evils" (101). One item, however, did not escape the jar (96–9):
Only Hope was left within her unbreakable house,
she remained under the lip of the jar, and did not
fly away. Before [she could], Pandora replaced the
lid of the jar. This was the will of aegis-bearing
Zeus the Cloudgatherer.
Hesiod does not say why hope ("elpis") remained in the jar, and closes with the moral (105): "Thus it is not possible to escape the mind of Zeus."
Hesiod also outlines how the end of man's Golden Age (an all-male society of immortals who were reverent to the gods, worked hard, and ate from abundant groves of fruit) was brought on by Prometheus. When he stole Fire from Mt. Olympus and gave it to mortal man, Zeus punished the technologically advanced society by creating a woman. Thus, Pandora was created and given the jar (mistranslated as 'box') which releases all evils upon man.
Archaic and Classic Greek literature seem to make little further mention of Pandora, but mythographers later filled in minor details or added postscripts to Hesiod's account. For example, the "Bibliotheca" and Hyginus each make explicit what might be latent in the Hesiodic text: Epimetheus married Pandora. They each add that the couple had a daughter, Pyrrha, who married Deucalion and survived the deluge with him. However, the Hesiodic "Catalogue of Women" had made a "Pandora" one of the "daughters" of Deucalion, and the mother of Graecus by Zeus. In the 15th century AD an attempt was made to conjoin pagan and scriptural narrative by the monk Annio da Viterbo, who claimed to have found an account by the ancient Chaldean historian Berossus in which "Pandora" was named as a daughter-in-law of Noah in the alternative Flood narrative.
The mistranslation of "pithos", a large storage jar, as "box" is usually attributed to the sixteenth century humanist Erasmus of Rotterdam when he translated Hesiod's tale of Pandora into Latin. Hesiod's "pithos" refers to a large storage jar, often half-buried in the ground, used for wine, oil or grain. It can also refer to a funerary jar. Erasmus, however, translated "pithos" into the Latin word "pyxis", meaning "box". The phrase "Pandora's box" has endured ever since.
Historic interpretations of the Pandora figure are rich enough to have offered Dora and Erwin Panofsky scope for monographic treatment. M. L. West writes that the story of Pandora and her jar is from a pre-Hesiodic myth, and that this explains the confusion and problems with Hesiod's version and its inconclusiveness. He writes that in earlier myths, Pandora was married to Prometheus, and cites the ancient Hesiodic "Catalogue of Women" as preserving this older tradition, and that the jar may have at one point contained only good things for humanity. He also writes that it may have been that Epimetheus and Pandora and their roles were transposed in the pre-Hesiodic myths, a "mythic inversion". He remarks that there is a curious correlation between Pandora being made out of earth in Hesiod's story, to what is in the "Bibliotheca" that Prometheus created man from water and earth. Hesiod's myth of Pandora's jar, then, could be an amalgam of many variant early myths.
The meaning of Pandora's name, according to the myth provided in "Works and Days", is "all-gifted". However, according to others Pandora more properly means "all-giving". Certain vase paintings dated to the 5th century BC likewise indicate that the pre-Hesiodic myth of the goddess Pandora endured for centuries after the time of Hesiod. An alternative name for Pandora attested on a white-ground kylix (ca. 460 BC) is "Anesidora", which similarly means "she who sends up gifts." This vase painting clearly depicts Hephaestus and Athena putting the finishing touches on the first woman, as in the "Theogony". Written above this figure (a convention in Greek vase painting) is the name "Anesidora". More commonly, however, the epithet "anesidora" is applied to Gaea or Demeter. In view of such evidence, William E. Phipps has pointed out, "Classics scholars suggest that Hesiod reversed the meaning of the name of an earth goddess called Pandora (all-giving) or Anesidora (one-who-sends-up-gifts). Vase paintings and literary texts give evidence of Pandora as a mother earth figure who was worshipped by some Greeks. The main English commentary on "Works and Days" states that Hesiod shows no awareness [of this]."
Jane Ellen Harrison also turned to the repertory of vase-painters to shed light on aspects of myth that were left unaddressed or disguised in literature. On a fifth-century amphora in the Ashmolean Museum (her fig.71) the half-figure of Pandora emerges from the ground, her arms upraised in the epiphany gesture, to greet Epimetheus. A winged "ker" with a fillet hovers overhead: "Pandora rises from the earth; she "is" the Earth, giver of all gifts," Harrison observes. Over time this "all-giving" goddess somehow devolved into an "all-gifted" mortal woman. A.H. Smith, however, noted that in Hesiod's account Athena and the Seasons brought wreaths of grass and spring flowers to Pandora, indicating that Hesiod was conscious of Pandora's original "all-giving" function. For Harrison, therefore, Hesiod's story provides "evidence of a shift from matriarchy to patriarchy in Greek culture. As the life-bringing goddess Pandora is eclipsed, the death-bringing human Pandora arises." Thus, Harrison concludes "in the patriarchal mythology of Hesiod her great figure is strangely changed and diminished. She is no longer Earth-Born, but the creature, the handiwork of Olympian Zeus." (Harrison 1922:284). Robert Graves, quoting Harrison, asserts of the Hesiodic episode that "Pandora is not a genuine myth, but an anti-feminist fable, probably of his own invention." H.J. Rose wrote that the myth of Pandora is decidedly more illiberal than that of epic in that it makes Pandora the origin of all of Man's woes with her being the exemplification of the bad wife.
The Hesiodic myth did not, however, completely obliterate the memory of the all-giving goddess Pandora. A scholium to line 971 of Aristophanes' "The Birds" mentions a cult "to Pandora, the earth, because she bestows all things necessary for life". And in fifth-century Athens, Pandora made a prominent appearance in what, at first, appears an unexpected context, in a marble relief or bronze appliqués as a frieze along the base of the "Athena Parthenos", the culminating experience on the Acropolis. Jeffrey M. Hurwit has interpreted her presence there as an "anti-Athena." Both were motherless, and reinforced via opposite means the civic ideologies of patriarchy and the "highly gendered social and political realities of fifth-century Athens"—Athena by rising above her sex to defend it, and Pandora by embodying the need for it. Meanwhile, Pausanias (i.24.7) merely noted the subject and moved on.
Images of Pandora began to appear on Greek pottery as early as the 5th century BC, although identification of the scene represented is sometimes ambiguous. An independent tradition that does not square with any of the Classical literary sources is in the visual repertory of Attic red-figure vase-painters, which sometimes supplements, sometimes ignores, the written testimony; in these representations the upper part of Pandora is visible rising from the earth, "a chthonic goddess like Gaia herself." Sometimes, but not always, she is labeled "Pandora". In some cases the figure of Pandora emerging from the earth is surrounded by figures carrying hammers in what has been suggested as a scene from a satyr play by Sophocles, "Pandora, or The Hammerers", of which only fragments remain. But there have also been alternative interpretations of such scenes.
In a late Pre-Raphaelite painting by John D. Batten, hammer-wielding workmen appear through a doorway, while in the foreground Hephaestus broods on the as yet unanimated figure of “Pandora”. There were also earlier English paintings of the newly created Pandora as surrounded by the heavenly gods presenting gifts, a scene also depicted on ancient Greek pottery. In one case it was part of a decorative scheme painted on the ceiling at Petworth House by Louis Laguerre in about 1720. William Etty’s "Pandora Crowned by the Seasons" of a century later is similarly presented as an apotheosis taking place among the clouds.
In between these two had come James Barry’s huge "Birth of Pandora", on which he laboured for over a decade at the turn of the nineteenth century. Well before that he was working on the design, which was intended to reflect his theoretical writings on the interdependence between history painting and the way it should reflect the ideal state. An early drawing, only preserved now in the print made of it by Luigi Schiavonetti, follows the account of Hesiod and shows Pandora being adorned by the Graces and the Hours while the gods look on. Its ideological purpose, however, was to demonstrate an equal society unified by the harmonious function of those within it. But in the actual painting which followed much later, a subordinated Pandora is surrounded by gift-bearing gods and Minerva stands near her, demonstrating the feminine arts proper to her passive role. The shift is back to the culture of blame whenever she steps outside it.
In the individual representations of Pandora that were to follow, her idealisation is as a dangerous type of beauty, generally naked or semi-naked. She is only differentiated from other paintings or statues of such females by being given the attribute of a jar or, increasingly in the 19th century, a straight-sided box. As well as the many European paintings of her from this period, there are examples in sculptures by Henri-Joseph Ruxthiel (1819), John Gibson (1856), Pierre Loison (1861, see above) and Chauncey Bradley Ives (1871).
There is an additional reason why Pandora should appear nude, in that it was a theological commonplace going back to the early Church Fathers that the Classical myth of Pandora made her a type of Eve. Each is the first woman in the world; and each is a central character in a story of transition from an original state of plenty and ease to one of suffering and death, a transition which is brought about as a punishment for transgression of divine law.
It has been argued that it was as a result of the Hellenisation of Western Asia that the misogyny in Hesiod's account of Pandora began openly to influence both Jewish and then Christian interpretations of scripture. The doctrinal bias against women so initiated then continued into Renaissance times. Bishop Jean Olivier's long Latin poem "Pandora" drew on the Classical account as well as the Biblical to demonstrate that woman is the means of drawing men to sin. Originally appearing in 1541 and republished thereafter, it was soon followed by two separate French translations in 1542 and 1548. At the same period appeared a 5-act tragedy by the Protestant theologian Leonhard Culmann (1498–1568) titled "Ein schön weltlich Spiel von der schönen Pandora" (1544), similarly drawing on Hesiod in order to teach conventional Christian morality.
The equation of the two also occurs in the 1550 allegorical painting by Jean Cousin the Elder, "Eva Prima Pandora" (Eve the first Pandora), in which a naked woman reclines in a grotto. Her right elbow rests on a skull, indicating the bringing of death, and she holds an apple branch in that hand – both attributes of Eve. Her left arm is wreathed by a snake (another reference to the temptation of Eve) and that hand rests on an unstopped jar, Pandora's attribute. Above hangs the sign from which the painting gains its name and beneath it is a closed jar, perhaps the counterpart of the other in Olympus, containing blessings.
In Juan de Horozco's Spanish emblem book, "Emblemas morales" (1589), a motive is given for Pandora's action. Accompanying an illustration of her opening the lid of an urn from which demons and angels emerge is a commentary that condemns “female curiosity and the desire to learn by which the very first woman was deceived”. In the succeeding century that desire to learn was equated with the female demand to share the male prerogative of education. In Nicolas Regnier’s painting “The Allegory of Vanity” (1626), subtitled “Pandora”, it is typified by her curiosity about the contents of the urn that she has just unstopped and is compared to the other attributes of vanity surrounding her (fine clothes, jewellery, a pot of gold coins). Again, Pietro Paolini’s lively Pandora of about 1632 seems more aware of the effect that her pearls and fashionable headgear are making than of the evils escaping from the jar she holds. There is a social message carried by these paintings too, for education, no less than expensive adornment, is only available to those who can afford it.
But an alternative interpretation of Pandora’s curiosity makes it merely an extension of childish innocence. This comes out in portrayals of Pandora as a young girl, as in Walter Crane’s “Little Pandora” spilling buttons while encumbered by the doll she is carrying, in Arthur Rackham’s book illustration and Frederick Stuart Church’s etching of an adolescent girl taken aback by the contents of the ornamental box she has opened. The same innocence informs Odilon Redon’s 1910/12 clothed figure carrying a box and merging into a landscape suffused with light, and even more the 1914 version of a naked Pandora surrounded by flowers, a primaeval Eve in the Garden of Eden. Such innocence, “naked and without alarm” in the words of an earlier French poet, portrays Pandora more as victim of a conflict outside her comprehension than as temptress.
Early dramatic treatments of the story of Pandora are works of musical theatre. "La Estatua de Prometeo" (1670) by Pedro Calderón de la Barca is made an allegory in which devotion to learning is contrasted with the active life. Prometheus moulds a clay statue of Minerva, the goddess of wisdom to whom he is devoted, and gives it life from a stolen sunbeam. This initiates a debate among the gods whether a creation outside their own work is justified; his devotion is in the end rewarded with permission to marry his statue. In this work, Pandora, the statue in question, plays only a passive role in the competition between Prometheus and his brother Epimetheus (signifying the active life), and between the gods and men.
Another point to note about Calderón’s musical drama is that the theme of a statue married by her creator is more suggestive of the story of Pygmalion. The latter is also typical of Voltaire’s ultimately unproduced opera "Pandore" (1740). There too the creator of a statue animates it with stolen fire, but then the plot is complicated when Jupiter also falls in love with this new creation but is prevented by Destiny from consummating it. In revenge the god sends Destiny to tempt this new Eve into opening a box full of curses as a punishment for Earth’s revolt against Heaven.
If Pandora appears suspended between the roles of Eve and of Pygmalion’s creation in Voltaire’s work, in Charles-Pierre Colardeau’s erotic poem "Les Hommes de Prométhée" (1774) she is presented equally as a love-object and in addition as an unfallen Eve:
Having been fashioned from clay and given the quality of “naïve grace combined with feeling”, she is set to wander through an enchanted landscape. There she encounters the first man, the prior creation of Prometheus, and warmly responds to his embrace. At the end the couple quit their marriage couch and survey their surroundings “As sovereigns of the world, kings of the universe”.
One other musical work with much the same theme was Aumale de Corsenville's one-act verse melodrama "Pandore", which had an overture and incidental music by Franz Ignaz Beck. There Prometheus, having already stolen fire from heaven, creates a perfect female, “artless in nature, of limpid innocence”, for which he anticipates divine vengeance. However, his patron Minerva descends to announce that the gods have gifted Pandora with other qualities and that she will become the future model and mother of humanity. The work was performed on 2 July 1789, on the very eve of the French Revolution, and was soon forgotten in the course of the events that followed.
Over the course of the 19th century, the story of Pandora was interpreted in radically different ways by four dramatic authors in four countries. In two of these she was presented as the bride of Epimetheus; in the two others she was the wife of Prometheus. The earliest of these works was the lyrical dramatic fragment by Johann Wolfgang von Goethe, written between 1807 and 1808. Though it bears the title "Pandora", what exists of the play revolves round Epimetheus’ longing for the return of the wife who has abandoned him and has yet to arrive. It is in fact a philosophical transformation of Goethe's passion in old age for a teenaged girl.
Henry Wadsworth Longfellow’s "The Masque of Pandora" dates from 1876. It begins with her creation, her refusal by Prometheus and acceptance by Epimetheus. Then in the latter’s house an “oaken chest, Carven with figures and embossed with gold” attracts her curiosity. After she eventually gives in to temptation and opens it, she collapses in despair and a storm destroys the garden outside. When Epimetheus returns, she begs him to kill her but he accepts joint responsibility. The work was twice used as the basis for operas by Alfred Cellier in 1881 and by Eleanor Everest Freer in 1933. Iconographical elements from the masque also figure in Walter Crane's large watercolour of Pandora of 1885. She is pictured as sprawled over a carved wooden chest on which are embossed golden designs of the three fates who figure as a chorus in Longfellow's scene 3. Outside the palace, a high wind is bending the trees. But on the front of the chest, a medallion showing the serpent wound about the tree of knowledge recalls the old interpretation of Pandora as a type of Eve.
In England the high drama of the incident was travestied in James Robinson Planché’s "Olympic Revels or Prometheus and Pandora" (1831), the first of the Victorian burlesques. It is a costume drama peppered with comic banter and songs during which the gods betroth Pandora to a disappointed Prometheus with “only one little box” for dowry. When she opens it, Jupiter descends to curse her and Prometheus, but Hope emerges from the box and negotiates their pardon.
At the other end of the century, Gabriel Fauré’s ambitious opera "Prométhée" (1900) had a cast of hundreds, a huge orchestra and an outdoor amphitheatre for stage. It was based in part on the "Prometheus Bound" of Aeschylus but was rewritten so as to give the character of Pandore an equal part with his. This necessitated her falling “as if dead” on hearing the judgement against Prométhée in Act 1; a funeral procession bearing her body at the start of Act 2, after which she revives to mourn the carrying out of Prométhée's sentence; while in Act 3 she disobeys Prométhée by accepting a box, supposedly filled with blessings for mankind, and makes the tragedy complete.
The pattern during the 19th century had only repeated that of the nearly three millennia before it. The ancient myth of Pandora never settled into one accepted version, was never agreed to have a single interpretation. It was used as a vehicle to illustrate the prevailing ideologies or artistic fashions of the time and eventually became so worn a coinage that it grew confused with other, sometimes later, stories. Best known in the end for a single metaphorical attribute, the box with which she was not even endowed until the 16th century, depictions of Pandora have been further confused with other holders of receptacles – with one of the trials of Psyche, with Sophonisba about to drink poison or Artemisia with the ashes of her husband. Nevertheless, her very polyvalence has been in the end the guarantor of her cultural survival.
Peremptory plea
In the common law, the peremptory pleas (pleas in bar) are defensive pleas that set out special reasons for which a trial cannot proceed; they serve to bar the case entirely. Pleas in bar may be used in civil or criminal cases; they address the substantial merits of the case.
In a criminal case, the peremptory pleas are the plea of autrefois convict, the plea of autrefois acquit, and the plea of pardon.
A plea of "autrefois convict" (Law French for "previously convicted") is one in which the defendant claims to have been previously convicted of the same offence and that he or she therefore cannot be tried for it again. In the instance where a defendant has been summonsed to both criminal and civil proceedings, a plea of autrefois convict is essentially an application to 'merge' proceedings, giving rise to "res judicata" or a cause of action estoppel in civil proceedings.
A plea of "autrefois acquit" is one in which the defendant claims to have been previously acquitted for the same offence and thus should not be tried again. The plea of autrefois acquit is a form of estoppel by which the Crown cannot reassert the guilt of the accused after they have been acquitted. The plea prevents inconsistent decisions and the reopening of litigation.
The limitations of these pleas have been circumscribed by various legal cases and appeals. In England, Wales and Northern Ireland, significant changes were made by the Criminal Justice Act 2003, by which an acquittal on a serious charge can be quashed and a retrial ordered, if there is "new and compelling evidence" against the acquitted person.
In a civil case, a plea in bar alleges that circumstances exist that serve to block and defeat the plaintiff's case absolutely and entirely. Pleas in bar can include accord and satisfaction or the running of the statute of limitations. A special plea in bar advances new matter, while a general plea in bar denies some material allegation in the complaint.
Pope Urban V
Pope Urban V (1310 – 19 December 1370), born Guillaume de Grimoard, was the head of the Catholic Church from 28 September 1362 until his death in 1370 and was also a member of the Order of Saint Benedict. He was the only Avignon pope to be beatified.
Even after his election as pontiff, he continued to follow the Benedictine Rule, living simply and modestly. His habits did not always gain him supporters who were used to lives of affluence.
Urban V pressed for reform throughout his pontificate and also oversaw the restoration and construction of churches and monasteries. One of the goals he set himself upon his election to the Papacy was the reunion of the Eastern and Western Churches. He came as close as some of his predecessors and successors, but did not succeed.
Guillaume de Grimoard was born in 1310 in the Castle of Grizac in the French region of Languedoc (today part of the commune of Le Pont-de-Montvert, department of Lozère), the second son of Guillaume de Grimoard, Lord of Bellegarde, and of Amphélise de Montferrand. He had two brothers, Étienne and Anglic the future cardinal, and a sister Delphine.
In 1327, Guillaume Grimoard became a Benedictine monk in the small Priory of Chirac, near his home, which was a dependency of the ancient Abbey of St. Victor near Marseille. He was sent to St. Victor for his novitiate. After his profession of monastic vows, he was ordained a priest in his own monastery in Chirac in 1334. He studied literature and law at Montpellier, and then he moved to the University of Toulouse, where he studied law for four years. He earned a doctorate in Canon Law on 31 October 1342.
He was appointed Prior of Nôtre-Dame du Pré (de Priorato) in the diocese of Auxerre by Pope Clement VI, a post he held until his promotion to Saint-Germain en Auxerre in 1352. He began both disciplinary and financial reforms. His new bishop, Jean d'Auxois (1353–1359), however, in concert with the Archbishop of Sens, Guillaume de Melun, made heavy demands on his hospitality; when the Archbishop attempted to impose new exactions and Grimoard resisted, the Archbishop physically abused the Prior, who nonetheless would not submit. Prior Grimoard became Procurator-General for the Order of St. Benedict at the Papal Curia.
He became a noted canonist, teaching at Montpellier, Paris and Avignon. He was appointed by the Bishop of Clermont, Pierre de Aigrefeuille (1349–1357), to be his vicar general, which meant in effect that he ruled the diocese on behalf of the bishop. When Bishop Pierre was transferred to Uzès (1357–1366), Guillaume Grimoard became Vicar General of Uzès.
Guillaume was named abbot of the monastery of Saint-Germain en Auxerre on 13 February 1352 by Pope Clement VI. In 1359 the town and abbey were captured by the English and subjected to heavy imposts.
In the summer of 1352 Pope Clement VI summoned Abbot Guillaume for an assignment. Northern Italy had been in a chaotic state for some time, thanks to the ambitions of the Visconti of Milan, led by Archbishop Giovanni Visconti. He had conquered much of Lombardy, seized the Papal city of Bologna, and was invading the borders of Florentine territory. In order to keep a hold on the territory for the Church, the Pope had hit on the scheme of making Archbishop Visconti his vicar of Bologna for the present. He drew up an agreement on 27 April 1352, which absolved the Visconti of all their transgressions and signed away much of northern Italy. The Pope even made the first payment on the subsidy which he was going to provide them. The Visconti, on their part, had no intention of observing the terms of the pact, one of which was the return of the Legation of Bologna to the Papacy, despite the fine words and promises they made in Avignon. On 26 July, Abbot Grimoard and Msgr. Azzo Manzi da Reggio, the Dean of the Cathedral of Aquileia, were presented with written instructions by Pope Clement to go to northern Italy as apostolic nuncios to deal with the situation. Guillaume was to receive the city of Bologna from the Visconti, who were illegal occupiers, and hand it over to Giovanni Visconti as the papal vicar, and to threaten with ecclesiastical censures any parties who did not adhere to the treaty. This he did on 2 October 1352. Guillaume was allotted 8 gold florins a day for his expenses, his associate Azzo only 4 florins. While he was in Milan he was also able to get the Archbishop to renew the treaty that was expiring with the King and Queen of Sicily. He was back in Avignon in November 1352.
In 1354 Abbot Grimoard was sent to Italy again, this time to Rome, where there was business that needed to be transacted for the Apostolic Camera. There were also serious disorders in the Basilica of St. Peter which needed to be sorted out.
In August 1361, he was elected the abbot of the Abbey of Saint-Victor in Marseille. Despite the appointment, he continued to teach as a professor, at least for the next academic year.
Cardinal Gil Álvarez Carrillo de Albornoz had been sent to Italy in 1353, to bring under control the notorious Giovanni di Vico of Viterbo, as well as the Malatesta of Rimini and the Ordelaffi family of Forlì. In 1360 Abbot Guillaume was sent to assist him by dealing with Archbishop Visconti's nephew and successor, Bernabò Visconti. Their confrontation was so hostile and threatening that the Abbot left immediately and reported back to Pope Innocent the treachery of his vassal. The Pope sent him back to Italy immediately, but happily the utter defeat of Visconti's army which was besieging Bologna by Cardinal Albornoz eased the situation considerably. Nonetheless, immediately after he was elected pope, Grimoard excommunicated Bernabò Visconti. He returned to France, and retired to his castle of Auriol, where he was found on 10 June 1362.
The reason for his retirement to Auriol is not far to seek. The plague was raging in southern France again in 1361 and 1362. Cardinal Pierre des Près died on 16 May 1361; Cardinal Petrus de Foresta died on 7 June 1361; Cardinal Guillaume Court, O.Cist., died on 12 June 1361; Cardinal Guillaume Farinier died on 17 June 1361; Cardinal Petrus Bertrandi died on 13 July 1361; Cardinal Jean de Caraman died on 1 August 1361; Cardinal Bernard de la Tour died on 7 August 1361; Cardinal Francesco degli Atti died on 25 August 1361; and Cardinal Pierre de Cros died in September 1361. In addition it was estimated that some 6,000 persons and more than 100 bishops died in 1361. Cardinal Nicolas Roselli (1357–1362) of Tarragona died at Majorca on 28 March 1362, though not of the plague.
King Louis I of Naples died on 25 May 1362. This set off a power struggle, with Queen Joanna I attempting to get back the power she had lost to her husband, as well as a contest to see who her next husband would be. Abbot Guillaume was summoned to Avignon, where he was on 27 June, and sent to Naples to convey the advice and guidance of the feudal overlord of Naples, Pope Innocent VI.
During his trip to the south, he visited the great Benedictine abbey of Monte Cassino, where he was saddened to see the state into which it had fallen, both physically and organizationally, owing to earthquakes and episcopal neglect. As soon as he became Pope he undertook to repair the situation, and on 31 March 1367 he abolished the diocese of Cassino and restored the monastery to the complete control of its Abbot.
In September 1362, Grimoard was apostolic nuncio in Italy when Pope Innocent VI died. Exactly where he was when the news reached him summoning him to Avignon is unknown. Naples is just a guess; other possibilities are Florence and Lombardy.
Pope Innocent VI died on 12 September 1362. The Conclave to elect his successor opened on 22 September, the Feast of Saint Maurice, in the Apostolic Palace in Avignon. Twenty of the twenty-one cardinals were in attendance. Only Cardinal Albornoz remained at his post in Italy. Of the twenty cardinals eighteen were French in origin, six of them Limousin. Ten of the twenty-one cardinals were papal relatives. The influence of the Limousin cardinals was somewhat diminished since their homeland had recently become subject to English occupation, which frightened the thirteen cardinals who were subjects of the King of France. Both Cardinals Hélie de Talleyrand and Guy de Boulogne considered themselves to be electable.
Matteo Villani, the Florentine chronicler, says that fifteen cardinals were prepared to elect, or actually elected, Hugues Roger, OSB, a Limousin, the brother of Pope Clement VI and Chamberlain of the College of Cardinals. Cardinal Hugues declined the offer. Villani is the only source that reports this version of events. His story, moreover, contradicts the report of Jean de Froissart, who claims that a stalemate developed between Talleyrand and Guy de Boulogne, such that members of neither party could get the required two-thirds of the votes. It was apparently one of the Limousin cardinals, Guillaume d'Aigrefeuille, who directed the attention of the cardinals to Abbot Guillaume Grimoard. On 28 September, they elected Grimoard as the new Pope. He was not initially informed of the result; instead, he was requested to return immediately to Avignon to "consult" with the Conclave. The cardinals feared the reaction of the Romans to the election of another French pope, and so kept the result of the election secret until Grimoard's arrival a month later, at the end of October. The Romans had been clamoring for some time for a Roman, or at least an Italian, pope, and it was feared they would have interfered with Guillaume's travel had they known of his election. Upon his arrival, Grimoard accepted his election and took the pontifical name of Urban V. When asked the reason for the selection of his new name, Grimoard was alleged to have said: "All the popes who have borne this name were saints".
Grimoard was not even a bishop at the time of his election, and had to be consecrated before he could be crowned. This was done on 6 November by Cardinal Audouin Aubert, the Bishop of Ostia, a nephew of Grimoard's predecessor, Innocent VI; the Bishop of Ostia had the traditional right to consecrate a pope a bishop. At the conclusion of the consecration Mass, Urban V was crowned. There is no record of who placed the crown on his head; the right to do so belonged to the cardinal protodeacon, Cardinal Guillaume de la Jugié, a nephew of Pope Clement VI. Urban V was the sixth pope of the Avignon Papacy.
Urban V kept on another papal nephew, Arnaud Aubert, the nephew of Pope Innocent VI, who had been given the very important position of papal chamberlain, the head of the Church's financial department, by his uncle in 1361. He continued in that office throughout the reign of Urban V and into that of Gregory XI, until 1371. In addition to the management of the papal household, the office made Aubert the temporal vicar for the Pope in the diocese of Avignon and the administrator of the Comtat-Venaissin.
The winter of 1363–1364 was so cold, especially in January, February and March, that the Rhone froze over to the extent that people and vehicles could travel across the ice. The Pope, however, announced that he would excommunicate anyone who attempted to do so, fearing that people might accidentally fall in and drown. Near Carcassonne, a man froze to death while travelling on his horse, though the horse made it back to its accustomed stable with the dead man on its back. Many of the poor, women, and children died of the cold.
As pope, Urban V continued to follow the discipline of the Benedictine Rule and to wear his monastic habit. Urban V worked against absenteeism, pluralism and simony, while seeking to improve clerical training and examination. It must be kept in mind, however, that, with the training of a monk, reform was a matter of return to ideal values and principles through discipline, not a matter of striking out with new solutions. With the training of a lawyer, reform was a matter of codifying and enforcing established decisions and precedents.
Pope Urban V introduced considerable reforms in the administration of justice and liberally patronized learning. He founded a university in Hungary. He granted the University of Pavia the status of Studium Generale (14 April 1363). In Toulouse, he granted the Theology Faculty the same rights as possessed by the University of Paris. In Montpellier, he restored the school of medicine and founded the College of Saint Benedict, whose church, decorated with numerous works of art, later became the cathedral of the city. He founded a collegiate church in Quézac, and a church and library in Ispagnac. On a hilltop near Bédouès, the parish in which the Château de Grisac is situated, he built a church where the bodies of his parents were buried, and, we are informed by a papal bull of December 1363, he instituted a college of six canon-priests, along with a deacon and a subdeacon.
Urban V issued a preliminary consent for the establishment of the University of Kraków, which by September 1364 had gained full papal consent. He provided books and the best professors to more than 1,000 students of all classes. Around Rome, he also planted vineyards.
He imposed the penalty of excommunication on anyone who molested the Jews or attempted forcible conversion and baptism.
The great feature of Urban V's reign was the effort to return the papacy to Rome and to suppress its powerful rivals for the temporal sovereignty there. He began by sending his brother, Cardinal Angelicus Grimoard, as legate in northern Italy. In 1362 Urban ordered a crusade to be preached throughout Italy against Bernabò Visconti, Galeazzo II Visconti and their kindred, accused as robbers of the Church's estates. In March 1363 Bernabò was declared a heretic. However, Pope Urban found it necessary to purchase peace in March of the following year, sending the newly created Cardinal Androin de la Roche, former Abbot of Cluny, as apostolic legate to Italy to arrange the business. Then, through the mediation of Emperor Charles IV, Urban lifted his excommunication against Bernabò, obtaining Bologna only after he signed a hasty peace that was highly favorable to Bernabò.
In May 1365 the Emperor Charles visited Avignon, where he appeared with the Pope in full imperial regalia. He then proceeded to Arles, which was one of his domains, where he was crowned King by the Archbishop, Pierre de Cros, OSB.
Urban V's greatest desire was that of a crusade against the Turks. In 1363, King John II of France and Peter I, the King of Cyprus, came to Avignon, and it was decided that there should be a war against the Turks. It was Urban and Peter who were most eager for the crusade; the French were exhausted by recent losses in the Hundred Years' War, and some of their leaders were still being held prisoner in England. The Pope held a special ceremony on Holy Saturday, 1363, and bestowed the crusader's cross on the two kings, and on Cardinal Hélie de Talleyrand as well. John II was appointed Rector and Captain General of the expedition. Cardinal de Talleyrand was appointed apostolic legate for the expedition, but he died on 17 January 1364, before the expedition could set out. Assembling the army proved an impossible task, and King John returned to prison in England. He died in London on 8 April 1364.
King Peter of Cyprus, disappointed by King John's return to captivity in England and the death of Cardinal de Talleyrand, collected whatever soldiers he could, and in 1365 launched a successful attack on Alexandria (11 October 1365). Additional support was not forthcoming, however, and seeing that the enemy vastly outnumbered the crusaders, he ordered the sacking and burning of the city, and then withdrew. He continued to harass the coasts of Syria and Egypt until he was assassinated in 1369. Urban, however, played no part in the crusade or its aftermath.
Amadeus of Savoy and Louis of Hungary also put together a crusade during Urban's reign, in 1366. They were initially successful, and Amadeus even captured Gallipoli, but each was eventually forced to withdraw.
Continued troubles in Italy, as well as pleas from figures such as Petrarch and St. Bridget of Sweden, caused Urban V to set out for Rome, only to find that his Vicar, Cardinal Albornoz, had just died. He conducted the remains of the Cardinal to Assisi, where they were buried in the Basilica of Saint Francis. The Pope reached the City of Rome on 16 October 1367, the first pope in sixty years to set foot in his own diocese. He was greeted by the clergy and people with joy, but despite the satisfaction of being attended by the Emperor Charles IV in St. Peter's, and of placing the crown upon the head of the Empress Elizabeth (1 November 1368), it soon became clear that by changing the seat of his government he had not increased its power. In Rome he was nonetheless able to receive the homage of King Peter I of Cyprus and Queen Joanna I of Naples, and the confession of faith of the Byzantine Emperor John V Palaeologus. Bridget of Sweden, who was living in Rome and attempting to get approval for a new religious order, the Bridgettines, appeared before the Pope at Montefiascone in 1370 as he was preparing to return to France and, in the presence of Cardinal Pierre Roger de Beaufort, the future pope, predicted the death of the Pope if he should leave Rome. He did indeed die soon after his departure.
Unable any longer to resist the urgency of the French cardinals, and despite several cities of the Papal States still being in revolt, Urban V boarded a ship at Corneto for France on 5 September 1370, arriving back at Avignon on the 24th of the same month. A few days later he fell severely ill. Feeling his death approaching, he asked to be moved from the Papal Palace to the nearby residence of his brother, Angelic de Grimoard, whom he had made a cardinal, so that he might be close to those he loved. He died there on 19 December 1370, having been pope for eight years, one month, and nineteen days. His body was initially placed in the Chapel of John XXII in the Cathedral of Notre-Dame des Doms in Avignon. On 31 May 1371 his remains were transferred to the monastery of Saint-Victor in Marseille, where he had built a splendid tomb for himself.
Pope Gregory XI opened the cause of beatification of his predecessor, and Urban V's claimed miracles and his virtues were documented. The cause stalled in Rome in 1379, and in Avignon in 1390 under the orders of the antipope Clement VII; the Western Schism had brought the process to a halt. It was revived centuries later and led to the beatification of Urban V on 10 March 1870 by Pope Pius IX. His feast day, decided upon by a General Chapter of the Benedictine Order held in 1414, is celebrated on 19 December, the day of his death.
Potsdam Conference
The Potsdam Conference was held in Potsdam, Germany, from 17 July to 2 August 1945. (In some older documents, it is also referred to as the Berlin Conference of the Three Heads of Government of the USSR, the USA, and the UK.) The participants were the Soviet Union, the United Kingdom, and the United States, represented respectively by Premier Joseph Stalin, Prime Ministers Winston Churchill and Clement Attlee, and President Harry S. Truman.
They gathered to decide how to administer Germany, which had agreed to unconditional surrender nine weeks earlier, on 8 May 1945 (Victory in Europe Day). The goals of the conference also included the establishment of the postwar order, peace treaty issues, and countering the effects of the war.
Additionally, the Foreign Secretaries of the three Governments, James F. Byrnes, V. M. Molotov, and Anthony Eden, together with the Chiefs of Staff and other advisers, also participated in the Conference. Nine meetings were held from July 17 to July 25, after which the Conference was interrupted for two days while the results of the British general election were announced. On July 28, Clement Attlee, whose Labour Party had defeated Winston Churchill's Conservatives in the election, replaced Churchill as Britain's representative for the remainder of the Conference, accompanied by the new Secretary of State for Foreign Affairs, Ernest Bevin. Four days of further discussion followed. During the Conference there were regular meetings of the heads of the three Governments together with the Foreign Secretaries, and also of the Foreign Secretaries alone; committees appointed by the Foreign Secretaries for preliminary consideration of questions before the Conference also met daily. Important decisions and agreements were reached, and views were exchanged on a plethora of other questions, though consideration of those matters was subsequently continued by the Council of Foreign Ministers established by the Conference. The Potsdam Conference ended having strengthened the collaboration among the three Governments, with renewed confidence that, together with the other United Nations, they would ensure the creation of a just and enduring peace.
A number of changes had taken place in the five months since the Yalta Conference that greatly affected the relationships among the leaders. The Soviet Union had occupied Central and Eastern Europe, and the Red Army effectively controlled the Baltic states, Poland, Czechoslovakia, Hungary, Bulgaria, and Romania, and refugees were fleeing from those countries. Stalin had set up a puppet communist government in Poland, insisted that his control of Eastern Europe was a defensive measure against possible future attacks and claimed that it was a legitimate sphere of Soviet influence.
Also, Britain had a new prime minister. Conservative Party leader Winston Churchill had served as prime minister in a coalition government; his Soviet policy, since the early 1940s, had differed considerably from Roosevelt's; Churchill believed Stalin to be a "devil"-like tyrant who led a vile system. A general election had been held in the UK on 5 July 1945, but the results were delayed to allow the votes of armed forces personnel to be counted in their home constituencies. The outcome became known during the conference, when Labour leader Clement Attlee became the new prime minister.
Then, Roosevelt had died on 12 April 1945, and Vice-President Harry Truman assumed the presidency; his succession saw VE Day (Victory in Europe) within a month and VJ Day (Victory in Japan) on the horizon. During the war and in the name of Allied unity, Roosevelt had brushed off warnings of a potential domination by Stalin in part of Europe, explaining, "I just have a hunch that Stalin is not that kind of a man... I think that if I give him everything I possibly can and ask for nothing from him in return, 'noblesse oblige', he won't try to annex anything and will work with me for a world of democracy and peace".
Truman had closely followed the Allied progress of the war. George Lenczowski noted that "despite the contrast between his relatively modest background and the international glamour of his aristocratic predecessor, [Truman] had the courage and resolution to reverse the policy that appeared to him naive and dangerous", which was "in contrast to the immediate, often "ad hoc" moves and solutions dictated by the demands of the war". With the end of the war, the priority of Allied unity was replaced with the challenge of the relationship between the two emerging superpowers. Both leading powers continued to sustain a cordial relationship to the public, but suspicions and distrust lingered between them.
Truman was much more suspicious of the communists than Roosevelt had been, and he became increasingly suspicious of Soviet intentions under Stalin. He and his advisers saw Soviet actions in Eastern Europe as aggressive expansionism that was incompatible with the agreements that Stalin had committed to at Yalta the previous February. In addition, Truman became aware of possible complications elsewhere when Stalin objected to Churchill's proposal for an early Allied withdrawal from Iran, ahead of the schedule agreed at the Tehran Conference. The Potsdam Conference was the only time that Truman met Stalin in person.
At the Yalta Conference, France had been granted an occupation zone within Germany. France had been a participant in the Berlin Declaration and was to be an equal member of the Allied Control Council. Nevertheless, at the insistence of the Americans, Charles de Gaulle was not invited to Potsdam, just as he had been denied representation at Yalta. The diplomatic slight was a cause of deep and lasting resentment for him. Reasons for the omissions included the longstanding personal mutual antagonism between Roosevelt and De Gaulle, ongoing disputes over the French and American occupation zones and anticipated conflicts of interest over French Indochina, but it also reflected the judgement of both the British and Americans that French aims in respect of many items on the conference's agenda were likely to contradict the Anglo-American agreed objectives.
At the end of the conference, the three heads of government agreed on the following actions. All other issues were to be answered by the final peace conference, which was to be called as soon as possible.
France, having been excluded from the conference, resisted implementing the Potsdam agreements within its occupation zone. In particular, the French refused to resettle any expelled Germans from the east. Moreover, the French did not accept any obligation to abide by the agreements in the proceedings of the Allied Control Council; in particular, they resisted all proposals to establish common policies and institutions across Germany as a whole and anything that they feared could lead to the emergence of an eventual unified German government.
The Soviet Government proposed the extension of the authority of the Austrian Provisional Government, and the Allies agreed to examine the proposal after the entry of British and American forces into Vienna.
The Soviet Government proposed to the Conference that the territorial questions left in suspense should be settled permanently once peace was concluded in those regions. More specifically, the proposal concerned the section of the western frontier of the Union of Soviet Socialist Republics adjacent to the Baltic Sea. This frontier was to pass from a point on the eastern shore of the Bay of Danzig to the east, north of Braunsberg and Goldap, to the meeting point of the frontiers of Lithuania, the Polish Republic, and East Prussia.
Finally, after the Conference had considered the recommendation of the Soviet Union thoroughly, it was agreed that the city of Königsberg and the area adjacent to it should be transferred to the Soviet Union, as described above.
President Harry S. Truman and Prime Minister Winston Churchill guaranteed that they would support this proposal of the Conference at the forthcoming peace settlement.
The Soviet Union made another proposal to the Conference concerning the mandated territories, as defined at the Crimea Conference and in the Charter of the United Nations.
After various opinions on this question had been heard and discussed, the Foreign Ministers agreed that the preparation of a peace treaty for Italy, combined with the disposition of any former Italian territories, should be undertaken at once. In September the Council of Foreign Ministers would examine the question of the Italian territories.
At the Conference, the leaders agreed on the removal of Germans from Poland, Czechoslovakia, and Hungary. The three governments were convinced that the transfer to Germany of the German populations, or elements thereof, remaining in Poland, Czechoslovakia, and Hungary should begin as soon as possible, and they emphasized that the transfers should be carried out in an orderly and humane manner. The leaders decided that the Allied Control Council in Germany should first deal with the matter, giving priority to the equitable distribution of the Germans among the zones of occupation. They instructed their representatives on the Control Council to report to their governments the number of people who had already entered Germany from the eastern countries, and to estimate the rate at which further transfers could proceed, having regard to Germany's capacity to absorb them. The governments of the eastern countries were informed of these conclusions and were requested meanwhile to suspend further expulsions. The Big Three were then to consider the reports from the Control Council and examine the matter in detail.
The Big Three noted that the Soviet representatives on the Allied Control Commissions in Romania, Bulgaria, and Hungary had communicated to their United Kingdom and United States colleagues proposals for improving the work of the Control Commissions, now that hostilities in Europe had ended. The three leaders then agreed on the revision of the procedures of the Allied Control Commissions in these countries, taking into consideration the interests and responsibilities of their three Governments, which together had presented the terms of armistice to the respective countries, and accepting the agreed proposals as a basis.
The Conference agreed to establish a Council of Foreign Ministers representing the five principal powers, to continue the necessary preliminary work for the peace settlements and to take up other matters which might from time to time be referred to the Council by agreement of the governments participating in it. The establishment of this Council did not contradict the agreement of the Crimea Conference that there should be periodic meetings among the foreign secretaries of the three Governments. According to the text of the agreement for the establishment of the Council, the following was decided:
All in all, the Conference agreed to apply a common policy for determining, at the earliest opportunity, the terms of peace. The statement was as follows:
In general, the three Governments considered it desirable that the abnormal position of Italy, Bulgaria, Finland, Hungary, and Romania should be resolved by the end of the negotiations, and they believed that the other Allies would share their point of view.
As Italy was one of the most pressing issues requiring attention from the new Council of Foreign Ministers, the three Governments turned their attention to the preparation of a peace treaty for that country. Italy had been the first of the Axis powers to break with Germany and to take part in the Allied operations against Japan.
Italy had freed herself from the fascist regime and made significant progress towards the re-establishment of a democratic government. If Italy ended up with a recognized, democratic government, it would be much easier for the USA, Great Britain, and the Soviet Union to support, as they desired, Italy's admission to the United Nations.
The Council of Foreign Ministers had also the duty to examine and prepare the peace treaties for Bulgaria, Finland, Hungary and Romania.
The conclusion of peace treaties with recognized, democratic governments in these four states would also allow the three Governments to support their applications for membership of the United Nations. Moreover, the Big Three agreed to examine in the near future, once the peace negotiations were concluded, the restoration of diplomatic relations with Finland, Romania, Bulgaria, and Hungary.
The three Governments were confident that, in view of the new situation in Europe after the end of the Second World War, representatives of the Allied press would enjoy freedom to report on developments in Romania, Bulgaria, Hungary, and Finland.
Article 4 of the Charter of the United Nations provided:
1. "Membership in the United Nations is open to all other peace-loving States who accept the obligations contained in the present Charter and, in the judgment of the organization, are able and willing to carry out these obligations;"
2. "The admission of any such state to membership in the United Nations will be effected by a decision of the General Assembly upon the recommendation of the Security Council."
The leaders declared that they were willing to support any request for membership from states which had remained neutral during the war and which fulfilled the Charter's requirements.
Nevertheless, the three Governments made clear that they were entirely unwilling to support any application for membership from the Spanish Government, which had been established with the support of the Axis powers. In view of its origins, its nature, and its close association with the Axis powers, the Conference held that such a membership could not be justified.
One of those at the conference was William D. Leahy. The Fleet Admiral in the US Navy had served as advisor to Roosevelt during the Yalta Conference and to Truman during the Potsdam Conference. Leahy had a lengthy military background, having served as the most senior American military officer on active duty during the Second World War. He later stated in his book, "I Was There: The Personal Story of the Chief of Staff to Presidents Roosevelt and Truman Based on His Notes and Diaries Made at the Time," that the Potsdam Conference was one of the most frustrating of all the conferences because of the hostile relations between the Soviet Union on the one hand and Britain and the United States on the other. Throughout the work, he refers to the conference by its code name, Terminal. Later in the book, he describes a tour of Berlin that he took with Truman: "I never saw such destruction. I don't know whether they learned anything from it or not".
In addition to the Potsdam Agreement, on 26 July, Churchill; Truman; and Chiang Kai-shek, Chairman of the Nationalist Government of China (the Soviet Union was not at war with Japan) issued the Potsdam Declaration, which outlined the terms of surrender for Japan during World War II in Asia.
Truman had mentioned an unspecified "powerful new weapon" to Stalin during the conference. Towards the end of the conference, on July 26, the Potsdam Declaration gave Japan an ultimatum to surrender unconditionally or meet "prompt and utter destruction", which did not mention the new bomb but promised that "it was not intended to enslave Japan". The Soviet Union was not involved in that declaration since it was still neutral in the war against Japan. Japanese Prime Minister Kantarō Suzuki did not respond, which was interpreted as a declaration that the Empire of Japan had ignored the ultimatum. As a result, the United States dropped atomic bombs on Hiroshima on 6 August 1945 and Nagasaki on 9 August. The justifications used were that both cities were legitimate military targets and that it was necessary to end the war swiftly and to preserve American lives.
When Truman informed Stalin of the atomic bomb, he said that the United States "had a new weapon of unusual destructive force", but Stalin had full knowledge of the atomic bomb's development because of Soviet spy networks inside the Manhattan Project, and he told Truman at the conference to "make good use of this new addition to the Allied arsenal".
The Soviet Union converted the other countries of Eastern Europe into satellite states within the Eastern Bloc, such as the People's Republic of Poland, the People's Republic of Bulgaria, the People's Republic of Hungary, the Czechoslovak Socialist Republic, the People's Republic of Romania, and the People's Republic of Albania. Many of these countries had seen failed socialist revolutions prior to World War II.
Pope Urban VI
Pope Urban VI (; c. 1318 – 15 October 1389), born Bartolomeo Prignano (), was the Roman claimant to the headship of the Catholic Church from 8 April 1378 to his death. He was the most recent pope to be elected from outside the College of Cardinals. His pontificate began shortly after the end of the Avignon Papacy. It was marked by immense conflict between rival factions as part of the Western Schism, with much of Europe recognizing Clement VII, based in Avignon, as the true pope.
Born in Itri, Prignano was a devout monk and learned casuist, trained at Avignon. On 21 March 1364 he was consecrated Archbishop of Acerenza in the Kingdom of Naples. He became Archbishop of Bari in 1377.
Prignano had developed a reputation for simplicity and frugality, and a head for business when acting as vice-chancellor. He also demonstrated a penchant for learning, and, according to Cristoforo di Piacenza, he had no family allies in an age of nepotism, although once in the papal chair he elevated four cardinal-nephews and sought to place one of them in control of Naples. His great faults undid his virtues; Ludwig von Pastor summed up his character: "He lacked Christian gentleness and charity. He was naturally arbitrary and extremely violent and imprudent, and when he came to deal with the burning ecclesiastical question of the day, that of reform, the consequences were disastrous."
On the death of Gregory XI (27 March 1378), a Roman mob surrounded the conclave to demand a Roman pope. With the cardinals acting in haste and under great pressure to avoid the return of the papal seat to Avignon, Prignano was unanimously chosen Pope on 8 April 1378, as a candidate acceptable to the disunited majority of French cardinals, taking the name Urban VI. Not being a cardinal, he was not well known. Immediately following the conclave, most of the cardinals fled Rome before the mob could learn that the man chosen was not a Roman (though not a Frenchman either), but a subject of Queen Joan I of Naples.
Though the coronation was carried out in scrupulous detail, leaving no doubt as to the legitimacy of the new pontiff, the French were not pleased with this outcome and began immediately to conspire against the new Pope. Urban VI did himself no favors: whereas the cardinals had expected him to be pliant, he was considered arrogant and angry by many of his contemporaries. Dietrich of Nieheim reported the opinion of the cardinals that his elevation had turned his head, and Froissart, Leonardo Aretino, Tommaso de Acerno and St. Antoninus of Florence recorded similar conclusions.
Immediately following his election, Urban began preaching intemperately to the cardinals (some of whom thought the delirium of power had made Urban mad and unfit for rule), insisting that the business of the Curia should be carried on without gratuities and gifts, forbidding the cardinals to accept annuities from rulers and other lay persons, condemning the luxury of their lives and retinues, and the multiplication of benefices and bishoprics in their hands. Nor would he remove again to Avignon, thus alienating King Charles V of France.
The cardinals were mortally offended. Five months after his election, the French cardinals met at Anagni and invited Urban, who declined, realizing that he would be seized and perhaps slain. In his absence, they issued a manifesto of grievances on 9 August which declared his election invalid since they had been cowed by the mob into electing an Italian. Letters to the missing Italian cardinals followed on 20 August declaring the papal throne vacant ("sede vacante"). Then at Fondi, secretly supported by the king of France, the French cardinals proceeded to elect Robert of Geneva as Pope on 20 September. Robert, a militant cleric who had succeeded Albornoz as commander of the papal troops, took the name Clement VII, beginning the Western Schism, which divided Catholic Christendom until 1417.
Urban was declared excommunicated by the French antipope and was called "the Antichrist", while Catherine of Siena, defending Pope Urban, called the cardinals "devils in human form." Coluccio Salutati identified the political nature of the withdrawal: "Who does not see," the Chancellor openly addressed the French cardinals, "that you seek not the true pope, but opt solely for a Gallic pontiff." Opening rounds of argument were embodied in John of Legnano's defense of the election, "De fletu ecclesiæ," written and incrementally revised between 1378 and 1380, which Urban caused to be distributed in multiple copies, and in the numerous rebuttals that soon appeared. Events overtook the rhetoric, however; 26 new cardinals were created in a single day, and by an arbitrary alienation of the estates and property of the church, funds were raised for open war. At the end of May 1379 Clement went to Avignon, where he was more than ever at the mercy of the king of France. Louis I, Duke of Anjou, was granted a phantom kingdom of Adria to be carved out of papal Emilia and Romagna, if he could unseat the pope at Rome.
Meanwhile, the War of the Eight Saints, carried on with spates of unprecedented cruelty to civilians, was draining the resources of Florence, though the city ignored the interdict placed upon it by Gregory, declared its churches open, and sold ecclesiastical property for 100,000 florins to finance the war. Bologna had submitted to the Church in August 1377, and Florence signed a treaty at Tivoli on 28 July 1378 at a cost of 200,000 florins indemnity extorted by Urban for the restitution of church properties, receiving in return the papal favor and the lifting of the disregarded interdict.
Urban's erstwhile patroness, Queen Joan I of Naples, deserted him in the late summer of 1378, in part because her former archbishop had become her feudal suzerain. Urban now lost sight of the larger issues and began to commit a series of errors. He turned upon his powerful neighbor Joan, excommunicated her as an obstinate partisan of Clement, and permitted a crusade to be preached against her. Soon her enemy and cousin, the "crafty and ambitious" Charles III was made King of Naples on 1 June 1381, and was crowned by Urban. Joan's authority was declared forfeit, and Charles murdered her in 1382. "In return for these favours, Charles had to promise to hand over Capua, Caserta, Aversa, Nocera, and Amalfi to the pope's nephew, a thoroughly worthless and immoral man."
Once ensconced at Naples, Charles found his new kingdom invaded by Louis of Anjou and Amadeus VI of Savoy; hard-pressed, he reneged on his promises. In Rome, the Castel Sant'Angelo was besieged and taken, and Urban was forced to flee. In the fall of 1383 he was determined to go to Naples and press Charles in person. There he found himself virtually a prisoner. After a first reconciliation, with the death of Louis (20 September 1384), Charles found himself freer to resist Urban's feudal pretensions, and relations took a turn for the worse. Urban was shut up in Nocera, from the walls of which he daily fulminated his anathemas against his besiegers, with bell, book and candle; a price was set on his head.
Rescued by two Neapolitan barons who had sided with Louis, Raimondello Orsini and Tommaso di Sanseverino, he succeeded after six months of siege in making his escape to Genoa with six galleys sent to him by the doge Antoniotto Adorno. Several among his cardinals who had been shut up in Nocera with him were determined to make a stand, proposing that the Pope, due to incapacity and obstinacy, be put in the charge of one of the cardinals. Urban had them seized, tortured and put to death, "a crime unheard of through the centuries" the chronicler Egidio da Viterbo remarked.
Urban's support had dwindled to the northern Italian states, Portugal, England, and Emperor Charles IV, who brought with him the support of most of the princes and abbots of Germany.
On the death of Charles of Naples on 24 February 1386, Urban moved to Lucca in December of the same year. The Kingdom of Naples was contested between a party favouring Charles's son Ladislaus and one favouring Louis II of Anjou. Urban contrived to take advantage of the anarchy which had ensued (as well as of the presence of the feeble Maria as Queen of Sicily) to seize Naples for his nephew Francesco Moricotti Prignani. In the meantime he was able to return Viterbo and Perugia to Papal control.
In August 1388 Urban moved from Perugia with thousands of troops. To raise funds he had proclaimed a Jubilee to be held in 1390. At the time of the proclamation, only 38 years had elapsed since the previous Jubilee, which was celebrated under Clement VI. During the march, Urban fell from his mule at Narni and had to recover in early October in Rome, where he was able to oust the communal rule of the "banderesi" and restore the papal authority. He died soon afterwards, likely of injuries caused by the fall, but not without rumors of poisoning. He was succeeded by Boniface IX.
During the reconstruction of Saint Peter's Basilica, Urban's remains were almost dumped out to be destroyed so his sarcophagus could be used to water horses. The sarcophagus was saved only when church historian Giacomo Grimaldi arrived and, realizing its importance, ordered it preserved.
Pope Urban VII
Pope Urban VII (4 August 1521 – 27 September 1590), born Giovanni Battista Castagna, was head of the Catholic Church and ruler of the Papal States from 15 to 27 September 1590. His twelve-day papacy was the shortest in history.
Giovanni Battista Castagna was born in Rome in 1521 to a noble family as the son of Cosimo Castagna of Genoa and Costanza Ricci Giacobazzi of Rome.
Castagna studied in universities all across Italy and obtained a doctorate in civil law and canon law when he finished his studies at the University of Bologna. Soon after, he became auditor to his uncle, Cardinal Girolamo Verallo, whom he accompanied as datary on a papal legation to France. He served as a constitutional lawyer and entered the Roman Curia during the pontificate of Pope Julius III as the Referendary of the Apostolic Signatura. Castagna was chosen to be the new Archbishop of Rossano on 1 March 1553, and he quickly received all the minor and major orders, culminating in his ordination to the priesthood on 30 March 1553 in Rome. He then received episcopal consecration a month later at the home of Cardinal Verallo.
He served as the Governor of Fano from 1555 to 1559 and later served as the Governor of Perugia and Umbria from 1559 to 1560. During the reign of Pius IV he settled satisfactorily a long-standing boundary dispute between the inhabitants of Terni and Spoleto. Castagna would later participate in the Council of Trent from 1562 to 1563 and served as the president of several conciliar congregations. He was appointed as the Apostolic Nuncio to Spain in 1565 and served there until 1572, resigning his post from his archdiocese a year later. He also served as the Governor of Bologna from 1576 to 1577. Among other positions, he was the Apostolic Nuncio to Venice from 1573 to 1577 and served also as the Papal Legate to Flanders and Cologne from 1578 to 1580.
Pope Gregory XIII elevated him to the cardinalate on 12 December 1583 and he was appointed as the Cardinal-Priest of San Marcello.
After the death of Pope Sixtus V, a conclave was convoked to elect a successor. Ferdinando I de' Medici, Grand Duke of Tuscany, had been appointed a cardinal at the age of fourteen but was never ordained to the priesthood. At the age of thirty-eight, he resigned the cardinalate upon the death of his older brother Francesco in 1587, in order to succeed to the title. (There were suspicions that Francesco and his wife died of arsenic poisoning after having dined at Ferdinando's Villa Medici, although one story has Ferdinando as the intended target of his sister-in-law.) Ferdinando's foreign policy attempted to free Tuscany from Spanish domination. He was consequently opposed to the election of any candidate supported by Spain. He persuaded Cardinal Alessandro Peretti di Montalto, grand-nephew of Sixtus V, to switch his support from Cardinal Marco Antonio Colonna, which brought with it the support of the younger cardinals appointed by the late Sixtus.
Castagna, a seasoned diplomat of moderation and proven rectitude, was elected as pope on 15 September 1590 and selected the pontifical name of "Urban VII".
Urban VII's short passage in office gave rise to the world's first known public smoking ban, as he threatened to excommunicate anyone who "took tobacco in the porchway of or inside a church, whether it be by chewing it, smoking it with a pipe or sniffing it in powdered form through the nose".
Urban VII was known for his charity to the poor. He subsidized Roman bakers so they could sell bread under cost, and restricted the spending on luxury items for members of his court. He also subsidized public works projects throughout the Papal States. Urban VII was strictly against nepotism and he forbade it within the Roman Curia.
Urban VII died on 27 September 1590, shortly before midnight, of malaria in Rome. He was buried in the Vatican. The funeral oration was delivered by Pompeo Ugonio. His remains were later transferred to the church of Santa Maria sopra Minerva on 21 September 1606.
His estate was valued at 30,000 scudi and was bequeathed to the Archconfraternity of the Annunciation to use as dowries for poor young girls.
Password
A password, sometimes called a passcode, is a memorized secret, typically a string of characters, used to confirm the identity of a user. Using the terminology of the NIST Digital Identity Guidelines, the secret is memorized by a party called the "claimant" while the party verifying the identity of the claimant is called the "verifier". When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity.
In general, a password is an arbitrary string of characters including letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called a personal identification number (PIN).
Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called a passphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.
Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password or "watchword", and would only allow a person or group to pass if they knew the password. Polybius describes the system for the distribution of watchwords in the Roman military as follows:
The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.
Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—"flash"—which was presented as a challenge, and answered with the correct response—"thunder". The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.
Passwords have been used with computers since the earliest days of computing. The Compatible Time-Sharing System (CTSS), an operating system introduced at MIT in 1961, was the first computer system to implement password login. CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy." In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.
In modern times, user names and passwords are commonly used by people during a log in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.
Generally, the easier a password is for its owner to remember, the easier it is for an attacker to guess. However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system. Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.
In "The Memorability and Security of Passwords", Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords.
Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method, but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions which are well known to attackers. Similarly typing the password one keyboard row higher is a common trick known to attackers.
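The "128 times harder" figure above is easy to check: freeing the case of each of 7 letters doubles the search space per letter, for a factor of 2^7. A small illustrative sketch (the `entropy_bits` helper is hypothetical, for demonstration only):

```python
import math

# Letting each of 7 letters be upper- or lowercase multiplies the search
# space by 2 per letter: 2**7 = 128.
mixed_case_multiplier = 2 ** 7
print(mixed_case_multiplier)  # 128

# If the user merely capitalises one letter, an attacker who tries those
# variants first faces only ~8 candidates per base word (each position
# capitalised, plus all-lowercase).
single_capital_variants = 7 + 1

# Entropy in bits of a length-n password over an alphabet of size k.
def entropy_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)

print(round(entropy_bits(7, 26), 1))  # lowercase only: ~32.9 bits
print(round(entropy_bits(7, 52), 1))  # mixed case: exactly 7 bits more
```

The 7-bit gap between the last two figures is the same factor of 128 expressed in entropy terms.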
In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):
The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes. See password strength and computer security for more information.
Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.
Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token. Less extreme measures include extortion, rubber hose cryptanalysis, and side channel attack.
Some specific password management issues that must be considered when choosing and handling a password follow.
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords, if they have been well chosen and are not easily guessed.
Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web-server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running.
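The gap between the two attack modes is easy to quantify. The rates below are illustrative assumptions, not measurements, but they show why offline guessing is so much more dangerous:

```python
# Illustrative guessing rates (assumptions, not benchmarks):
online_rate = 10               # guesses/second against a rate-limited server
offline_rate = 10_000_000_000  # guesses/second against a stolen fast-hash file

keyspace = 26 ** 8  # all 8-character lowercase passwords: ~2.1e11 candidates

seconds_per_year = 365 * 24 * 3600
print(keyspace / online_rate / seconds_per_year)  # centuries of online guessing...
print(keyspace / offline_rate)                    # ...versus seconds offline
</```

Under these assumed rates, the same keyspace takes roughly 660 years online but under half a minute offline.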
Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high rate guessing. Lists of common passwords are widely available and can make password attacks very efficient. (See Password cracking.) Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks. See key stretching.
An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner. Attackers may conversely use knowledge of this mitigation to implement a denial of service attack against the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage via social engineering.
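The scheme described above can be sketched as follows; the thresholds (5 consecutive, 30 cumulative) are taken from the text, while the class and method names are hypothetical:

```python
# Sketch of a guess-limiting policy: lock after a run of consecutive failures,
# force a password change after a larger cumulative count of failures.
class LockoutPolicy:
    def __init__(self, consecutive_limit=5, cumulative_limit=30):
        self.consecutive_limit = consecutive_limit
        self.cumulative_limit = cumulative_limit
        self.consecutive = 0
        self.cumulative = 0

    def record_attempt(self, success):
        if success:
            self.consecutive = 0  # a good guess resets the streak...
            return "ok"
        self.consecutive += 1
        self.cumulative += 1      # ...but not the running total
        if self.consecutive >= self.consecutive_limit:
            return "locked: reset required"
        if self.cumulative >= self.cumulative_limit:
            return "change password"
        return "retry"
```

The cumulative counter is what defeats the interspersing attack: an attacker cannot reset it by slipping good guesses between bad ones, since only the consecutive streak resets on success.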
Some computer systems store user passwords as plaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure don't store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function.
Roger Needham invented the now common approach of storing only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users. MD5 and SHA1 are frequently used cryptographic hash functions but they are not recommended for password hashing unless they are used as part of a larger construction such as in PBKDF2.
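A minimal sketch of salted, stretched password storage and verification, using Python's standard library (PBKDF2 here stands in for whatever scheme a real system uses; the iteration count and salt length are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    # A fresh random salt per user defeats precomputed hash tables and
    # prevents one cracking run from covering all users at once.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=100_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking a timing side channel.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

Note that only the salt and digest are stored; the plaintext password never needs to be kept.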
The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.
The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted. If an attacker gains access to the password file, then if it is stored as plain text, no cracking is necessary. If it is hashed but not salted then it is vulnerable to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted then if the attacker gets the decryption key along with the file no cracking is necessary, while if he fails to get the key cracking is not possible. Thus, of the common storage formats for passwords only when passwords have been salted and hashed is cracking both necessary and possible.
If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user.
Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet. The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words or that use easily guessable patterns.
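A toy illustration of the offline dictionary attack described above, run against unsalted SHA-256 hashes (the word list and hash choice are purely illustrative):

```python
import hashlib

# A tiny stand-in for the large wordlists available online.
wordlist = ["123456", "password", "letmein", "dragon"]

def crack(target_hash, words):
    # Hash each candidate and compare against the leaked hash.
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

leaked = hashlib.sha256(b"letmein").hexdigest()
print(crack(leaked, wordlist))  # letmein
```

Against a real leaked hash file, the same loop simply runs over millions of candidates per second, which is why dictionary words fall almost immediately.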
A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems. The crypt algorithm used a 12-bit salt value so that each user's hash was unique and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks. The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt which have large salts and an adjustable cost or number of iterations.
A poorly designed hash function can make attacks feasible even if a strong password is chosen. See LM hash for a widely deployed, and insecure, example.
Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packeted data over the Internet, anyone able to watch the packets containing the logon information can snoop with a very low probability of detection.
Email is sometimes used to distribute passwords but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored on there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems.
Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text.
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use; see cryptography.
Unfortunately, there is a conflict between stored hashed-passwords and hash-based challenge-response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
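This weakness, sometimes called "pass the hash", can be demonstrated in a few lines: when the stored hash serves as the shared secret in a challenge-response exchange, knowing the hash is as good as knowing the password. The sketch below is a generic HMAC exchange, not any specific system's protocol:

```python
import hashlib
import hmac
import os

# What the server keeps on file: a hash of the password, not the password.
stored_hash = hashlib.sha256(b"hunter2").digest()

def respond(secret, challenge):
    # Both sides answer a random challenge keyed by the shared secret.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)

# A legitimate client derives the secret from the password it knows...
client_answer = respond(hashlib.sha256(b"hunter2").digest(), challenge)

# ...but an attacker who stole only the stored hash answers identically.
attacker_answer = respond(stored_hash, challenge)

print(client_answer == attacker_answer)  # True: the hash alone suffices
```

The original password never enters the attacker's computation, which is exactly the limitation the text describes.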
Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it.
Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the unhashed password is required to gain access.
Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database and if the new password is given to a compromised employee, little is gained. Some web sites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability.
Identity management systems are increasingly used to automate issuance of replacements for lost passwords, a feature called self service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).
Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst. There is often an increase in the people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable. Because of these issues, there is some debate as to whether password aging is effective. Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse.
Single passwords are also much less convenient to change because many people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Separate logins are also often used for accountability, for example to know who changed a piece of data.
Common techniques used to improve the security of computer systems protected by a password include:
For example, "Ten Windows Password Myths" notes that older NT dialog boxes limited passwords to a maximum of 14 characters.
Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.
It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, because an attacker need only compromise a single site in order to gain access to other sites the victim uses. This problem is exacerbated by the reuse of usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimised by using mnemonic techniques, writing passwords down on paper, or using a password manager.
It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts. A similar argument was made by Forbes: do not change passwords as often as many "experts" advise, because of the same limitations in human memory.
Historically, many security experts asked people to memorize their passwords: "Never write down a password". More recently, many security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.
Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single master password.
According to a survey by the University of London, one in ten people are now leaving their passwords in their wills to pass on this important information when they die. One third of people, according to the poll, agree that their password protected data is important enough to pass on in their will.
Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code which must be entered in addition to a password. More sophisticated factors include such things as hardware tokens and biometric security.
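As an illustration, the time-based one-time codes used in many such schemes can be generated with nothing more than an HMAC, following RFC 4226 (HOTP) and RFC 6238 (TOTP). A minimal sketch in Python, using only the standard library (the secret key shown is the RFC test key, not a real credential):

```python
import hmac
import hashlib
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the truncation offset
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time."""
    t = int((time.time() if at is None else at) // period)
    return hotp(key, t, digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

A server and an authenticator app sharing the same secret compute the same six-digit code for the current 30-second window, so the code proves possession of the key in addition to knowledge of the password.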
Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters), prohibited elements (e.g., use of one's own name, date of birth, address, telephone number). Some governments have national authentication frameworks that define requirements for user authentication to government services, including requirements for passwords.
Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by the National Institute of Standards and Technology (NIST), authored by Bill Burr. It originally proposed the practice of using numbers, obscure characters and capital letters and updating regularly. In a 2017 "Wall Street Journal" article, Burr reported that he regretted these proposals and that recommending them had been a mistake.
According to a 2017 rewrite of this NIST report, many websites have rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts. The NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd". A user prevented from using the password "password" may simply choose "Password1" if required to include a number and uppercase letter. Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack.
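A policy along the lines NIST now recommends, checking length and a blocklist rather than composition rules, can be sketched as follows. The blocklist here is a tiny illustrative sample; real deployments check candidates against large corpora of breached passwords:

```python
# Illustrative blocklist; real systems use large breached-password datasets.
COMMON_PASSWORDS = {"password", "password1", "123456", "qwerty", "letmein"}

def acceptable(password: str, min_length: int = 8, max_length: int = 64) -> bool:
    """NIST SP 800-63B style check: enforce length and a blocklist,
    rather than composition rules or forced periodic changes."""
    if not (min_length <= len(password) <= max_length):
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(acceptable("Password1"))                      # blocked: common password
print(acceptable("correct horse battery staple"))   # accepted: long passphrase
```

Note that "Password1" passes typical composition rules (capital letter, digit) yet fails the blocklist, while a long passphrase with no special characters passes.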
Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren’t fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good."
Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.
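The core of a dictionary attack is simple to sketch. The example below assumes the attacker has obtained a salted, iterated password hash (here PBKDF2, a common storage scheme) and tests each candidate word; the salt, password, and wordlist are illustrative:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Salted, iterated hash, as a server might store it."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

def dictionary_attack(target: bytes, salt: bytes, wordlist, iterations: int = 100_000):
    """Hash every candidate word with the known salt; return the match, or None."""
    for word in wordlist:
        if hash_password(word, salt, iterations) == target:
            return word
    return None

salt = os.urandom(16)
stored = hash_password("sunshine", salt)  # a weak, dictionary-word password
print(dictionary_attack(stored, salt, ["letmein", "dragon", "sunshine"]))
```

The iteration count exists precisely to slow this loop down: each extra factor of ten in iterations multiplies the attacker's cost by the same factor.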
Password strength is a measure of how difficult a password is to guess or discover, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.
Passwords easily discovered are termed "weak" or "vulnerable"; passwords very difficult or impossible to discover are considered "strong". There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain; some of which use password design vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.
Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort. According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006. He also reported that the single most common password was "password1", confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.)
The numerous ways in which permanent or semi-permanent passwords can be compromised has prompted the development of other techniques. Unfortunately, some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative. A 2012 paper examines why passwords have proved so hard to supplant (despite numerous predictions that they would soon be a thing of the past); in examining thirty representative proposed replacements with respect to security, usability and deployability they conclude "none even retains the full set of benefits that legacy passwords already provide."
That "the password is dead" is a recurring idea in computer security. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by numerous people at least since 2004. Notably, Bill Gates, speaking at the 2004 RSA Conference predicted the demise of passwords saying "they just don't meet the challenge for anything you really want to secure." In 2011 IBM predicted that, within five years, "You will never need a password again." Matt Honan, a journalist at Wired, who was the victim of a hacking incident, in 2012 wrote "The age of the password has come to an end." Heather Adkins, manager of Information Security at Google, in 2013 said that "passwords are done at Google." Eric Grosse, VP of security engineering at Google, states that "passwords and simple bearer tokens, such as cookies, are no longer sufficient to keep users safe." Christopher Mims, writing in the Wall Street Journal said the password "is finally dying" and predicted their replacement by device-based authentication.
Avivah Litan of Gartner said in 2014 "Passwords were dead a few years ago. Now they are more than dead."
The reasons given often include reference to the usability as well as security problems of passwords.
The claim that "the password is dead" is often used by advocates of alternatives to passwords, such as biometrics, two-factor authentication or single sign-on. Many initiatives have been launched with the explicit goal of eliminating passwords. These include Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals. Jeremy Grant, head of NSTIC initiative (the US Dept. of Commerce National Strategy for Trusted Identities in Cyberspace), declared "Passwords are a disaster from a security perspective, we want to shoot them dead." The FIDO Alliance promises a "passwordless experience" in its 2015 specification document.
In spite of these predictions and efforts to replace them passwords still appear as the dominant form of authentication on the web. In "The Persistence of Passwords," Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.
They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."
Following the work of Herley and van Oorschot, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security. (The technical report is an extended version of the peer-reviewed paper by the same name.) Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, while "every" scheme does worse than passwords on deployability. The authors conclude with the following observation: “Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery.”
Portable Network Graphics
Portable Network Graphics (PNG, officially pronounced "ping", though commonly pronounced by spelling out the letters) is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF).
PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of "chunks", encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.
PNG files use the file extension ".png" (or ".PNG") and are assigned MIME media type "image/png".
PNG was published as informational RFC 2083 in March 1997 and as an ISO/IEC 15948 standard in 2004.
The motivation for creating the PNG format was the realization, in early 1995, that the Lempel–Ziv–Welch (LZW) data compression algorithm used in the Graphics Interchange Format (GIF) format was patented by Unisys. There were also other problems with the GIF format that made a replacement desirable, notably its limit of 256 colors at a time when computers with far more advanced displays were becoming common.
A January 1995 precursory discussion thread, on the Usenet newsgroup "comp.graphics" with the subject "Thoughts on a GIF-replacement file format", had many propositions which would later be part of the PNG file format. In that thread, Oliver Fromme, author of the popular JPEG viewer QPEG, proposed the PING name, eventually becoming PNG, a recursive acronym meaning "PING is not GIF", as well as the ".png" extension.
Although GIF allows for animation, it was decided that PNG should be a single-image format. In 2001, the developers of PNG published the Multiple-image Network Graphics (MNG) format, with support for animation. MNG achieved moderate application support, but not enough among mainstream web browsers and no usage among web site designers or publishers. In 2008, certain Mozilla developers published the Animated Portable Network Graphics (APNG) format with similar goals. APNG is a format that is natively supported by Gecko- and Presto-based web browsers and is also commonly used for thumbnails on Sony's PlayStation Portable system (using the normal PNG file extension). However, as of 2017, usage of APNG remains minimal despite being supported by all major browsers but Microsoft Edge.
The original PNG specification was authored by an ad-hoc group of computer graphics experts and enthusiasts. Discussions and decisions about the format were conducted by email. The original authors listed on RFC 2083 are:
A PNG file starts with an 8-byte signature: the hexadecimal bytes 89 50 4E 47 0D 0A 1A 0A, i.e. a byte with its high bit set, the ASCII letters "PNG", a DOS-style line ending (CR LF), an end-of-file character (Ctrl-Z), and a Unix-style line ending (LF).
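A sketch of checking this signature in Python (the function name is illustrative):

```python
# The 8-byte PNG signature: high-bit byte, "PNG", CR LF, Ctrl-Z, LF.
PNG_SIGNATURE = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

def looks_like_png(data: bytes) -> bool:
    """Return True if the data begins with the PNG file signature."""
    return data[:8] == PNG_SIGNATURE

print(looks_like_png(PNG_SIGNATURE + b"..."))  # True
print(looks_like_png(b"GIF89a"))               # False
```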
After the header comes a series of chunks, each of which conveys certain information about the image. Chunks declare themselves as "critical" or "ancillary", and a program encountering an ancillary chunk that it does not understand can safely ignore it. This chunk-based storage layer structure, similar in concept to a container format or to Amiga's IFF, is designed to allow the PNG format to be extended while maintaining compatibility with older versions—it provides forward compatibility, and this same file structure (with different signature and chunks) is used in the associated MNG, JNG, and APNG formats.
A chunk consists of four parts: length (4 bytes, big-endian), chunk type/name (4 bytes), chunk data (length bytes) and CRC (cyclic redundancy code/checksum; 4 bytes). The CRC is a network-byte-order CRC-32 computed over the chunk type and chunk data, but not the length.
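This layout can be illustrated by building and then re-parsing a minimal, valid 1×1 grayscale PNG in Python. The helper names are illustrative, but the byte layout follows the specification:

```python
import struct
import zlib

SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """length (4B big-endian) + type (4B) + data + CRC-32 over type and data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def parse_chunks(blob: bytes):
    """Walk the chunk stream, verifying each CRC; return (name, data) pairs."""
    if blob[:8] != SIGNATURE:
        raise ValueError("not a PNG")
    pos, chunks = 8, []
    while pos < len(blob):
        (length,) = struct.unpack(">I", blob[pos:pos + 4])
        ctype = blob[pos + 4:pos + 8]
        data = blob[pos + 8:pos + 8 + length]
        (crc,) = struct.unpack(">I", blob[pos + 8 + length:pos + 12 + length])
        if crc != zlib.crc32(ctype + data):
            raise ValueError("CRC mismatch in " + ctype.decode("ascii"))
        chunks.append((ctype.decode("ascii"), data))
        pos += 12 + length
    return chunks

# A minimal 1x1 8-bit grayscale PNG: IHDR + IDAT + IEND.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # width, height, depth, color type, ...
idat = zlib.compress(b"\x00\x80")                    # filter byte 0 + one gray sample
png = (SIGNATURE + make_chunk(b"IHDR", ihdr)
       + make_chunk(b"IDAT", idat) + make_chunk(b"IEND", b""))
print([name for name, _ in parse_chunks(png)])       # ['IHDR', 'IDAT', 'IEND']
```

Note that the CRC covers the type and data but not the length field, exactly as the text describes.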
Chunk types are given a four-letter case sensitive ASCII type/name; compare FourCC. The case of the different letters in the name (bit 5 of the numeric value of the character) is a bit field that provides the decoder with some information on the nature of chunks it does not recognize.
The case of the first letter indicates whether the chunk is critical or not. If the first letter is uppercase, the chunk is critical; if not, the chunk is ancillary. Critical chunks contain information that is necessary to read the file. If a decoder encounters a critical chunk it does not recognize, it must abort reading the file or supply the user with an appropriate warning.
The case of the second letter indicates whether the chunk is "public" (either in the specification or the registry of special-purpose public chunks) or "private" (not standardised). Uppercase is public and lowercase is private. This ensures that public and private chunk names can never conflict with each other (although two private chunk names could conflict).
The third letter must be uppercase to conform to the PNG specification. It is reserved for future expansion. Decoders should treat a chunk with a lower case third letter the same as any other unrecognised chunk.
The case of the fourth letter indicates whether the chunk is safe to copy by editors that do not recognize it. If lowercase, the chunk may be safely copied regardless of the extent of modifications to the file. If uppercase, it may only be copied if the modifications have not touched any critical chunks.
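Taken together, the four case bits can be decoded as follows (a sketch; the function name is illustrative):

```python
def chunk_properties(name: str) -> dict:
    """Decode the case (bit 5 of each letter) of a 4-letter chunk type name."""
    return {
        "critical":     name[0].isupper(),  # first letter: critical vs ancillary
        "public":       name[1].isupper(),  # second: standardized vs private
        "reserved_ok":  name[2].isupper(),  # third: must be uppercase today
        "safe_to_copy": name[3].islower(),  # fourth: copyable by naive editors
    }

print(chunk_properties("IHDR"))  # critical, public, not safe to copy
print(chunk_properties("tEXt"))  # ancillary, public, safe to copy
```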
A decoder must be able to interpret critical chunks to read and render a PNG file.
The PLTE (palette) chunk is essential for color type 3 (indexed color). It is optional for color types 2 and 6 (truecolor and truecolor with alpha) and it must not appear for color types 0 and 4 (grayscale and grayscale with alpha).
Other image attributes that can be stored in PNG files include gamma values, background color, and textual metadata information. PNG also supports color management through the inclusion of ICC color space profiles.
The lowercase first letter in these chunks indicates that they are not needed for the PNG specification. The lowercase last letter in some chunks indicates that they are safe to copy, even if the application concerned does not understand them.
Pixels in PNG images are numbers that may be either indices of sample data in the palette or the sample data itself. The palette is a separate table contained in the PLTE chunk. Sample data for a single pixel consists of a tuple of between one and four numbers. Whether the pixel data represents palette indices or explicit sample values, the numbers are referred to as channels and every number in the image is encoded with an identical format.
The permitted formats encode each number as an unsigned integer value using a fixed number of bits, referred to in the PNG specification as the "bit depth". Notice that this is not the same as color depth, which is commonly used to refer to the total number of bits in each pixel, not each channel. The permitted bit depths are summarized in the table along with the total number of bits used for each pixel.
The number of channels depends on whether the image is grayscale or color and whether it has an alpha channel. PNG allows the following combinations of channels, called the "color type".
The color type is specified as an 8-bit value; however, only the low 3 bits are used, and even then only the five combinations listed above are permitted. So long as the color type is valid, it can be considered as a bit field, as summarized in the adjacent table.
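A sketch of decoding the color-type bit field (palette = bit 0, color = bit 1, alpha = bit 2) and the resulting channel count:

```python
VALID_COLOR_TYPES = {0, 2, 3, 4, 6}

def color_type_info(ct: int) -> dict:
    """Decode a PNG color type into its bit-field flags and channel count."""
    if ct not in VALID_COLOR_TYPES:
        raise ValueError("invalid PNG color type: %d" % ct)
    palette = bool(ct & 1)
    color = bool(ct & 2)
    alpha = bool(ct & 4)
    # Indexed images store one palette index per pixel; otherwise 1 (gray)
    # or 3 (RGB) sample channels plus an optional alpha channel.
    channels = 1 if palette else (3 if color else 1) + (1 if alpha else 0)
    return {"palette": palette, "color": color, "alpha": alpha, "channels": channels}

print(color_type_info(6))               # truecolor with alpha: 4 channels
print(color_type_info(3)["channels"])   # indexed color: 1 index per pixel
```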
With indexed color images, the palette always stores trichromatic colors at a depth of 8 bits per channel (24 bits per palette entry). Additionally, an optional list of 8-bit alpha values for the palette entries may be included; if not included, or if shorter than the palette, the remaining palette entries are assumed to be opaque. The palette must not have more entries than the image bit depth allows for, but it may have fewer (for example, if an image with 8-bit pixels only uses 90 colors then it does not need palette entries for all 256 colors). The palette must contain entries for all the pixel values present in the image.
The standard allows indexed color PNGs to have 1, 2, 4 or 8 bits per pixel; grayscale images with no alpha channel may have 1, 2, 4, 8 or 16 bits per pixel. Everything else uses a bit depth per channel of either 8 or 16. The combinations this allows are given in the table above. The standard requires that decoders can read all supported color formats, but many image editors can only produce a small subset of them.
PNG offers a variety of transparency options. With true-color and grayscale images either a single pixel value can be declared as transparent or an alpha channel can be added (enabling any percentage of partial transparency to be used). For paletted images, alpha values can be added to palette entries. The number of such values stored may be less than the total number of palette entries, in which case the remaining entries are considered fully opaque.
The scanning of pixel values for binary transparency is supposed to be performed before any color reduction to avoid pixels becoming unintentionally transparent. This is most likely to pose an issue for systems that can decode 16-bits-per-channel images (as is required for compliance with the specification) but only output at 8 bits per channel (the norm for all but the highest end systems).
Alpha "storage" can be "associated" ("premultiplied") or "unassociated", but PNG standardized on "unassociated" ("non-premultiplied") alpha: the RGB samples store a pixel's full color values rather than values already scaled by its alpha. A compositing "over" operation must therefore multiply the RGB values by the alpha itself, and unassociated alpha cannot represent emission and occlusion properly.
PNG uses a 2-stage compression process: precompression ("filtering", a form of prediction), followed by DEFLATE compression.
PNG uses DEFLATE, a non-patented lossless data compression algorithm involving a combination of LZ77 and Huffman coding. Permissively-licensed DEFLATE implementations, such as zlib, are widely available.
Compared to lossy formats such as JPEG, choosing a compression setting higher than average delays processing, but often does not result in a significantly smaller file size.
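This trade-off can be observed directly with zlib, a widely used DEFLATE implementation (the sample data is illustrative; exact sizes vary with the input):

```python
import zlib

# Repetitive data of the kind common in graphics compresses well at any
# level; higher levels spend more time for (often) marginal gains.
data = (b"\x00" * 64 + bytes(range(64))) * 512

for level in (1, 6, 9):
    print(level, len(zlib.compress(data, level)))
```

Level 9 is never larger than level 1 on this input, but the difference is typically small relative to the total compression achieved.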
Before DEFLATE is applied, the data is transformed via a prediction method: a single "filter method" is used for the entire image, while for each image line, a "filter type" is chosen to transform the data to make it more efficiently compressible. The filter type used for a scanline is prepended to the scanline to enable inline decompression.
There is only one filter method in the current PNG specification (denoted method 0), and thus in practice the only choice is which filter type to apply to each line. For this method, the filter predicts the value of each pixel based on the values of previous neighboring pixels, and subtracts the predicted color of the pixel from the actual value, as in DPCM. An image line filtered in this way is often more compressible than the raw image line would be, especially if it is similar to the line above, since the differences from prediction will generally be clustered around 0, rather than spread over all possible image values. This is particularly important in relating separate rows, since DEFLATE has no understanding that an image is a 2D entity, and instead just sees the image data as a stream of bytes.
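The effect is easy to demonstrate: a smooth horizontal gradient has all-distinct raw bytes, but becomes a run of constants after difference-from-the-left prediction (PNG's "Sub" filter type), and DEFLATE then compresses it far better:

```python
import zlib

# A horizontal gradient scanline: every raw byte is different...
raw = bytes(range(256))
# ...but differencing each byte from its left neighbour (mod 256, as PNG
# filtering does) turns the line into a single leading 0 followed by 1s.
filtered = bytes([raw[0]] + [(raw[i] - raw[i - 1]) % 256 for i in range(1, 256)])

print(len(zlib.compress(raw)), len(zlib.compress(filtered)))
```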
There are five filter types for filter method 0; each type predicts the value of each byte (of the image data before filtering) based on the corresponding byte of the pixel to the left ("A"), the pixel above ("B"), and the pixel above and to the left ("C") or some combination thereof, and encodes the "difference" between the predicted value and the actual value. Filters are applied to byte values, not pixels; pixel values may be one or two bytes, or several values per byte, but never cross byte boundaries. The filter types are: None (type 0, no prediction), Sub (type 1, predict from the byte to the left), Up (type 2, predict from the byte above), Average (type 3, the mean of left and above), and Paeth (type 4).
The Paeth filter is based on an algorithm by Alan W. Paeth.
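The Paeth predictor itself is only a few lines; a sketch following the specification's pseudocode:

```python
def paeth_predictor(a: int, b: int, c: int) -> int:
    """a = left, b = above, c = upper-left. Choose the neighbour closest to
    the initial estimate p = a + b - c, preferring a, then b, then c."""
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    if pb <= pc:
        return b
    return c

print(paeth_predictor(10, 20, 10))  # vertical gradient: predicts the above pixel, 20
```

The tie-breaking order (left, then above, then upper-left) is mandated by the specification, so encoders and decoders always agree.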
Compare to the version of DPCM used in lossless JPEG, and to the discrete wavelet transform using 1×2, 2×1, or (for the Paeth predictor) 2×2 windows and Haar wavelets.
Compression is further improved by choosing filter types adaptively on a line-by-line basis. This improvement, and a heuristic method of implementing it commonly used by PNG-writing software, were created by Lee Daniel Crocker, who tested the methods on many images during the creation of the format; the choice of filter is a component of file size optimization, as discussed below.
If interlacing is used, each stage of the interlacing is filtered separately, meaning that the image can be progressively rendered as each stage is received; however, interlacing generally makes compression less effective.
PNG offers an optional 2-dimensional, 7-pass interlacing scheme—the Adam7 algorithm. This is more sophisticated than GIF's 1-dimensional, 4-pass scheme, and allows a clearer low-resolution image to be visible earlier in the transfer, particularly if interpolation algorithms such as bicubic interpolation are used.
However, the 7-pass scheme tends to reduce the data's compressibility more than simpler schemes.
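The seven Adam7 passes can be described by per-pass starting offsets and step sizes over an 8×8 tile; the following sketch enumerates which pixels each pass delivers:

```python
# (x_start, y_start, x_step, y_step) for each of the seven Adam7 passes.
ADAM7_PASSES = [
    (0, 0, 8, 8), (4, 0, 8, 8), (0, 4, 4, 8), (2, 0, 4, 4),
    (0, 2, 2, 4), (1, 0, 2, 2), (0, 1, 1, 2),
]

def pass_pixels(width: int, height: int, p: int):
    """Yield the (x, y) coordinates delivered by pass p (1-7)."""
    x0, y0, dx, dy = ADAM7_PASSES[p - 1]
    for y in range(y0, height, dy):
        for x in range(x0, width, dx):
            yield (x, y)

counts = [len(list(pass_pixels(8, 8, p))) for p in range(1, 8)]
print(counts)  # pixels per pass in an 8x8 tile; the seven passes sum to 64
```

Each successive pass roughly doubles the pixel count, which is why a recognizable low-resolution preview appears after only a small fraction of the file has arrived.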
PNG itself does not support animation. MNG is an extension to PNG that does; it was designed by members of the PNG Group. MNG shares PNG's basic structure and chunks, but it is significantly more complex and has a different file signature, which automatically renders it incompatible with standard PNG decoders.
The complexity of MNG led to the proposal of APNG by developers of the Mozilla Foundation. It is based on PNG, supports animation and is simpler than MNG. APNG offers fallback to single-image display for PNG decoders that do not support APNG. However, neither of these formats is currently widely supported. APNG is supported in Firefox 3.0 and up, Pale Moon (all versions), and Opera 9.5, but support was dropped when Opera changed its layout engine to Blink. The latest version of Safari on iOS 8 and Safari 8 for OS X Yosemite support APNG. Chromium 59.0 added APNG support, and Opera restored it in version 46.0. The PNG Group decided in April 2007 not to embrace APNG. Several alternatives were under discussion: ANG, aNIM/mPNG, "PNG in GIF" and its subset "RGBA in GIF".
The accompanying figure shows the file in the fashion of hex editors: byte values in hexadecimal on the left, and on the right their equivalent characters from ISO-8859-1, with unrecognized and control characters replaced by periods. Additionally, the PNG signature and individual chunks are marked with colors; they are easy to identify because of their human-readable type names (in this example PNG, IHDR, IDAT, and IEND).
PNG images are less widely supported by older browsers. In particular, IE6 has limited support for PNG.
The JPEG (Joint Photographic Experts Group) format can produce a smaller file than PNG for photographic (and photo-like) images, since JPEG uses a lossy encoding method specifically designed for photographic image data, which is typically dominated by soft, low-contrast transitions, and an amount of noise or similar irregular structures. Using PNG instead of a high-quality JPEG for such images would result in a large increase in filesize with negligible gain in quality. In comparison, when storing images that contain text, line art, or graphics – images with sharp transitions and large areas of solid color – the PNG format can compress image data more than JPEG can. Additionally, PNG is lossless, while JPEG produces visual artifacts around high-contrast areas. (Such artifacts depend on the settings used in the JPEG compression; they can be quite noticeable when a low-quality [high-compression] setting is used.) Where an image contains both sharp transitions and photographic parts, a choice must be made between the two effects. JPEG does not support transparency.
JPEG's lossy compression also suffers from generation loss, where repeatedly decoding and re-encoding an image to save it again causes a loss of information each time, degrading the image. This does not happen with repeated viewing or copying, but only if the file is edited and saved over again. Because PNG is lossless, it is suitable for storing images to be edited. While PNG is reasonably efficient when compressing photographic images, there are lossless compression formats designed specifically for photographic images, lossless WebP and Adobe DNG (digital negative) for example. However these formats are either not widely supported, or are proprietary. An image can be stored losslessly and converted to JPEG format only for distribution, so that there is no generation loss.
While the PNG specification does not explicitly include a standard for embedding Exif image data from sources such as digital cameras, the preferred method for embedding Exif data in a PNG is to use the ancillary "eXIf" chunk.
Early web browsers did not support PNG images; JPEG and GIF were the main image formats. JPEG was commonly used when exporting images containing gradients for web pages, because of GIF's limited color depth. However, JPEG compression causes a gradient to blur slightly. A PNG format reproduces a gradient as accurately as possible for a given bit depth, while keeping the file size small. PNG became the optimal choice for small gradient images as web browser support for the format improved. No images at all are needed to display gradients in modern browsers, as gradients can be created using CSS.
JPEG-LS is an image format by the Joint Photographic Experts Group, though far less widely known and supported than the other lossy JPEG format discussed above. It is directly comparable with PNG, and has a standard set of test images. On the Waterloo Repertoire ColorSet, a standard set of test images (unrelated to the JPEG-LS conformance test set), JPEG-LS generally performs better than PNG, by 10–15%, but on some images PNG performs substantially better, on the order of 50–75%. Thus, if both of these formats are options and file size is an important criterion, they should both be considered, depending on the image.
Tagged Image File Format (TIFF) is a format that incorporates an extremely wide range of options. While this makes TIFF useful as a generic format for interchange between professional image editing applications, it makes adding support for it to applications a much bigger task and so it has little support in applications not concerned with image manipulation (such as web browsers). The high level of extensibility also means that most applications provide only a subset of possible features, potentially creating user confusion and compatibility issues.
The most common general-purpose, lossless compression algorithm used with TIFF is Lempel–Ziv–Welch (LZW). This compression technique, also used in GIF, was covered by patents until 2003. TIFF also supports the compression algorithm PNG uses (i.e. Compression Tag 0008 in hexadecimal, 'Adobe-style') with medium usage and support by applications. TIFF also offers special-purpose lossless compression algorithms like CCITT Group IV, which can compress bilevel images (e.g., faxes or black-and-white text) better than PNG's compression algorithm.
PNG supports non-premultiplied alpha only whereas TIFF also supports "associated" (premultiplied) alpha.
The official reference implementation of the PNG format is the programming library "libpng". It is published as free software under the terms of a permissive free software license. Therefore, it is usually found as an important system library in free operating systems.
The PNG format is widely supported by graphics programs, including Adobe Photoshop, Corel's Photo-Paint and Paint Shop Pro, the GIMP, GraphicConverter, Helicon Filter, ImageMagick, Inkscape, IrfanView, Pixel image editor, Paint.NET and Xara Photo & Graphic Designer and many others. Some programs bundled with popular operating systems which support PNG include Microsoft's Paint and Apple's Photos/iPhoto and Preview, with the GIMP also often being bundled with popular Linux distributions.
Adobe Fireworks (formerly by Macromedia) uses PNG as its native file format, allowing other image editors and preview utilities to view the flattened image. However, Fireworks by default also stores metadata for layers, animation, vector data, text and effects. Such files should not be distributed directly. Fireworks can instead export the image as an optimized PNG without the extra metadata for use on web pages, etc.
PNG support first appeared in Internet Explorer 4.0b1 (32-bit only for NT) and in Netscape 4.04.
Despite calls by the Free Software Foundation and the World Wide Web Consortium (W3C), tools such as gif2png, and campaigns such as Burn All GIFs, PNG adoption on websites was fairly slow due to late and buggy support in Internet Explorer, particularly regarding transparency.
PNG compatible browsers include: Apple Safari, Google Chrome, Mozilla Firefox, Opera, Camino, Internet Explorer 7 (still numerous issues), Internet Explorer 8 (still some issues), Internet Explorer 9 and many others. For the complete comparison, see Comparison of web browsers (Image format support).
Versions of Internet Explorer for Windows prior to 9.0, in particular, have numerous problems that prevent them from correctly rendering PNG images.
PNG icons have been supported in most distributions of Linux since at least 1999, in desktop environments such as GNOME. In 2006, Microsoft Windows support for PNG icons was introduced in Windows Vista. PNG icons are supported in AmigaOS 4, AROS, macOS, iOS and MorphOS as well. In addition, Android makes extensive use of PNGs.
PNG file size can vary significantly depending on how it is encoded and compressed; this is discussed and a number of tips are given in "PNG: The Definitive Guide."
Compared to GIF files, a PNG file with the same information (256 colors, no ancillary chunks/metadata), compressed by an effective compressor, is normally smaller than the GIF image. Depending on the file and the compressor, PNG may range from somewhat smaller (10%) to significantly smaller (50%) to somewhat larger (5%), but is rarely significantly larger for large images. This is attributed to the performance of PNG's DEFLATE compared to GIF's LZW, and because the added precompression layer of PNG's predictive filters takes account of the 2-dimensional image structure to further compress files; as filtered data encodes differences between pixels, it will tend to cluster closer to 0, rather than being spread across all possible values, and thus be more easily compressed by DEFLATE. However, some versions of Adobe Photoshop, CorelDRAW and MS Paint provide poor PNG compression, creating the impression that GIF is more efficient.
PNG files vary in size due to a number of factors: color depth, the presence of ancillary chunks (such as metadata and color-correction data), interlacing, and the filter and compression settings used when the file was saved.
There is thus a filesize trade-off: high color depth, maximal metadata (including color space information, together with information that does not affect display), interlacing, and fast compression all yield large files, while lower color depth, fewer or no ancillary chunks, no interlacing, and tuned but computationally intensive filtering and compression yield smaller ones. For different purposes, different trade-offs are chosen: a maximal file may be best for archiving and editing, while a stripped-down file may be best for use on a website; similarly, fast but poor compression is preferred when repeatedly editing and saving a file, while slow but high compression is preferred when a file is stable, as when archiving or posting.
Interlacing is a trade-off: it dramatically speeds up early rendering of large files (improves latency), but may increase file size (decrease throughput) for little gain, particularly for small files.
Although PNG is a lossless format, PNG encoders can preprocess image data in a lossy fashion to improve PNG compression. For example, quantizing a truecolor PNG to 256 colors allows the indexed color type to be used for a likely reduction in file size.
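A minimal, stdlib-only sketch of why the indexed color type saves space (illustrative byte counts only, not a full PNG encoder): an image restricted to a small palette can be stored as one index byte per pixel plus a palette table, instead of three bytes per pixel.

```python
# A 100x100 "truecolor" image that actually uses only four distinct colors
# (a synthetic example; a real quantizer would first reduce a photo to <=256).
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0)]
pixels = [colors[(x // 25) % 4] for y in range(100) for x in range(100)]

truecolor = bytes(c for px in pixels for c in px)        # 3 bytes per pixel
palette = sorted(set(pixels))
index = {c: i for i, c in enumerate(palette)}
indexed = (bytes(c for col in palette for c in col)      # PLTE-style table
           + bytes(index[px] for px in pixels))          # 1 byte per pixel

assert len(truecolor) == 30000
assert len(indexed) == 12 + 10000   # palette + indices: about a third the size
```

The DEFLATE stage then compresses either representation further, but the indexed form starts from roughly a third of the data.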
Some programs are more efficient than others when saving PNG files; this relates to the implementation of PNG compression used by the program.
Many graphics programs (such as Apple's Preview software) save PNGs with large amounts of metadata and color-correction data that are generally unnecessary for Web viewing. Unoptimized PNG files from Adobe Fireworks are also notorious for this since they contain options to make the image editable in supported editors. Also CorelDRAW (at least version 11) sometimes produces PNGs which cannot be opened by Internet Explorer (versions 6–8).
Adobe Photoshop's performance on PNG files has improved in the CS Suite when using the Save For Web feature (which also allows explicit PNG/8 use).
Adobe's Fireworks saves larger PNG files than many programs by default. This stems from the mechanics of its "Save" format: the images produced by Fireworks' save function include large, private chunks, containing complete layer and vector information. This allows further lossless editing. When saved with the "Export" option, Fireworks' PNGs are competitive with those produced by other image editors, but are no longer editable as anything but flattened bitmaps. Fireworks is unable to save size-optimized vector-editable PNGs.
Other notable examples of poor PNG compressors include:
Poor compression increases the PNG file size but does not affect the image quality or compatibility of the file with other programs.
When the color depth of a truecolor image is reduced to an 8-bit palette (as in GIF), the resulting image data is typically much smaller. Thus a truecolor PNG is typically larger than a color-reduced GIF, although PNG could store the color-reduced version as a palettized file of comparable size. Conversely, some tools, when saving images as PNGs, automatically save them as truecolor, even if the original data use only 8-bit color, thus bloating the file unnecessarily. Both factors can lead to the misconception that PNG files are larger than equivalent GIF files.
Various tools are available for optimizing PNG files; they do this by removing ancillary chunks, reducing color depth where this is lossless, optimizing the line-by-line filter choice, and recompressing the DEFLATE data.
A simple comparison of their features is listed below.
Before zopflipng was available, a good way in practice to optimize a PNG was to use two tools in sequence: one that optimizes filters (and removes ancillary chunks), and one that optimizes the DEFLATE stream. Although pngout offers both, only one type of filter can be specified in a single run; it can therefore be used with a wrapper tool, or in combination with optipng or pngcrush followed by a re-deflater such as advdef.
For removing ancillary chunks, most PNG optimization tools have the ability to remove all color correction data from PNG files (gamma, white balance, ICC color profile, standard RGB color profile). This often results in much smaller file sizes. For example, the following command line options achieve this with pngcrush:
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB InputFile.png OutputFile.png
Ancillary chunks can also be losslessly removed using the free Win32 program PNGExtra.
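For illustration, stripping ancillary chunks takes only a few lines of code, since the PNG chunk layout (4-byte length, 4-byte type, data, 4-byte CRC over type plus data) is simple to walk. This is a sketch using only Python's standard library; the strip_ancillary helper and the hand-built test image are our own, not part of any tool named above.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_ancillary(png: bytes,
                    drop=(b"gAMA", b"cHRM", b"iCCP", b"sRGB")) -> bytes:
    """Copy a PNG, omitting the listed ancillary (color-correction) chunks."""
    out, pos = [png[:8]], 8           # keep the 8-byte signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length       # 4 length + 4 type + data + 4 CRC
        if ctype not in drop:
            out.append(png[pos:end])
        pos = end
    return b"".join(out)

# Build a minimal 1x1 grayscale PNG containing a gAMA chunk (a hand-rolled
# demonstration image, not the output of any real encoder).
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
gama = chunk(b"gAMA", struct.pack(">I", 45455))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + one pixel
png = PNG_SIG + ihdr + gama + idat + chunk(b"IEND", b"")

stripped = strip_ancillary(png)
assert b"gAMA" in png and b"gAMA" not in stripped
```

Removing the 16-byte gAMA chunk here is trivial, but on real files the iCCP profile alone can run to several kilobytes.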
OptiPNG, pngcrush, pngout, and zopflipng all offer options applying one of the filter types 0–4 globally (using the same filter type for all lines), or with a "pseudo filter" (numbered 5), which for each line chooses one of the filter types 0–4 using an adaptive algorithm. Zopflipng offers three different adaptive methods, including a brute-force search that attempts to optimize the filtering.
pngout and zopflipng provide an option to preserve/reuse the line-by-line filter set present in the input image.
OptiPNG, pngcrush and zopflipng provide options to try different filter strategies in a single run and choose the best. The freeware command line version of pngout doesn't offer this, but the commercial version, pngoutwin, does.
zopfli and the LZMA SDK employ DEFLATE implementations that produce higher compression ratios than the zlib reference implementation at the cost of performance. AdvanceCOMP's advpng and advdef can use either of these libraries to re-compress PNG files. Additionally, PNGOUT contains its own proprietary DEFLATE implementation.
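What a re-deflater does can be sketched with Python's zlib, using compression levels as a stand-in for swapping in a stronger DEFLATE implementation such as zopfli: the existing stream is decompressed and recompressed with more effort, leaving the underlying data untouched.

```python
import zlib

# Simulated filtered image data: repetitive, as real scanline buffers often are.
data = bytes(range(64)) * 256

fast = zlib.compress(data, 1)   # a quick, low-effort save from an editor
# A re-deflater decompresses the existing stream and recompresses it with
# more effort; only the DEFLATE stream is replaced, never the pixels.
better = zlib.compress(zlib.decompress(fast), 9)

assert zlib.decompress(better) == data   # lossless round trip
assert len(better) <= len(fast)
```

The same principle lets advdef shrink files without touching PNG structure or filter choices.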
advpng doesn't have an option to apply filters and always uses filter 0 globally (leaving the image data unfiltered); therefore it should not be used where the image benefits significantly from filtering. By contrast, advdef from the same package doesn't deal with PNG structure and acts only as a re-deflater, retaining any existing filter settings.
Since icons intended for Windows Vista and later versions may contain PNG subimages, the optimizations can be applied to them as well. At least one icon editor, Pixelformer, is able to perform a special optimization pass while saving ICO files, thereby reducing their sizes. FileOptimizer (mentioned above) can also handle ICO files.
Icons for macOS may also contain PNG subimages, but no such optimization tool is available for them.
Pope Urban VIII
Pope Urban VIII (; baptised 5 April 1568 – 29 July 1644), born Maffeo Barberini, was head of the Catholic Church and ruler of the Papal States from 6 August 1623 to his death in 1644. He expanded the papal territory by force of arms and advantageous politicking, and was also a prominent patron of the arts and a reformer of Church missions.
However, the massive debts incurred during his pontificate greatly weakened his successors, who were unable to maintain the papacy's longstanding political and military influence in Europe. He was also an opponent of Copernicanism and involved in the Galileo affair.
He was born Maffeo Barberini in April 1568 to Antonio Barberini, a Florentine nobleman, and Camilla Barbadoro. His father died when he was only three years old and his mother took him to Rome, where he was put in the charge of his uncle, Francesco Barberini, an apostolic protonotary. At the age of 16, he became his uncle's heir. He was educated by the Society of Jesus ("Jesuits"), and received a doctorate of law from the University of Pisa in 1589.
In 1601, Barberini, through the influence of his uncle, was able to secure from Pope Clement VIII appointment as a papal legate to the court of King Henry IV of France. In 1604, the same pope appointed him as the Archbishop of Nazareth, an office joined with that of Bishop of the suppressed Dioceses of Canne and Monteverde, with his residence at Barletta. At the death of his uncle, he inherited his riches, with which he bought a palace in Rome, which he made into a luxurious Renaissance residence.
Pope Paul V also later employed Barberini in a similar capacity, afterwards raising him, in 1606, to the order of the Cardinal-Priest, with the titular church of San Pietro in Montorio and appointing him as a papal legate of Bologna.
Barberini was considered someone who could be elected as pope, though there were those such as Cardinal Ottavio Bandini who worked to prevent it. Despite this, throughout 29–30 July, the cardinals began an intense series of negotiations to test the numbers as to who could emerge from the conclave as pope, with Cardinal Ludovico Ludovisi dismissing Barberini's chances as long as Barberini remained a close ally of Cardinal Scipione Borghese, whose faction Barberini supported. Ludovisi had discussions with Cardinals Farnese, Medici and Aldobrandini on 30 July about seeing to Barberini's election. The three supported his candidacy and went about securing the support of others, which led to Barberini's election just over a week later. On 6 August 1623, at the papal conclave following the death of Pope Gregory XV, Barberini was chosen as Gregory XV's successor and took the name Urban VIII.
Upon Pope Urban VIII's election, Zeno, the Venetian envoy, wrote the following description of him:
Urban VIII's papacy covered 21 years of the Thirty Years' War (1618–1648), and was an eventful one, even by the standards of the day. He canonized Elizabeth of Portugal, Andrew Corsini and Conrad of Piacenza, and issued the papal bulls of canonization for Ignatius of Loyola (founder of the Society of Jesus, "Jesuits") and Francis Xavier (also a Jesuit), who had been canonized by his predecessor, Pope Gregory XV.
Despite an early friendship and encouragement for his teachings, Urban VIII was responsible for summoning the scientist and astronomer Galileo to Rome in 1633 to recant his work. Urban VIII was opposed to Copernican heliocentrism and he ordered Galileo's second trial after the publication of "Dialogue Concerning the Two Chief World Systems", in which Urban's point of view is argued by the character "Simplicio".
Urban VIII practiced nepotism on a grand scale; various members of his family were enormously enriched by him, so that it seemed to contemporaries as if he were establishing a Barberini dynasty. He elevated his brother Antonio Marcello Barberini (Antonio the Elder) and then his nephews Francesco Barberini and Antonio Barberini (Antonio the Younger) to Cardinal. He also bestowed upon their brother, Taddeo Barberini, the titles "Prince of Palestrina", Gonfalonier of the Church, Prefect of Rome and "Commander of Sant'Angelo". Historian Leopold von Ranke estimated that during his reign, Urban VIII's immediate family amassed 105 million scudi in personal wealth.
Urban VIII was a skilled writer of Latin verse, and a collection of Scriptural paraphrases as well as original hymns of his composition have been frequently reprinted.
The 1638 papal bull "Commissum Nobis" protected the existence of Jesuit missions in South America by forbidding the enslavement of natives who were at the Jesuit Reductions. At the same time, Urban VIII repealed the Jesuit monopoly on missionary work in China and Japan, opening these countries to missionaries of other orders and missionary societies.
Urban VIII issued a 1624 papal bull that made the use of tobacco in holy places punishable by excommunication; Pope Benedict XIII repealed the ban one hundred years later.
Urban VIII canonized five saints during his pontificate: Stephen Harding (1623), Elizabeth of Portugal and Conrad of Piacenza (1625), Peter Nolasco (1628), and Andrea Corsini (1629). The pope also beatified 68 individuals including the Martyrs of Nagasaki (1627).
The pope created 74 cardinals in eight consistories throughout his pontificate, and this included his nephews Francesco and Antonio, cousin Lorenzo Magalotti, and the pope's own brother Antonio Marcello. He also created Giovanni Battista Pamphili as a cardinal, with Pamphili becoming his immediate successor Pope Innocent X. The pope also created eight of those cardinals whom he had reserved "in pectore".
Urban VIII's military involvement was aimed less at the restoration of Catholicism in Europe than at adjusting the balance of power to favour his own independence in Italy. In 1626, the duchy of Urbino was incorporated into the papal dominions, and, in 1627, when the direct male line of the Gonzagas in Mantua became extinct, he controversially favoured the succession of the Protestant Duke Charles of Nevers against the claims of the Catholic Habsburgs. He also launched the Wars of Castro in 1641 against Odoardo Farnese, Duke of Parma and Piacenza, whom he excommunicated. Castro was destroyed and its duchy incorporated into the Papal States.
Urban VIII was the last pope to extend the papal territory. He fortified Castelfranco Emilia on the Mantuan frontier and commissioned Vincenzo Maculani to fortify the Castel Sant'Angelo in Rome. Urban VIII also established an arsenal in the Vatican, an arms factory at Tivoli and fortified the harbour of Civitavecchia.
For the purposes of making cannon and the baldacchino in St Peters, massive bronze girders were pillaged from the portico of the Pantheon leading to the well known lampoon: "quod non fecerunt barbari, fecerunt Barberini," "what the barbarians did not do, the Barberini did."
Urban VIII and his family patronized art on a grand scale. He expended vast sums bringing polymaths like Athanasius Kircher to Rome and funding various substantial works by the sculptor and architect Bernini, from whom he had already commissioned "Boy with a Dragon" around 1617 and who was particularly favored during Urban VIII's reign. As well as several portrait busts of Urban, Urban commissioned Bernini to work on the family palace in Rome, the Palazzo Barberini, the College of the Propaganda Fide, the Fontana del Tritone in the Piazza Barberini, the baldacchino and "cathedra" in St Peter's Basilica and other prominent structures in the city. Numerous members of Barberini's family also had their likeness caught in stone by Bernini, such as his brothers Carlo and Antonio. Urban also had rebuilt the Church of Santa Bibiana and the Church of San Sebastiano al Palatino on the Palatine Hill.
The Barberini patronized painters such as Nicolas Poussin and Claude Lorrain. One of the most eulogistic of these artistic works in its celebration of his reign is the huge "Allegory of Divine Providence and Barberini Power" painted by Pietro da Cortona on the ceiling of the large salon of the Palazzo Barberini.
Another such acquisition, in a vast collection, was the purchase of the 'Barberini vase'. This was allegedly found at the mausoleum of the Roman Emperor Severus Alexander and his family at Monte Del Grano. The discovery of the vase is described by Pietro Santi Bartoli and referenced on page 28 of a book on the Portland Vase. Pietro Bartoli indicates that the vase contained the ashes of the Roman Emperor. However, this, together with the interpretations of the scenes depicted on it, is the source of countless theories and disputed 'facts'. The vase remained in the Barberini family collection for some 150 years before passing through the hands of Sir William Hamilton, Ambassador to the Royal Court in Naples. It was later sold to the Duke and Duchess of Portland, and has subsequently been known as the Portland Vase. Following catastrophic damage, this glass vase (1–25 BC) has been reconstructed three times and resides in the British Museum. The Portland Vase itself was borrowed and closely copied by Josiah Wedgwood, who appears to have added modesty drapery. The vase formed the basis of Jasperware.
A consequence of these military and artistic endeavours was a massive increase in papal debt. Urban VIII inherited a debt of 16 million scudi, and by 1635 had increased it to 28 million.
According to contemporary John Bargrave, in 1636 members of the Spanish faction of the College of Cardinals were so horrified by the conduct of Pope Urban VIII that they conspired to have him arrested and imprisoned (or killed) so that they could replace him with a new pope; namely Laudivio Zacchia. When Urban VIII travelled to Castel Gandolfo to rest, the members of the Spanish faction met in secret and discussed ways to advance their plan. But they were discovered and the pope raced back to Rome where he immediately held a consistory and demanded to know who the new pope was. To put an end to the conspiracy, the pope decreed that all Cardinal-Bishops should leave Rome and return to their own churches.
With the Spanish plan having failed, by 1640 the debt had reached 35 million scudi, consuming more than 80% of annual papal income in interest repayments.
Urban VIII's death on 29 July 1644 is said to have been hastened by chagrin at the result of the Wars of Castro. Because of the costs incurred by the city of Rome to finance this war, Urban VIII became immensely unpopular with his subjects.
On his death, the bust of Urban VIII that lay beside the Palace of the Conservators on the Capitoline Hill was rapidly destroyed by an enraged crowd, and only a quick-thinking priest saved the sculpture of the late pope belonging to the Jesuits from a similar fate.
Following his death, international and domestic machinations resulted in the papal conclave not electing Cardinal Giulio Cesare Sacchetti, who was closely associated with some members of the Barberini family. Instead, it elected Cardinal Giovanni Battista Pamphili, who took the name of Innocent X, as his successor at the papal conclave of 1644.
In the papal bull "Sanctissimus Dominus Noster" of 13 March 1625, Urban instructed Catholics not to venerate the deceased or represent them in the manner of saints without Church sanction. It required a bishop’s approval for the publication of private revelations. Since the nineteenth century, it has become common for books of popular devotion to carry a disclaimer. One read in part: "In obedience to the decrees of Urban the Eighth, I declare that I have no intention of attributing any other than a purely human authority to the miracles, revelations, favours, and particular cases recorded in this book..."
Urban VIII is a recurring character in the "Ring of Fire" alternative history hypernovel by Eric Flint et al. where he is favorably portrayed. He is especially prominent in "1634: The Galileo Affair" (in which he made the fictional Grantville priest, Larry Mazzare, a cardinal), and in "1635: The Cannon Law", "1635: The Papal Stakes", and "1636: The Vatican Sanction".
Pope Silverius
Pope Silverius (died 2 December 537) ruled the Holy See from 8 June 536 to his deposition in 537, a few months before his death. His rapid rise to prominence from a deacon to the papacy coincided with the efforts of the Ostrogothic king Theodahad (nephew of Theodoric the Great), who intended to install a pro-Gothic candidate just before the Gothic War. Later deposed by the Byzantine general Belisarius, he was tried and exiled to the desolate island of Palmarola, where he starved to death in 537.
He was a legitimate son of Pope Hormisdas, born in Frosinone, Lazio, some time before his father entered the priesthood. Silverius was probably consecrated 8 June 536. He was a subdeacon when king Theodahad of the Ostrogoths forced his election and consecration. Historian Jeffrey Richards interprets his low rank prior to becoming pope as an indication that Theodahad was eager to put a pro-Gothic candidate on the throne on the eve of the Gothic War and "had passed over the entire diaconate as untrustworthy". The "Liber Pontificalis" alleges that Silverius had purchased his elevation from King Theodahad.
On 9 December 536, the Byzantine general Belisarius entered Rome with the approval of Pope Silverius. Theodahad's successor Witiges gathered together an army and besieged Rome for several months, subjecting the city to privation and starvation. In the words of Richards, "What followed is as tangled a web of treachery and double-dealing as can be found anywhere in the papal annals. Several different versions of the course of events following the elevation of Silverius exist." In outline, all accounts agree: Silverius was deposed by Belisarius in March 537 and sent into exile after being judged by the wife of Belisarius, Antonina, who accused him of conspiring with the Goths. Not only did Belisarius exile Silverius, he also banished a number of distinguished senators, Flavius Maximus—a descendant of a previous emperor—among them. Vigilius, who was in Constantinople as "apocrisiarius" or papal legate, was brought to Rome to replace Silverius as the pontiff.
The fullest account is in the "Breviarium" of Liberatus of Carthage, who portrays Vigilius "as a greedy and treacherous pro-Monophysite who ousted and virtually murdered his predecessor." Liberatus claims that, in exchange for being made pope, Vigilius promised Empress Theodora to restore the former patriarch of Constantinople, Anthimus, to his position. Silverius was sent into exile at Patara in Lycia, whose bishop petitioned the emperor for a fair trial for Silverius. Rattled by this, Justinian ordered Silverius returned to Rome to be tried accordingly. However, when Silverius returned to Italy, instead of holding a trial Belisarius handed him over to Vigilius, who, according to the "Liber Pontificalis", banished Silverius to the desolate island Palmarola (part of the Pontine Islands), where he starved to death a few months later.
The account in the "Liber Pontificalis" is hardly more favorable to Vigilius. That work agrees with Liberatus that the restoration of Anthimus to the Patriarchate was the cause of Silverius' deposition, but Vigilius was initially sent to persuade Silverius to agree to this, not replace him. Silverius refused and Vigilius then claimed to Belisarius that Pope Silverius had written to Witiges offering to betray the city. Belisarius did not believe this accusation, but Vigilius produced false witnesses to testify to this, and through persistence overcame his scruples. Silverius was summoned to the Pincian palace, where he was stripped of his vestments and handed over to Vigilius, who dispatched him into exile. Procopius omits all mention of religious controversy in Vigilius' actions. He writes that Silverius was accused of offering to betray Rome to the Goths. Upon learning of this, Belisarius had him deposed, put in a monk's habit and exiled to Greece. Several other senators were also banished from Rome at the same time on similar charges. Belisarius then appointed Vigilius. Deprived of sufficient sustenance, Silverius starved to death on the island of Palmarola.
Richards attempts to reconcile these divergent accounts into a unified account. He points out that Liberatus wrote his "Breviarium" at the height of the Three-Chapter Controversy, "when Vigilius was being regarded by his opponents as anti-Christ and Liberatus was prominent among these opponents", and the "Liber Pontificalis" drew from an account written at the same time. Once these religious elements are removed, Richards argues that it is clear "the whole episode was political in nature." He points out for Justinian's plans to recover Rome and Italy, "that there should be a pro-Eastern pope substituted as soon as possible. The ideal candidate was at hand in Constantinople. The deacon Vigilius' principal motivation throughout his career, as far as can be ascertained, was the desire to be pope and he was not really concerned about which faction put him there."
Silverius was later recognized as a saint by popular acclamation, and is now the patron saint of the island of Ponza, Italy. The first mention of his name in a list of saints dates to the 11th century. He is also called Saint Silverius (San Silverio). While Pope Silverius perished without fanfare and largely unlamented during the 6th century, the people of the neighboring island of Ponza have honored the virtuous St. Silverio, a heritage that reaches from the island to the United States, where many emigrants from the island settled in the Morrisania section of the Bronx. From there, they celebrate the Festival of San Silverio at Our Lady of Pity Church on 151st Street and Morris Avenue, just as they have for centuries, calling on him for help. After the Church of Our Lady of Pity closed, the statue of San Silverio found a home at St. Ann's Church at 31 College Place, Yonkers, New York. The feast of San Silverio is observed there every year on June 20 with a special Mass and procession of the statue of San Silverio. The statue is on permanent display for veneration by the faithful. According to Ponza Islands legend, fishermen were in a small boat in a storm off Palmarola and called on Saint Silverius for help. An apparition of Saint Silverius called them to Palmarola, where they survived. This miracle led to his veneration as a saint.
Pope Sylvester III
Pope Sylvester III (1000 – October 1063), born John in Rome, was bishop of Rome and ruler of the Papal States from 20 January to March 1045.
Christened John, he was born into the powerful Roman patrician family of the Crescentii. Upon the death of Pope John XIX in October 1032, the papal throne became the subject of dispute between rival factions of nobles. Theophylactus, a youth of about twenty, the son of Alberic III, Count of Tusculum, was supported by the nobles of Tusculum. Giovanni de' Crescenzi–Ottaviani was supported by the Crescenzi family. Alberic secured the election of his son through bribery. Theophylactus, the nephew and namesake of Pope Benedict VIII, took the name Benedict IX. The young man was not only unqualified, but led a reportedly dissolute life, and factional strife continued. A revolt in Rome led to Benedict IX being driven from the city in 1044.
John, bishop of Sabina, was elected after fierce and protracted infighting. He took the name Sylvester III in January 1045. Benedict IX excommunicated him, and in March returned to Rome and expelled Sylvester, who himself returned to Sabina to again take up his office of bishop in that diocese.
Nearly two years later (in December 1046), the Council of Sutri deprived him of his bishopric and priesthood and ordered him sent to a monastery. This sentence was evidently suspended, because he continued to function and was recognized as bishop of Sabina until at least 1062. A successor bishop to the see of Sabina is recorded for October 1063, indicating that John must have died prior to that date.
Though some consider him to have been an antipope, Sylvester III continues to be listed as an official pope (1045) in Vatican lists. A similar situation applies to Pope Gregory VI (1045–1046). His pontifical name was used again by Antipope Theodoric because, at that time, he was not considered a legitimate pontiff.
Planar graph
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.
Every graph that can be drawn on a plane can be drawn on the sphere as well, and vice versa, by means of stereographic projection.
Plane graphs can be encoded by combinatorial maps.
The equivalence class of topologically equivalent drawings on the sphere is called a planar map. Although a plane graph has an external or unbounded face, none of the faces of a planar map have a particular status.
Planar graphs generalize to graphs drawable on a surface of a given genus. In this terminology, planar graphs have graph genus 0, since the plane (and the sphere) are surfaces of genus 0. See "graph embedding" for other related topics.
The Polish mathematician Kazimierz Kuratowski provided a characterization of planar graphs in terms of forbidden graphs, now known as Kuratowski's theorem: a finite graph is planar if and only if it does not contain a subgraph that is a subdivision of the complete graph "K"5 or the complete bipartite graph "K"3,3 (the utility graph).
A subdivision of a graph results from inserting vertices into edges (for example, changing an edge •——• to •—•—•) zero or more times.
Instead of considering subdivisions, Wagner's theorem deals with minors: a finite graph is planar if and only if it does not have "K"5 or "K"3,3 as a minor.
A minor of a graph results from taking a subgraph and repeatedly contracting an edge into a vertex, with each neighbor of the original end-vertices becoming a neighbor of the new vertex.
Klaus Wagner asked more generally whether any minor-closed class of graphs is determined by a finite set of "forbidden minors". This is now the Robertson–Seymour theorem, proved in a long series of papers. In the language of this theorem, "K"5 and "K"3,3 are the forbidden minors for the class of finite planar graphs.
In practice, it is difficult to use Kuratowski's criterion to quickly decide whether a given graph is planar. However, there exist fast algorithms for this problem: for a graph with "n" vertices, it is possible to determine in time O("n") (linear time) whether the graph may be planar or not (see planarity testing).
For a simple, connected, planar graph with "v" vertices and "e" edges and "f" faces, the following simple conditions hold for "v" ≥ 3:
Theorem 1. "e" ≤ 3"v" − 6;
Theorem 2. If there are no cycles of length 3, then "e" ≤ 2"v" − 4;
Theorem 3. "f" ≤ 2"v" − 4.
In this sense, planar graphs are sparse graphs, in that they have only O("v") edges, asymptotically smaller than the maximum O("v"2). The graph "K"3,3, for example, has 6 vertices, 9 edges, and no cycles of length 3. Therefore, by Theorem 2, it cannot be planar. These theorems provide necessary conditions for planarity that are not sufficient conditions, and therefore can only be used to prove a graph is not planar, not that it is planar. If both Theorems 1 and 2 fail, other methods may be used.
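These edge-count conditions can be checked mechanically. A small sketch (the helper name is our own):

```python
from itertools import combinations

def fails_planarity_bounds(v, e, triangle_free=False):
    """True if the edge count alone rules out planarity (assumes v >= 3).

    Necessary conditions only: a False result does not prove planarity.
    """
    if triangle_free:
        return e > 2 * v - 4   # Theorem 2
    return e > 3 * v - 6       # Theorem 1

# K5: 5 vertices and C(5,2) = 10 edges; 10 > 3*5 - 6 = 9, so K5 is not planar.
k5_edges = len(list(combinations(range(5), 2)))
assert fails_planarity_bounds(5, k5_edges)

# K3,3: 6 vertices, 9 edges, bipartite hence no 3-cycles; 9 > 2*6 - 4 = 8.
assert fails_planarity_bounds(6, 9, triangle_free=True)

# The converse fails: the Petersen graph (10 vertices, 15 edges) passes the
# bound yet is not planar, so a full planarity test is still needed.
assert not fails_planarity_bounds(10, 15)
```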
Euler's formula states that if a finite, connected, planar graph is drawn in the plane without any edge intersections, and "v" is the number of vertices, "e" is the number of edges and "f" is the number of faces (regions bounded by edges, including the outer, infinitely large region), then "v" − "e" + "f" = 2.
As an illustration, in the butterfly graph given above, "v" = 5, "e" = 6 and "f" = 3.
In general, if the property holds for all planar graphs of "f" faces, any change to the graph that creates an additional face while keeping the graph planar would keep "v" − "e" + "f" an invariant. Since the property holds for all graphs with "f" = 2, by mathematical induction it holds for all cases. Euler's formula can also be proved as follows: if the graph isn't a tree, then remove an edge which completes a cycle. This lowers both "e" and "f" by one, leaving "v" − "e" + "f" constant. Repeat until the remaining graph is a tree; trees have "v" = "e" + 1 and "f" = 1, yielding "v" − "e" + "f" = 2, i.e., the Euler characteristic is 2.
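Euler's formula can be verified numerically for some familiar planar graphs (the counts below are standard; the dictionary itself is our own illustration, and includes the butterfly graph mentioned above):

```python
# Known (v, e, f) counts for planar graphs drawn without crossings,
# counting the unbounded outer face.
examples = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "butterfly graph": (5, 6, 3),
}
for name, (v, e, f) in examples.items():
    assert v - e + f == 2, name   # Euler's formula holds in every case
```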
In a finite, connected, "simple", planar graph, any face (except possibly the outer one) is bounded by at least three edges and every edge touches at most two faces; using Euler's formula, one can then show that these graphs are "sparse" in the sense that if "v" ≥ 3: "e" ≤ 3"v" − 6.
Euler's formula is also valid for convex polyhedra. This is no coincidence: every convex polyhedron can be turned into a connected, simple, planar graph by using the Schlegel diagram of the polyhedron, a perspective projection of the polyhedron onto a plane with the center of perspective chosen near the center of one of the polyhedron's faces. Not every planar graph corresponds to a convex polyhedron in this way: the trees do not, for example. Steinitz's theorem says that the polyhedral graphs formed from convex polyhedra are precisely the finite 3-connected simple planar graphs. More generally, Euler's formula applies to any polyhedron whose faces are simple polygons that form a surface topologically equivalent to a sphere, regardless of its convexity.
Connected planar graphs with more than one edge obey the inequality 2"e" ≥ 3"f", because each face has at least three face-edge incidences and each edge contributes exactly two incidences. It follows via algebraic transformations of this inequality with Euler's formula "v" − "e" + "f" = 2 that for finite planar graphs the average degree is strictly less than 6. Graphs with higher average degree cannot be planar.
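A quick numeric check of the average-degree bound: a maximal planar graph has "e" = 3"v" − 6 edges, so its average degree 2"e"/"v" equals 6 − 12/"v", which approaches but never reaches 6 (the function name is our own).

```python
def avg_degree_maximal_planar(v):
    """Average degree 2e/v of a maximal planar graph, where e = 3v - 6."""
    return 2 * (3 * v - 6) / v

# 6 - 12/v: grows toward 6 with v, but stays strictly below it.
assert avg_degree_maximal_planar(4) == 3.0   # K4: 4 vertices, 6 edges
assert all(avg_degree_maximal_planar(v) < 6 for v in range(3, 10**4))
```

Since no planar graph has more edges than a maximal one, no planar graph can reach average degree 6.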
We say that two circles drawn in a plane "kiss" (or "osculate") whenever they intersect in exactly one point. A "coin graph" is a graph formed by a set of circles, no two of which have overlapping interiors, by making a vertex for each circle and an edge for each pair of circles that kiss. The circle packing theorem, first proved by Paul Koebe in 1936, states that a graph is planar if and only if it is a coin graph.
This result provides an easy proof of Fáry's theorem, that every simple planar graph can be embedded in the plane in such a way that its edges are straight line segments that do not cross each other. If one places each vertex of the graph at the center of the corresponding circle in a coin graph representation, then the line segments between centers of kissing circles do not cross any of the other edges.
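Building the coin graph of a given circle configuration is straightforward: connect two circles exactly when the distance between their centers equals the sum of their radii. A minimal sketch (with a floating-point tolerance for tangency):

```python
import math

def coin_graph(circles, tol=1e-9):
    """Build the coin (tangency) graph of a set of interior-disjoint
    circles, each given as (x, y, r): one vertex per circle, one edge
    per pair of circles that kiss (are externally tangent)."""
    edges = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            x1, y1, r1 = circles[i]
            x2, y2, r2 = circles[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if abs(d - (r1 + r2)) < tol:  # tangent: distance = sum of radii
                edges.append((i, j))
    return edges

# Three mutually kissing unit circles centred on an equilateral triangle of side 2:
circles = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (1.0, math.sqrt(3.0), 1.0)]
print(coin_graph(circles))  # [(0, 1), (0, 2), (1, 2)] -- the triangle K3
```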
The density "D" of a planar graph, or network, is defined as the ratio of the number of edges "E" to the number of possible edges in a network with "N" nodes, given for a planar graph by "D" = ("E" − "N" + 1)/(2"N" − 5). A completely sparse planar graph (a tree) has "D" = 0, while a completely dense (maximal) planar graph has "D" = 1.
A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property. All faces (including the outer one) are then bounded by three edges, explaining the alternative term plane triangulation. The alternative names "triangular graph" or "triangulated graph" have also been used, but are ambiguous, as they more commonly refer to the line graph of a complete graph and to the chordal graphs respectively. Every maximal planar graph on at least four vertices is 3-vertex-connected.
If a maximal planar graph has "v" vertices with "v" > 2, then it has precisely 3"v" − 6 edges and 2"v" − 4 faces.
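These two counts follow from Euler's formula together with 2"e" = 3"f" (every face is a triangle), and can be computed directly:

```python
def maximal_planar_counts(v):
    """Edge and face counts of a maximal planar graph on v > 2 vertices."""
    assert v > 2
    e = 3 * v - 6
    f = 2 * v - 4
    assert v - e + f == 2  # consistent with Euler's formula
    return e, f

print(maximal_planar_counts(4))   # (6, 4)   -- K4, the tetrahedron graph
print(maximal_planar_counts(12))  # (30, 20) -- the icosahedron graph
```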
Apollonian networks are the maximal planar graphs formed by repeatedly splitting triangular faces into triples of smaller triangles. Equivalently, they are the planar 3-trees.
Strangulated graphs are the graphs in which every peripheral cycle is a triangle. In a maximal planar graph (or more generally a polyhedral graph) the peripheral cycles are the faces, so maximal planar graphs are strangulated. The strangulated graphs include also the chordal graphs, and are exactly the graphs that can be formed by clique-sums (without deleting edges) of complete graphs and maximal planar graphs.
Outerplanar graphs are graphs with an embedding in the plane such that all vertices belong to the unbounded face of the embedding. Every outerplanar graph is planar, but the converse is not true: "K"4 is planar but not outerplanar. A theorem similar to Kuratowski's states that a finite graph is outerplanar if and only if it does not contain a subdivision of "K"4 or of "K"2,3. The above is a direct corollary of the fact that a graph "G" is outerplanar if and only if the graph formed from "G" by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph.
A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For "k" > 1 a planar embedding is "k"-outerplanar if removing the vertices on the outer face results in a ("k" − 1)-outerplanar embedding. A graph is "k"-outerplanar if it has a "k"-outerplanar embedding.
A Halin graph is a graph formed from an undirected plane tree (with no degree-two nodes) by connecting its leaves into a cycle, in the order given by the plane embedding of the tree. Equivalently, it is a polyhedral graph in which one face is adjacent to all the others. Every Halin graph is planar. Like outerplanar graphs, Halin graphs have low treewidth, making many algorithmic problems on them more easily solved than in unrestricted planar graphs.
An apex graph is a graph that may be made planar by the removal of one vertex, and a "k"-apex graph is a graph that may be made planar by the removal of at most "k" vertices.
A 1-planar graph is a graph that may be drawn in the plane with at most one simple crossing per edge, and a "k"-planar graph is a graph that may be drawn with at most "k" simple crossings per edge.
A map graph is a graph formed from a set of finitely many simply-connected interior-disjoint regions in the plane by connecting two regions when they share at least one boundary point. When at most three regions meet at a point, the result is a planar graph, but when four or more regions meet at a point, the result can be nonplanar.
A toroidal graph is a graph that can be embedded without crossings on the torus. More generally, the genus of a graph is the minimum genus of a two-dimensional surface into which the graph may be embedded; planar graphs have genus zero and nonplanar toroidal graphs have genus one.
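On a surface of genus "g", Euler's formula generalizes to "v" − "e" + "f" = 2 − 2"g", which for simple graphs yields "e" ≤ 3"v" − 6 + 6"g"; rearranging gives a quick lower bound on the genus of a graph. An illustrative sketch:

```python
import math

def genus_lower_bound(v, e):
    """Lower bound on the genus of a simple connected graph with v >= 3,
    from the generalized bound e <= 3v - 6 + 6g on a genus-g surface."""
    return max(0, math.ceil((e - 3 * v + 6) / 6))

print(genus_lower_bound(4, 6))   # 0 -- K4 is planar
print(genus_lower_bound(5, 10))  # 1 -- K5 is nonplanar but toroidal
print(genus_lower_bound(7, 21))  # 1 -- K7 embeds on the torus
```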
Any graph may be embedded into three-dimensional space without crossings. However, a three-dimensional analogue of the planar graphs is provided by the linklessly embeddable graphs, graphs that can be embedded into three-dimensional space in such a way that no two cycles are topologically linked with each other. In analogy to Kuratowski's and Wagner's characterizations of the planar graphs as being the graphs that do not contain "K"5 or "K"3,3 as a minor, the linklessly embeddable graphs may be characterized as the graphs that do not contain as a minor any of the seven graphs in the Petersen family. In analogy to the characterizations of the outerplanar and planar graphs as being the graphs with Colin de Verdière graph invariant at most two or three, the linklessly embeddable graphs are the graphs that have Colin de Verdière invariant at most four.
An upward planar graph is a directed acyclic graph that can be drawn in the plane with its edges as non-crossing curves that are consistently oriented in an upward direction. Not every planar directed acyclic graph is upward planar, and it is NP-complete to test whether a given graph is upward planar.
The asymptotic for the number of (labeled) planar graphs on "n" vertices is "g"·"n"^(−7/2)·γ^"n"·"n"!, where γ ≈ 27.22687 is the planar graph growth constant and "g" is an analytic constant.
Almost all planar graphs have an exponential number of automorphisms.
The number of unlabeled (non-isomorphic) planar graphs on "n" vertices is between 27.2^"n" and 30.06^"n".
The Four Color Theorem states that every planar graph is 4-colorable (i.e. 4-partite).
Fáry's theorem states that every simple planar graph admits an embedding in the plane such that all edges are straight line segments which don't intersect. A universal point set is a set of points such that every planar graph with "n" vertices has such an embedding with all vertices in the point set; there exist universal point sets of quadratic size, formed by taking a rectangular subset of the integer lattice. Every simple outerplanar graph admits an embedding in the plane such that all vertices lie on a fixed circle and all edges are straight line segments that lie inside the disk and don't intersect, so "n"-vertex regular polygons are universal for outerplanar graphs.
Given an embedding "G" of a (not necessarily simple) connected graph in the plane without edge intersections, we construct the dual graph "G"* as follows: we choose one vertex in each face of "G" (including the outer face) and for each edge "e" in "G" we introduce a new edge in "G"* connecting the two vertices in "G"* corresponding to the two faces in "G" that meet at "e". Furthermore, this edge is drawn so that it crosses "e" exactly once and that no other edge of "G" or "G"* is intersected. Then "G"* is again the embedding of a (not necessarily simple) planar graph; it has as many edges as "G", as many vertices as "G" has faces and as many faces as "G" has vertices. The term "dual" is justified by the fact that "G"** = "G"; here the equality is the equivalence of embeddings on the sphere. If "G" is the planar graph corresponding to a convex polyhedron, then "G"* is the planar graph corresponding to the dual polyhedron.
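The dual construction can be sketched combinatorially when the embedding is described by its face cycles. A minimal sketch (it omits the self-loops that bridges would require, since a bridge has the same face on both sides):

```python
from itertools import combinations

def dual_graph(faces):
    """Given the faces of a planar embedding as vertex cycles (including
    the outer face), return the dual's edges as pairs of face indices:
    one dual edge for each primal edge shared by two faces."""
    def boundary(face):
        # Undirected edges around the cycle, as frozensets of endpoints.
        return {frozenset((face[k], face[(k + 1) % len(face)]))
                for k in range(len(face))}

    bounds = [boundary(f) for f in faces]
    dual_edges = []
    for i, j in combinations(range(len(faces)), 2):
        # Two faces get one dual edge per primal edge they share.
        for _ in bounds[i] & bounds[j]:
            dual_edges.append((i, j))
    return dual_edges

# Planar embedding of K4 (the tetrahedron): three inner triangles + outer face.
k4_faces = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]
print(len(dual_graph(k4_faces)))  # 6 -- as many dual edges as K4 has edges
```

The tetrahedron is self-dual, consistent with the property "G"** = "G".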
Duals are useful because many properties of the dual graph are related in simple ways to properties of the original graph, enabling results to be proven about graphs by examining their dual graphs.
While the dual constructed for a particular embedding is unique (up to isomorphism), graphs may have different (i.e. non-isomorphic) duals, obtained from different (i.e. non-homeomorphic) embeddings.
A "Euclidean graph" is a graph in which the vertices represent points in the plane, and the edges are assigned lengths equal to the Euclidean distance between those points; see Geometric graph theory.
A plane graph is said to be "convex" if all of its faces (including the outer face) are convex polygons. A planar graph may be drawn convexly if and only if it is a subdivision of a 3-vertex-connected planar graph.
Scheinerman's conjecture (now a theorem) states that every planar graph can be represented as an intersection graph of line segments in the plane.
The planar separator theorem states that every "n"-vertex planar graph can be partitioned into two subgraphs of size at most 2"n"/3 by the removal of O(√"n") vertices. As a consequence, planar graphs also have treewidth and branch-width O(√"n").
For two planar graphs with "v" vertices, it is possible to determine in time O("v") whether they are isomorphic or not (see also graph isomorphism problem).
The meshedness coefficient of a planar graph normalizes its number of bounded faces (the same as the circuit rank of the graph, by Mac Lane's planarity criterion) by dividing it by 2"n" − 5, the maximum possible number of bounded faces in a planar graph with "n" vertices. Thus, it ranges from 0 for trees to 1 for maximal planar graphs.
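The meshedness coefficient can be computed directly from the vertex and edge counts of a connected planar graph ("e" − "v" + 1 is the circuit rank):

```python
def meshedness(v, e):
    """Meshedness coefficient of a connected planar graph with v >= 3:
    the number of bounded faces (e - v + 1, the circuit rank) divided
    by the maximum possible, 2v - 5, attained by maximal planar graphs."""
    return (e - v + 1) / (2 * v - 5)

print(meshedness(10, 9))          # 0.0 -- a tree on 10 vertices
print(meshedness(10, 3 * 10 - 6)) # 1.0 -- a maximal planar graph on 10 vertices
```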
Word-representable planar graphs include triangle-free planar graphs and, more generally, 3-colourable planar graphs, as well as certain face subdivisions of triangular grid graphs, and certain triangulations of grid-covered cylinder graphs.
Pellucidar
Pellucidar is a fictional Hollow Earth invented by American writer Edgar Rice Burroughs for a series of action adventure stories. In a crossover event between Burroughs's series, there is a Tarzan story in which he travels into Pellucidar.
The stories initially involve the adventures of mining heir David Innes and his inventor friend Abner Perry after they use an "iron mole" to burrow 500 miles into the Earth's crust. Later protagonists include indigenous caveman Tanar and additional visitors from the surface world, notably Tarzan, Jason Gridley, and Frederich Wilhelm Eric von Mendeldorf und von Horst.
In Burroughs' concept, the Earth is a hollow shell with Pellucidar as the internal surface of that shell. Pellucidar is accessible to the surface world via a polar tunnel that allows passage between the inner and outer worlds; a rigid airship travels through it in the fourth book of the series. Although the inner surface of the Earth has a smaller total area than the outer, Pellucidar actually has a greater land area, as its continents mirror the surface world's oceans and its oceans mirror the surface world's continents.
Pellucidar's geography is peculiar: because of the concave curvature of its surface, there is no horizon. The farther away an object is, the higher it appears to be, until it is finally lost in atmospheric haze.
Pellucidar is lit by a miniature sun suspended at the center of the hollow sphere; it is thus perpetually overhead, giving a sensation of eternal noon everywhere in Pellucidar. The sole exception is the region directly beneath a tiny geostationary moon of the internal sun, which lies in perpetual eclipse and is known as the "Land of Awful Shadow". The moon has its own plant life and presumably animal life, implying that it either possesses its own atmosphere or shares Pellucidar's. Because the miniature sun never changes in brightness and never sets, there is no night or seasonal progression, and the natives have little concept of time. The events of the series suggest that time is elastic, passing at different rates in different areas of Pellucidar and varying even in single locations. Several characters from the outer world who have lived in Pellucidar appear to age slowly and exhibit considerable longevity, as revealed through their interactions with visitors from the outer world, where time passes normally.
Pellucidar is populated by primitive civilizations and prehistoric creatures, including dinosaurs. The region in which Innes and Perry initially find themselves is ruled by the Mahars, a species of intelligent flying reptiles resembling "Rhamphorhynchus" with vast psychic powers. The Mahars use telekinesis on the neighboring tribes of Stone Age humans as a way of securing their territory. Eventually, the two explorers unite the tribes to overthrow the Mahars' reign and establish a human "Empire of Pellucidar" in its place.
While the Mahars are the dominant species in the Pellucidar novels, these creatures are usually confined to their handful of cities. Before their downfall, the Mahars used Sagoths (a race of gorilla-men who speak the same language as Tarzan's Mangani) to enforce their rule over any tribes who disobeyed their orders. Though Burroughs's novels suggest that the Mahars' domain is limited to one relatively small region of Pellucidar, John Eric Holmes' authorized sequel "Mahars of Pellucidar" indicates there are other areas ruled by Mahars.
Beyond the Mahars' domain exist independent human cultures, many of them at a Stone Age level of development. More technologically advanced exceptions include the Korsars (corsairs), a maritime raiding society descended from surface-world Barbary pirates, and the Xexots, an indigenous Bronze Age civilization. All of the human inhabitants of Pellucidar share a common worldwide language.
Various animals reside in Pellucidar. Many of Pellucidar's fauna consist of prehistoric creatures, which are extinct on the surface world. However, some animals are creations of Edgar Rice Burroughs himself. They are listed below by outer world name (if known), Pellucidarian name (if known), and the novel in which they first appear, along with any relevant comments.
Pellucidar is also inhabited by enclaves of various non-human or semi-human races . Among the known races and tribes in Pellucidar are:
John Eric Holmes's "Mahars of Pellucidar" was a sequel to Burroughs' Pellucidar novels authorized by the Burroughs estate. Holmes's follow-up novel, "Red Axe of Pellucidar", reportedly ready for print in 1980, was blocked by the estate and only saw print much later in a limited private edition.
When DC Comics had the rights to the Burroughs properties in the early 1970s, it published a comic book adaptation of "At the Earth's Core" that ran in "Korak, Son of Tarzan" #46, then moved to "Weird Worlds" #1–5, then continued with an adaptation of "Pellucidar" in #6–7. Another Pellucidar story appeared in "Tarzan Family" #66. Dark Horse Comics will reprint this in trade paperback in 2017.
Pellucidar has appeared in one movie adaptation. The first novel was filmed as "At the Earth's Core" (1976), directed by Kevin Connor with Doug McClure as David Innes and Peter Cushing as Abner Perry.
The 1996 pilot of the TV series "" also features Pellucidar, as well as the character Jana from the novel "Tarzan at the Earth's Core". This story also features a race of Mahars who are able to transform into humanoid form. The 1996 novelization by R. A. Salvatore, based on the teleplay for the pilot of the series, features Pellucidar in the later part of the story. The story is inspired by "The Return of Tarzan" and "Tarzan at the Earth's Core".
Pellucidar appears in a few episodes of the Disney cartoon series "The Legend of Tarzan", loosely inspired by "Tarzan at the Earth's Core". In the show, however, Pellucidar is merely described as being a region below Africa where dinosaurs still live. None of the characteristics of it described in the novels are seen. In the episode "Tarzan and the Hidden World", Tarzan leads Professor Porter into Pellucidar so he can become famous before his rival, Professor Philander, who has also arrived in Africa. Professor Porter accidentally steals an egg from a T-rex and the dinosaur retaliates by chasing them. Philander's photographic proof of Pellucidar is ruined by a monkey who took photos with his camera. Pellucidar is mentioned again in "Tarzan and the Beast From Below". The episode revolves around some Velociraptors which escaped from Pellucidar and scare Terk.
Pellucidar appears in the "Tarzan, Lord of the Jungle" episode "Tarzan at the Earth's Core".
Pellucidar is revisited by Tarzan and is the central location of the Dark Horse Comics crossover "Tarzan vs. Predator: At the Earth's Core", where Tarzan faces off against the alien Predators in Pellucidar.
A tribute story, "Maureen Birnbaum at the Earth's Core", appeared in "Maureen Birnbaum, Barbarian Swordsperson".
Pellucidar was the major inspiration for Lin Carter's "Zanthodon" novels of the late 1970s and early 1980s, set in the vast cavern of Zanthodon beneath the Sahara desert.
The Hollow Earth milieu of Skartaris in the "Warlord" series of comic books by Mike Grell, published from 1976 to 1989, is essentially a translation of Pellucidar into the graphic medium, with the admixture of magic and elements of the Atlantis myth.
The hollow interior of Earth seen in the 2008 Asylum film "Journey to the Center of the Earth" bears some similarity to Pellucidar, although the film was intended as a film adaptation of a novel by Jules Verne.
The Hollow World of the fictional "Dungeons & Dragons" setting of Mystara shares many concepts from Pellucidar, such as the polar opening(s), the central sun, the floating moon(s), and the primitive cultures living in the internal surface.
In James Blaylock's "The Digging Leviathan" (1984), a pair of rival scientific teams compete to reach Pellucidar. However, the story concludes before their goal is attained. Blaylock's "Zeuglodon" revisits the Pellucidar theme, when a group of children attempt to rescue Giles Peach, one of the characters traveling to Pellucidar in "The Digging Leviathan".
In Robert A. Heinlein's "Number of the Beast", the protagonists visit an inside-out world in their continua craft and discuss whether they've reached Pellucidar.
In John Crowley's "Little, Big" (1981), a drug named Pellucidar is mentioned and appears to have an exhilarating and even aphrodisiac effect.
During the initial explorations of Lechuguilla Cave in the late 1980s, a chamber was named "Pellucidar" in honor of these stories.
In Philip José Farmer's "Riders of the Purple Wage", there is a concept known as "the Pellucidar Breakthrough".
In Roderick Gordon's "Tunnels" series, the Garden of the Second Sun is strongly based on Pellucidar.
The Hollow Earth concept was used by Vladimir Obruchev in his novel "Plutonia", published in 1924 and likewise inhabited by ancient life forms. Given his career as a scientist and geologist, Obruchev was not a proponent of the Hollow Earth concept; he nevertheless used his novel to describe Pleistocene, Jurassic, and Carboniferous fauna.
Peace Now
Peace Now ("Shalom Achshav") is a non-governmental organization and liberal advocacy and activist group in Israel with the aim of promoting a two-state solution to the Israeli–Palestinian conflict.
Peace Now was formed during the 1978 Israeli-Egyptian peace talks between Israeli Prime Minister Menachem Begin and Egyptian President Anwar Sadat at a time when the talks looked close to collapse.
348 reserve officers and soldiers from Israeli army combat units published an open letter to the Prime Minister of Israel in which they called on the Israeli government not to squander the historic opportunity for peace between the two nations. The officers argued that Israel could not retain its Jewish-democratic character while it continued to perpetuate its rule over one million Arabs. They concluded that Israel's security needs could only be met through peace between Israel and its neighbours via a negotiated agreement. Subsequently, tens of thousands of Israelis petitioned the Israeli government in support of the letter, and as a result the movement known as Peace Now was born.
Peace Now again came to prominence following Israel's 1982 invasion of Lebanon, and in particular the massacre of Palestinian refugees by Christian Lebanese Phalangists at the Israeli-controlled Sabra and Shatila refugee camps. On 25 September 1982 Peace Now held a mass protest in Tel Aviv to pressure the government to establish a national inquiry commission to investigate the massacres, and to call for the resignation of Defence Minister Ariel Sharon. The demonstration was attended by 400,000 people, approximately 10% of Israel's population at the time.
Subsequently, the Israeli government established the Kahan Commission on 28 September 1982. Four months later the commission found Israel to be indirectly responsible for the massacres and recommended Ariel Sharon's resignation.
Israeli Prime Minister Menachem Begin at first refused to adopt the Kahan Commission's recommendations. Consequently, Peace Now decided to hold a demonstration on 10 February 1983 that marched from Zion Square towards the Prime Minister's residence in Jerusalem in order to pressure the government to do so.
In the wake of the Sabra and Shatila massacre, Peace Now led a march from Zion Square and moved towards the Prime Ministers' Office in Jerusalem on 10 February 1983. During the march Peace Now demonstrators encountered a group of right-wing activists. In the ensuing confrontation, Yona Avrushmi tossed a hand-grenade into the crowd, killing Emil Grunzweig, a prominent Peace Now activist, and injuring several others.
Yona Avrushmi was duly arrested, convicted of murder and given a mandatory life sentence, which was commuted to 27 years by President Ezer Weizman in 1995. Avrushmi was released on 26 January 2011.
As a result of mounting public pressure on Menachem Begin to adopt the Kahan Commission's recommendations Ariel Sharon agreed to step down as Defence Minister. However, he remained in the government as a minister without portfolio.
In 1988 Yasser Arafat, Chairman of the PLO, publicly accepted United Nations Security Council Resolution 242 at the Palestine National Council (PNC) in Algiers. For the first time, Arafat accepted Israel's existence according to the borders set out in United Nations General Assembly Resolution 181, and rejected and condemned the use of terrorism in all its forms. In reaction, Peace Now led a demonstration of more than 100,000 people, calling for immediate Israeli–Palestinian negotiations for the purposes of attaining peace between the two parties. Following this, Peace Now led the Hands Around Jerusalem event, in which 25,000 Israelis and Palestinians linked hands to encircle the walls of the Old City of Jerusalem in a chain of peace.
In part due to the Israeli-Palestinian discourse engendered by Peace Now and its activists, Israeli Prime Minister Yitzchak Rabin and Yasser Arafat succeeded in signing the Declaration of Principles/Oslo Accords on the lawn of the White House on 13 September 1993. Peace Now was the first Israeli organisation to meet with the PLO, at a time when such an undertaking was deemed illegal by the Israeli government.
The signing of the Oslo accords marked a milestone in Israeli-Palestinian relations, as for the first time both peoples recognised their counterpart's right to exist. Furthermore, the Oslo Peace Process was initiated; this process was a framework for future negotiations which aimed to resolve the Israel-Palestine conflict within a five-year period according to the logic of the 'two state solution', as set out in UN General Assembly Resolution 181.
Peace Now supported the Oslo Accords, and since then it has called upon all Israeli administrations to date to adhere to the terms of interim agreements which were agreed upon as part of the Oslo Peace Process.
Since the outbreak of the violent Second Intifada in September 2000, Peace Now has arguably lost a certain degree of the Israeli public's support. While the First Intifada was largely a popular movement on the part of the Palestinian public, the Second Intifada consisted of far more violent confrontations between Palestinian militants and the IDF, Israeli settlers within the West Bank and the Gaza Strip, and Israeli civilians. According to the Israeli Ministry of Foreign Affairs, 132 Israeli individuals were killed by Palestinian bomb/suicide attacks within Israel proper between 2000 and 2004.
Despite the arguable decline in the Israeli public's support for the Oslo Peace Process, Peace Now succeeded in leading a demonstration of between 60,000 and 100,000 people in May 2002, after Israeli military forces launched the large-scale Operation Defensive Shield in the West Bank on 29 March, and as Prime Minister Ariel Sharon was mobilizing reserve forces for a possible military invasion of Gaza. The demonstration was held under the banner "Get Out of The Territories". According to Peace Now itself, shortly after the outbreak of the Second Intifada it was instrumental in creating the Israeli Peace Coalition, which later evolved into the Israeli–Palestine Peace Coalition. Its main objective is to end the Israeli occupation of Palestinian lands and to achieve a just, lasting and comprehensive peace based on a two-state solution.
Peace Now was a key advocate of Israel's 2004 Disengagement Plan. Peace Now led the 'Mate ha-Rov' ("majority camp") demonstration on 14 May 2004 in Tel Aviv, in order to pressure the Israeli government to adopt the Disengagement Plan. However, support for the Disengagement Plan faced contention within the Peace Now camp over its unilateral nature. Peace Now decided it was most important for Israel to withdraw from the Gaza Strip, regardless of the manner in which this withdrawal was to take place.
One of the most important activities of Peace Now is its ongoing Settlement Watch project, which monitors and protests against the building of Jewish settlements in the West Bank, including East Jerusalem. Dror Etkes headed this project until 2007, when he was replaced by Hagit Ofran. The project focuses on the following issues with regard to the settlements:-
Peace Now's Settlement Watch project has resulted in the following developments:-
Similarly, the movement continues activity on the ground in support of evacuation through demonstrations, vigils and other campaign activity. Activities include:-
According to leaked documents released by WikiLeaks in April 2011, Peace Now has regularly updated both the U.S. government and the Israeli Ministry of Defense on ongoing settlement construction in the West Bank. The documents indicate that the Defence Ministry used Peace Now's services to monitor West Bank settlement construction. In 2006, Peace Now director Yariv Oppenheimer reportedly urged the U.S. to pressure Israel into evacuating West Bank outposts, according to a leaked U.S. diplomatic cable. Oppenheimer was quoted as saying that Israel might "evacuate a few outposts to show the U.S. that it is doing something, but in exchange it is trying to co-opt the settlers by retroactively approving some outposts and giving them a freer hand in building in the West Bank."
In a report issued in November 2006, Peace Now wrote that 38.8 percent of the land set aside for Israeli settlements, outposts and industrial land in the West Bank was privately owned by Palestinians. This included 86.4 percent of the land set aside for Ma'ale Adumim and 35.1 percent of Ariel's land. After successfully appealing to a court for access to a government database operated by the Israeli Civil Administration, Peace Now reduced its overall estimate to 32.4 percent and the estimate for Ma'ale Adumim to 0.5 percent. A spokesman for the Civil Administration replied that the new report was still "inaccurate in many places".
Peace Now seeks to educate the Israeli youth on the nature of, and solution to, the Israeli-Palestinian conflict. In order to achieve this, the organisation
Peace Now seeks to promote its various causes via an active presence on such social networking sites as Facebook. Against the background of the 'Boycott Laws' which were being passed through the Knesset in July 2011, the popular Israeli internet site 'Horim B’reshet' made a survey of the most popular Israeli protest Facebook pages, of which Peace Now's page ranked 5th.
Peace Now aims to educate leading decision makers on the perceived counterproductive effects the settlements have on the attainment of the two-state solution.
One such tour was conducted by Peace Now in August 2009 and attended by such figures as
MKs Ophir Pines-Paz (Labor), Daniel Ben-Simon (Labor) and Chaim Oron (Meretz Chairman).
Peace Now organises demonstrations and rallies in support of peace and human rights:-
Due to Peace Now's continued opposition to the development and construction of Jewish settlements in the West Bank/East Jerusalem, the organisation and several of its leading activists have been subject to 'price-tag' attacks and death-threats.
A 'price-tag' attack is defined as a violent act committed against Palestinians, Israeli security forces and/or anti-settlement organisations by pro-settlement advocates in retaliation for terrorist attacks on Israeli targets, government demolition of structures in West Bank settlements or curbs on Israeli settlement construction in the West Bank.
In response to the demolition of three homes in the Migron settlement (as a result of a petition submitted to the Israeli Supreme Court by Peace Now in 2006), right-wing demonstrators spray-painted 'Price Tag Migron', 'Revenge' and 'Death to Traitors' on the entrance to the residence of Hagit Ofran, the head of Peace Now's 'Settlement Watch' project, in early October 2011. Following the incident, a police investigation was opened. Approximately two months later, another 'price tag' attack was carried out, again at Hagit Ofran's residence.
At the 2011 Rabin commemoration rally in Tel Aviv, Hagit Ofran stated in reference to the recent 'price tag attacks':
"The graffiti was sprayed in my home, but the taunts are in all of our stairwells. The tag may have marked me, but we all pay the price. We must not fear. We are here, and we are many. We have a voice and we must raise it. And today we say to Benjamin Netanyahu: We are not afraid."
On 6 November 2011, Peace Now's Jerusalem office received a bomb threat. Police were called to the scene and the building was evacuated. The threat was later found to be a hoax. Following a brief investigation, Jerusalem District Police arrested a 21-year-old male resident of a settlement near Jerusalem who was suspected of vandalizing Peace Now offices in Jerusalem. Police also tried to ascertain whether the suspect was involved in the 'price-tag' attacks on Hagit Ofran's residence. A gag order was initially placed on the publication of his name and picture due to the “ongoing investigation” into the attacks. Once the investigation was complete, the gag order remained in effect, despite the suspect not being a minor. The order also applied to details about the suspect's parents, due to the politically sensitive nature of their occupation.
Although the suspect had been arrested two months previously for issuing death threats against Peace Now's Director General Yariv Oppenheimer and a bomb hoax at the organisation's Jerusalem office, he was released shortly afterwards.
Following court proceedings against the suspect, he was released to house arrest and forced to wear an electronic bracelet, yet his 'price-tag' activities continued. On 27 November 2011, it was reported that the unnamed individual issued death-threats (via email) against all of Peace Now's core team from his Jerusalem house. The gag order remained.
Peace Now has received funding from foreign states and international organizations for projects such as those monitoring the expansion of Israeli settlements in the West Bank. In 2008 Peace Now received funding from the Norwegian embassy, the British Foreign Office, a German foundation, and the Dutch Ministry of Foreign Affairs. According to Im Tirtzu, Peace Now received funding from the embassy of Norway in 2009, as well as from the United Kingdom and the Belgian government.
The Knesset passed a law in 2008 requiring Israeli organizations to publicize any foreign funding they receive; the law was aimed specifically at Peace Now. In 2011, the Knesset passed a further law requiring organizations to report quarterly on any foreign funding they receive. In November 2011, Benjamin Netanyahu's government began proceedings to introduce legislation that would place an ILS 20,000 (approx. $5,000) limit on what NGOs could receive from foreign governments, government-supported foundations and/or groups of governments (e.g. the European Union and the United Nations). Another bill, advanced by Avigdor Lieberman's Yisrael Beiteinu party, proposed a 45% tax on foreign government donations to organizations that do not receive Israeli state funding.
Individuals such as Prime Minister Benjamin Netanyahu, Foreign Minister Avigdor Lieberman and MKs Tzipi Hotovely, Ofir Akunis and Fania Kirshenbaum have supported the proposed legislation, arguing that it prevents foreign governments and organizations from unduly influencing Israel's domestic affairs. The legislation has encountered notable resistance both within Israel and abroad. The governments of the United Kingdom and the United States warned Benjamin Netanyahu that the adoption of such measures would harm Israel's standing in the West as a democratic country.
Peace Now received a certificate of merit from the Israeli government and IDF for support given to IDF reserve soldiers.
The certificate was issued as part of a competition which honours organizations, businesses and companies whose workers serve as reservists and are supported by their workplace. The certificate was personally signed by Ehud Barak and Chief Reserve Officer Brigadier General Shuki Ben-Anat. It read:
"For your activity and care for employees serving in reserve duty. Your activity is commendable and greatly contributes to the IDF's fortitude and the State of Israel's security."
Notable individuals such as American actor Leonard Nimoy, American authors Michael Chabon and Ayelet Waldman, and Israeli authors David Grossman and Amos Oz support Peace Now's objectives. Author Mordechai Bar-On described Peace Now as a key instrument for peace. Actor Mandy Patinkin expressed his support for Peace Now during a visit to Israel in 2012.
Peace Now's logo was designed by Israeli graphic designer David Tartakover in 1978. The logo emerged from a poster created by Tartakover for a mass rally, held in what is now Rabin Square in Tel Aviv on 1 April 1978, titled "Peace Now." It became the name of the organization, and was used on the first political bumper sticker in Israel. It is still one of Israel's most popular stickers. Tartakover, commenting in 2006, said "The movement activists liked the logo, [b]ut they thought there should also be a symbol. I told them it wasn't needed - this is the symbol. It took time until they understood that this was the first political sticker in Israel."
Palestine Liberation Organization
The Palestine Liberation Organization (PLO) is an organization founded in 1964 with the purpose of the "liberation of Palestine" through armed struggle, with much of its violence aimed at Israeli civilians. It is recognized as the "sole legitimate representative of the Palestinian people" by over 100 states with which it holds diplomatic relations, and has enjoyed observer status at the United Nations since 1974. The PLO was considered by the United States and Israel to be a terrorist organization until the Madrid Conference in 1991. In 1993, the PLO recognized Israel's right to exist in peace, accepted UN Security Council resolutions 242 and 338, and rejected "violence and terrorism". In response, Israel officially recognized the PLO as the representative of the Palestinian people. However, the PLO has employed violence in the years since 1993, particularly during the 2000–2005 Second Intifada. On 29 October 2018, the Palestinian Central Council suspended the recognition of Israel and halted security and economic coordination in all its forms with it.
At its first summit meeting in Cairo in 1964, the Arab League initiated the creation of an organization representing the Palestinian people. The Palestinian National Council convened in Jerusalem on 28 May 1964, and at the conclusion of this meeting the PLO was founded, on 2 June 1964. Its stated goal was the "liberation of Palestine" through armed struggle.
The ideology of the PLO was formulated in the founding year 1964 in the Palestinian National Covenant. The document is a combative anti-Zionist statement dedicated to the "restoration of the Palestinian homeland". It has no reference to religion. In 1968, the Charter was replaced by a comprehensively revised version.
Until 1993, armed struggle was the only option the PLO promoted. With the signing of the Oslo Accords, negotiation and diplomacy became the only official policy. In April 1996, a large number of articles that were inconsistent with the Oslo Accords were wholly or partially nullified.
At the core of the PLO's ideology is the belief that Zionists unjustly expelled the Palestinians from Palestine and established a Jewish state in its place under the pretext of historic Jewish ties with Palestine. The PLO demanded that Palestinian refugees be allowed to return to their homes. This is expressed in the National Covenant:
Article 2 of the Charter states that ″Palestine, with the boundaries it had during the British mandate, is an indivisible territorial unit″, meaning that there is no place for a Jewish state. This article was amended in 1996 to conform to the Oslo Accords.
Article 20 states: ″The Balfour Declaration, the Mandate for Palestine, and everything that has been based upon them, are deemed null and void. Claims of historical or religious ties of Jews with Palestine are incompatible with the facts of history and the true conception of what constitutes statehood. Judaism, being a religion, is not an independent nationality. Nor do Jews constitute a single nation with an identity of its own; they are citizens of the states to which they belong″. This article was nullified in 1996.
Article 3 reads: ″The Palestinian Arab people possess the legal right to their homeland and have the right to determine their destiny after achieving the liberation of their country in accordance with their wishes and entirely of their own accord and will″.
The PLO has always labelled the Palestinian people as Arabs. This was a natural consequence of the fact that the PLO was an offshoot of the Arab League; it also had a tactical element, helping to retain the backing of Arab states. Over the years, the Arab identity remained the stated nature of the Palestinian State, a reference to the ″Arab State″ envisioned in the UN Partition Plan.
The PLO and its dominant faction Fatah are often contrasted with more religiously oriented factions such as Hamas and the Palestinian Islamic Jihad (PIJ). All, however, represent a predominantly Muslim population. Practically the whole population of the Territories is Muslim, most of them Sunni. Only some 50,000 (ca 1%) of the 4.6 million Palestinians in the occupied Palestinian territories (OPT) are Palestinian Christians.
The National Charter has no reference to religion. Under President Arafat, the Fatah-dominated Palestinian Authority adopted the 2003 Amended Basic Law, which stipulates Islam as the sole official religion in Palestine and the principles of Islamic sharia as a principal source of legislation. The draft Constitution, which never materialized, contains the same provisions. At the time, the Palestinian Legislative Council (PLC), the unicameral legislature of the Palestinian Authority, elected by the Palestinian residents of the Palestinian territories of the West Bank and Gaza Strip, did not include a single Hamas member. The draft Constitution was formulated by the ″Constitutional Committee″, appointed with the approval of the PLO.
The PLO incorporates a range of generally secular ideologies of different Palestinian movements "committed to the struggle for Palestinian independence and liberation," hence the name of the organization. Formally, it is an umbrella organization that includes "numerous organizations of the resistance movement, political parties, and popular organizations." From the beginning, the PLO was designed as a government in exile, with a parliament, the Palestine National Council (PNC), chosen by the Palestinian people as the highest authority in the PLO, and an executive government (EC) elected by the PNC. In practice, however, the organization was rather hierarchical, with a military-like character suited to its function as a liberation organization pursuing the "liberation of Palestine".
Besides the Palestinian National Charter, which describes the ideology of the PLO, a constitution named the "Fundamental Law" was adopted, which dictates the inner structure of the organization and the representation of the Palestinian people. A draft Constitution was written in 1963 to govern the PLO until free general elections could be held among all the Palestinians in all the countries in which they resided. The Constitution was revised in 1968.
The Palestinian National Council has 740 members and the Executive Committee (EC) has 18 members. The Palestinian Central Council (PCC), established by the PNC in 1973, is the second leading body of the PLO. The PCC consists of 124 members from the PLO Executive Committee, the PNC, the PLC and other Palestinian organizations. The EC includes 15 representatives of the PLC. The PCC functions as an intermediary body between the PNC and the EC, making policy decisions when the PNC is not in session. The PCC is elected by the PNC and chaired by the PNC speaker.
The PNC serves as the parliament for all Palestinians inside and outside of the Occupied Palestinian Territory, including Jerusalem. The PLO is governed internally by its "Fundamental Law", which describes the powers and the relations between the organs of the PLO.
Ahmad Shukeiri was the first Chairman of the PLO Executive Committee from 1964 to 1967. In 1967, he was replaced by Yahia Hammuda. Yasser Arafat occupied the function from 1969 until his death in 2004. He was succeeded by Mahmoud Abbas (also known as Abu Mazen).
According to an internal PLO document, the current PNC remains in function if elections are not possible. In absence of elections, most of the members of the PNC are appointed by the Executive Committee. The document further states that "the PNC represents all sectors of the Palestinian community worldwide, including numerous organizations of the resistance movement, political parties, popular organizations and independent personalities and figures from all sectors of life, including intellectuals, religious leaders and businessmen".
As of 2015, no elections had been held for many years, neither for the PNC nor for the EC, the PCC or the President of the State of Palestine. The Executive Committee formally has 18 members, including its Chairman, but in recent years many of its seats have remained vacant. Moreover, Hamas, alongside Fatah the largest representative of the inhabitants of the Palestinian Territories, is not represented in the PLO at all. The results of the last parliamentary elections for the PLC, held in the Territories in 2006, which Hamas won decisively despite not even being a member of the PLO, "underlined the clear lack of a popular mandate by the PLO leadership", according to PASSIA. Individual elected PLC members representing Hamas, however, are automatically members of the PNC.
The representative status of the PLO has often been challenged. In 2011, for example, it was questioned by a group of Palestinian lawyers, jurists and legal scholars on the grounds of the lack of elections. They questioned the PLO's legitimacy to alter the status and role of the Organisation within the UN, and demanded immediate and direct elections to the Palestine National Council to ″activate representative PLO institutions in order to preserve, consolidate, and strengthen the effective legal representation of the Palestinian people as a whole″ before any change of status within the UN.
The 1993–1995 Oslo Accords deliberately detached the Palestinian population in the Occupied Palestinian Territories from the PLO and the Palestinians in exile by creating a Palestinian Authority (PA) for the Territories. A separate parliament and government were established. Mahmoud Abbas was one of the architects of the Oslo Accords.
Although many in the PLO opposed the Oslo Agreements, the Executive Committee and the Central Council approved the Accords. This marked the beginning of the PLO's decline, as the PA came to replace the PLO as the prime Palestinian political institution. Political factions within the PLO that had opposed the Oslo process were marginalized. The PLO resurfaced only during the Hamas-led PA government of 2006–2007. After Hamas took over Gaza in 2007, Abbas issued a decree suspending the PLC and some sections of the Palestinian Basic Law, and appointing Salam Fayyad as Prime Minister.
The PLO managed to overcome the separation by uniting power over the PLO and the PA in one individual, Yasser Arafat. In 2002, Arafat simultaneously served as Chairman of the PLO Executive Committee, Chairman of Fatah (the dominant faction within the PLO), and President of the Palestinian National Authority. He also controlled the Palestinian National Security Forces.
On 4 February 1969, Fatah founder Arafat was elected Chairman of the PLO in Cairo. Since then, Fatah has been the dominant faction within the PLO, a position it still held in 2015.
Under pressure from the international community led by Israel and the US, and from inside his own Fatah party, Arafat transferred some of his strongly centralized power in 2003, causing strong tensions within the Palestinian leadership. Arafat appointed Mahmoud Abbas as prime minister, but this resulted in disputes over the transfer of tasks and responsibilities. Abbas was strongly supported by the US and the international community because he was expected to be more willing to make far-reaching concessions to Israel. While Arafat retained most of his power and a power struggle within Fatah continued, the leadership was criticised for corruption and nepotism.
After Arafat's death, Abbas increasingly gained exclusive powers within both the PLO and the PA, as well as in Fatah, until he had acquired the same power previously held by Arafat. Abbas has been criticized for his autocratic rule and refusal to share powers and plans with other Palestinians. In the absence of a functioning parliament and Executive, he even began to issue his own laws. Sufian Abu Zaida, a senior representative of Abbas' Fatah faction and former Fatah minister of prisoner affairs, complained that Abbas had appointed himself chief judge and prosecutor, making a mockery of the Palestinian judicial system. Reports emerged of widespread corruption and nepotism within the Palestinian Authority. Only Hamas-ruled Gaza has a more or less functioning parliament.
With a "de facto" defunct parliament and Executive, Mahmoud Abbas increasingly gained exclusive powers within both the PLO and the PA, as well as in Fatah. When Abbas announced his resignation as Chairman of the Executive Committee in August 2015, together with nine other members, many Palestinians saw the move merely as an attempt to replace some members of the Executive Committee, or to force a meeting of the PNC while the resigning members remained in their posts until the PNC decided whether to accept the resignations. Met with fierce criticism from many Palestinian factions, the session of the PNC that was to approve the resignations was postponed indefinitely.
The Palestine Liberation Organization is recognized by the Arab League as "the "sole and legitimate" representative of the Palestinian people", and by the United Nations as "the representative of the Palestinian people".
The PLO was designated a terrorist organization by the United States in 1987, but in 1988 a presidential waiver was issued which permitted contact with the organization. Most of the rest of the world recognized the PLO as the legitimate representative of the Palestinian people from the mid-1970s onwards (after the PLO's admission to the UN as an observer).
Israel considered the PLO to be a terrorist organization until the Madrid Conference in 1991. In 1993, PLO chairman Yasser Arafat recognized the State of Israel in an official letter to its prime minister, Yitzhak Rabin. In response to Arafat's letter, Israel decided to revise its stance toward the PLO and to recognize the organization as the representative of the Palestinian people. This led to the signing of the Oslo Accords in 1993.
The United Nations General Assembly recognized the PLO as the "representative of the Palestinian people" in Resolution 3210 and Resolution 3236, and granted the PLO observer status on 22 November 1974 in Resolution 3237. On 12 January 1976 the UN Security Council voted 11–1 with 3 abstentions to allow the Palestine Liberation Organization to participate in a Security Council debate without voting rights, a privilege usually restricted to UN member states. It was admitted as a full member of the Asia group on 2 April 1986.
After the Palestinian Declaration of Independence the PLO's representation was renamed Palestine. On 7 July 1998, this status was extended to allow participation in General Assembly debates, though not in voting.
When President Mahmoud Abbas submitted an application for UN state membership in September 2011, Palestinian lawyers, jurists and legal scholars expressed their concern that the change of Palestine's status in the UN (since 1988 designated as "Palestine" in place of "Palestine Liberation Organization") could have negative implications for the legal position of the Palestinian people. They warned of the risk of fragmentation, whereby the State of Palestine would represent the people within the UN and the PLO would represent the people outside the UN, the latter including the Palestinians in exile, where refugees constitute more than half of the Palestinian people. They also feared the loss of representation of the refugees in the UN. In Resolution 67/19 of November 2012, Palestine was at last awarded non-member observer State status, but the General Assembly maintained the status of the PLO.
By September 2012, with its application for full membership stalled due to the inability of Security Council members to 'make a unanimous recommendation', the PLO had decided to pursue an upgrade in status from "observer entity" to "non-member observer state". On 29 November 2012, Resolution 67/19 passed, upgrading Palestine to "non-member observer State" status in the United Nations. The new status equates Palestine's status with that of the Holy See.
The "Palestine Information Office" was registered with the Justice Department of the United States as a foreign agent until 1968, when it was closed. It was reopened in 1989 as the "Palestine Affairs Center." The PLO Mission office in Washington, D.C. was opened in 1994 and represented the PLO in the United States. On 20 July 2010, the United States Department of State agreed to upgrade the status of the PLO Mission in the United States to "General Delegation of the PLO". In 2017, Secretary of State Rex Tillerson determined that the PLO Mission had broken a US law proscribing it from attempting to get the International Criminal Court to prosecute Israelis for offences against Palestinians, a violation carrying the penalty of closure. On 10 September 2018, National Security Advisor John Bolton announced the closure of the PLO Mission; Heather Nauert, a U.S. Department of State spokeswoman, cited as a reason Palestine's "push to have the International Criminal Court investigate Israel for possible war crimes."
Initially, as a guerrilla organization, the PLO carried out actions against Israel in the 1970s and early 1980s, regarded by Israel as terrorist activities and by the PLO as a war of liberation. In 1988, however, the PLO officially endorsed a two-state solution, contingent on terms such as making East Jerusalem the capital of the Palestinian state and giving Palestinians the right of return to land occupied by Palestinians prior to 1948, as well as the right to continue armed struggle until the end of "The Zionist Entity." In 1996, the PLO nullified, wholly or in part, those articles of its Charter which called for the destruction of Israel and for armed resistance.
Following the failure of the armies of Egypt and Syria to defeat Israel in the October 1973 Yom Kippur War, which broke the status quo that had existed since the June 1967 Six-Day War, the PLO began formulating a strategic alternative: it now intended to establish a "national authority" over every territory it was able to reconquer. From 1 to 9 June 1974, the Palestine National Council held its 12th meeting in Cairo. On 8 June, the Ten Point Program was adopted. The Program stated:
By "every part of Palestinian territory that is liberated" was implicitly meant the West Bank and Gaza Strip, albeit presented as an interim goal. The final goal remained "completing the liberation of all Palestinian territory" and "recover all their national rights and, first and foremost, their rights to return and to self-determination on the whole of the soil of their homeland". UN Resolution 242 also remained rejected.
While clinging to armed struggle as the prime means, the PLO no longer excluded peaceful means. Therefore, the "Ten Point Program" was considered the first attempt by the PLO at peaceful resolution. In October 1974, the Arab League proclaimed the PLO "the sole legitimate representative of the Palestinian people in any Palestinian territory that is liberated", and also the UN recognized the PLO. From then, the diplomatic road was prepared. On the other hand, the Program was rejected by more radical factions and eventually caused a split in the movement.
In 1987, the First Intifada broke out in the West Bank and Gaza Strip. The Intifada caught the PLO by surprise, and the leadership abroad could only indirectly influence the events. A new local leadership emerged, the Unified National Leadership of the Uprising (UNLU), comprising many leading Palestinian factions. After King Hussein of Jordan proclaimed the administrative and legal separation of the West Bank from Jordan in 1988, the Palestine National Council adopted the Palestinian Declaration of Independence in Algiers, proclaiming an independent State of Palestine. The declaration made reference to UN resolutions without explicitly mentioning Security Council Resolutions 242 and 338.
A month later, Arafat declared in Geneva that the PLO would support a solution of the conflict based on these Resolutions. Effectively, the PLO recognized Israel's right to exist within pre-1967 borders, with the understanding that the Palestinians would be allowed to set up their own state in the West Bank and Gaza. The United States accepted this clarification by Arafat and began to allow diplomatic contacts with PLO officials. The Proclamation of Independence did not lead to statehood, although over 100 states recognised the State of Palestine.
In 1993, the PLO secretly negotiated the Oslo Accords with Israel. The accords were signed on 20 August 1993, with a subsequent public ceremony in Washington D.C. on 13 September 1993 with Yasser Arafat and Yitzhak Rabin. The Accords granted Palestinians the right to self-government in the Gaza Strip and the city of Jericho in the West Bank through the creation of the Palestinian Authority. Yasser Arafat was appointed head of the Palestinian Authority and a timetable for elections was laid out. The headquarters of the PLO were moved to Ramallah on the West Bank.
The PLO has been sued in the United States by families of those killed or injured in attacks by Palestinians. One of those lawsuits was settled prior to going to trial, while another went to trial. The PLO was found liable and ordered to pay a judgment of US$655.5 million; however, that verdict was overturned on appeal for lack of US federal jurisdiction over actions committed overseas.
The PLO began its militant campaign at its inception, with an attack on Israel's National Water Carrier in January 1965. The group used guerrilla tactics to attack Israel from its bases in Jordan (including the West Bank), Lebanon, Egypt (the Gaza Strip), and Syria.
The most notable of what were considered terrorist acts committed by member organizations of the PLO were:
From 1967 to September 1970 the PLO, with passive support from Jordan, fought a war of attrition with Israel. During this time, the PLO launched artillery attacks on the moshavim and kibbutzim of Bet Shean Valley Regional Council, while fedayeen launched numerous attacks on Israeli forces. Israel raided the PLO camps in Jordan, including Karameh, withdrawing only under Jordanian military pressure.
This conflict culminated in Jordan's expulsion of the PLO to Lebanon in July 1971.
The PLO suffered a major reversal with the Jordanian assault on its armed groups in the events known as Black September in 1970. The Palestinian groups were expelled from Jordan, and during the 1970s the PLO was effectively an umbrella group of eight organizations headquartered in Damascus and Beirut, all devoted to armed struggle against Zionism or Israeli occupation, using methods which included direct clashes and guerrilla warfare against Israel. After Black September, the Cairo Agreement led the PLO to establish itself in Lebanon.
In the late 1960s, and especially after the expulsion of the Palestinian militants from Jordan in the Black September events of 1970–1971, Lebanon became the base for PLO operations. Palestinian militant organizations relocated their headquarters to South Lebanon and, relying on support in the Palestinian refugee camps, waged a campaign of attacks on the Galilee and on Israeli and Jewish targets worldwide. Growing Palestinian involvement in Lebanese politics and Israeli retaliation gradually worsened the situation.
By the mid-1970s, Arafat and his Fatah movement found themselves in a tenuous position. Arafat increasingly called for diplomacy, perhaps best symbolized by his Ten Point Program and his support for a UN Security Council resolution proposed in 1976 calling for a two-state settlement on the pre-1967 borders. But the Rejectionist Front denounced the calls for diplomacy, and a diplomatic solution was vetoed by the United States. In 1975, the increasing tensions between Palestinian militants and Christian militias exploded into the Lebanese Civil War, involving all factions. On 20 January 1976, the PLO took part in the Damour massacre in retaliation for the Karantina massacre. The PLO and the Lebanese National Movement attacked the Christian town of Damour, killing 684 civilians and forcing the remainder of the town's population to flee. In 1976 Syria joined the war by invading Lebanon, beginning the 29-year Syrian occupation of Lebanon, and in 1978 Israel invaded South Lebanon in response to the Coastal Road Massacre, carried out by Palestinian militants based in Lebanon.
The population in the West Bank and Gaza Strip saw Arafat as their best hope for a resolution to the conflict. This was especially so in the aftermath of the Camp David Accords of 1978 between Israel and Egypt, which the Palestinians saw as a blow to their aspirations to self-determination. Abu Nidal, a sworn enemy of the PLO since 1974, assassinated the PLO's diplomatic envoy to the European Economic Community, which in the Venice Declaration of 1980 had called for the Palestinian right of self-determination to be recognized by Israel.
Opposition to Arafat was fierce not only among radical Arab groups, but also among many on the Israeli right. This included Menachem Begin, who had stated on more than one occasion that even if the PLO accepted UN Security Council Resolution 242 and recognized Israel's right to exist, he would never negotiate with the organization. This contradicted the official United States position that it would negotiate with the PLO if the PLO accepted Resolution 242 and recognized Israel, which the PLO had thus far been unwilling to do. Other Arab voices had recently called for a diplomatic resolution to the hostilities in accord with the international consensus, including Egyptian leader Anwar Sadat on his visit to Washington, DC in August 1981, and Crown Prince Fahd of Saudi Arabia in his 7 August peace proposal; together with Arafat's diplomatic maneuver, these developments made Israel's argument that it had "no partner for peace" seem increasingly problematic. Thus, in the eyes of Israeli hard-liners, "the Palestinians posed a greater challenge to Israel as a peacemaking organization than as a military one".
After the appointment of Ariel Sharon to the post of Minister of Defense in 1981, the Israeli government policy of allowing political growth to occur in the occupied West Bank and Gaza strip changed. The Israeli government tried, unsuccessfully, to dictate terms of political growth by replacing local pro-PLO leaders with an Israeli civil administration.
In 1982, after an attack on a senior Israeli diplomat by Lebanon-based Palestinian militants, Israel invaded Lebanon on a much larger scale, in coordination with the Lebanese Christian militias, reaching Beirut and eventually ousting the PLO headquarters in June of that year. Low-level Palestinian insurgency in Lebanon continued in parallel with the consolidation of Shia militant organizations, but became a secondary concern to the Israeli military and other Lebanese factions. With the ousting of the PLO, the Lebanese Civil War gradually turned into a prolonged conflict, shifting from a mainly PLO-Christian confrontation to one involving all Lebanese factions, whether Sunni, Shia, Druze, or Christian.
In 1982, the PLO relocated to Tunis, Tunisia after it was driven out of Lebanon by Israel during the First Lebanon War. Following massive raids by Israeli forces in Beirut, it is estimated that 8,000 PLO fighters evacuated the city and dispersed.
On 1 October 1985, in Operation Wooden Leg, Israeli Air Force F-15s bombed the PLO's Tunis headquarters, killing more than 60 people.
It has been suggested that the Tunis period (1982–1991) was a low point in the PLO's history, leading up to the Oslo negotiations and the formation of the Palestinian Authority (PA). The PLO in exile was distant from a concentrated number of Palestinians and became far less effective. There was a significant reduction in the centres of research, political debate and journalistic endeavour that had sustained an energised public presence of the PLO in Beirut. More and more Palestinians felt abandoned, and many sensed that this was the beginning of the end.
The Second or Al-Aqsa Intifada started concurrently with the breakdown of July 2000 Camp David talks between Palestinian Authority Chairman Yasser Arafat and Israeli Prime Minister Ehud Barak. The Intifada never ended officially, but violence hit relatively low levels during 2005. The death toll, including both military personnel and civilians, of the entire conflict in 2000–2004 is estimated to be 3,223 Palestinians and 950 Israelis, although this number is criticized for not differentiating between combatants and civilians. Members of the PLO have claimed responsibility for a number of attacks against Israelis during the Second Intifada.
In February 2015, in a civil case considered by a US federal court, the Palestinian Authority and the Palestine Liberation Organization were found liable for the deaths and injuries of US citizens in a number of terrorist attacks in Israel from 2001 to 2004. The damages were set at $655.5 million.
According to a 1993 report by the British National Criminal Intelligence Service, the PLO was "the richest of all terrorist organizations", with $8–$10 billion in assets and an annual income of $1.5–$2 billion from "donations, extortion, payoffs, illegal arms dealing, drug trafficking, money laundering, fraud, etc."
Pol Pot
Pol Pot (born Saloth Sâr; 19 May 1925 – 15 April 1998) was a Cambodian revolutionary and politician who governed Cambodia as the Prime Minister of Democratic Kampuchea between 1975 and 1979. Ideologically a Marxist–Leninist and a Khmer nationalist, he was a leading member of Cambodia's communist movement, the Khmer Rouge, from 1963 until 1997 and served as the General Secretary of the Communist Party of Kampuchea from 1963 to 1981. Under his administration, Cambodia was converted into a one-party communist state governed according to Pol Pot's interpretation of Marxism–Leninism.
Born to a prosperous farmer in Prek Sbauv, French Cambodia, Pol Pot was educated at some of Cambodia's elite schools. While in Paris during the 1940s, he joined the French Communist Party. Returning to Cambodia in 1953, he involved himself in the Marxist–Leninist Khmer Việt Minh organisation and its guerrilla war against King Norodom Sihanouk's newly independent government. Following the Khmer Việt Minh's 1954 retreat into Marxist–Leninist controlled North Vietnam, Pol Pot returned to Phnom Penh, working as a teacher while remaining a central member of Cambodia's Marxist–Leninist movement. In 1959, he helped formalise the movement into the Kampuchean Labour Party, which was later renamed the Communist Party of Kampuchea (CPK). To avoid state repression, in 1962 he relocated to a jungle encampment and in 1963 became the CPK's leader. In 1968, he relaunched the war against Sihanouk's government. After Lon Nol deposed Sihanouk in a 1970 coup, Pol Pot's forces sided with the deposed leader against Lon Nol's government, which was bolstered by the United States military. Aided by the Việt Cộng militia and North Vietnamese troops, Pol Pot's Khmer Rouge forces advanced and controlled all of Cambodia by 1975.
Pol Pot transformed Cambodia into a one-party state called Democratic Kampuchea. Seeking to create an agrarian socialist society that he believed would evolve into a communist society, Pol Pot's government forcibly relocated the urban population to the countryside to work on collective farms. In pursuit of complete egalitarianism, it abolished money and made all citizens wear the same black clothing. Those the Khmer Rouge regarded as enemies were killed. These mass killings, coupled with malnutrition and poor medical care, killed between 1.5 and 2 million people, approximately a quarter of Cambodia's population, a period later termed the Cambodian genocide. Repeated purges of the CPK generated growing discontent; by 1978 Cambodian soldiers were mounting a rebellion in the east. After several years of border clashes, the newly unified Vietnam invaded Cambodia in December 1978, toppling Pol Pot and installing a rival Marxist–Leninist government in 1979. The Khmer Rouge retreated to the jungles near the Thai border, from where they continued to fight. In declining health, Pol Pot stepped back from many of his roles in the movement. In 1998 the Khmer Rouge commander Ta Mok placed Pol Pot under house arrest, shortly after which he died.
Taking power in Cambodia at the height of Marxism–Leninism's global impact, Pol Pot proved divisive among the international communist movement. Many claimed he deviated from orthodox Marxism–Leninism, but China backed his government as a bulwark against Soviet influence in Southeast Asia. To his supporters, he was a champion of Cambodian sovereignty in the face of Vietnamese imperialism and stood against the Marxist revisionism of the Soviet Union. Conversely, he has been internationally denounced for his role in the Cambodian genocide, regarded as a totalitarian dictator guilty of crimes against humanity.
Pol Pot was born in the village of Prek Sbauv, outside the city of Kampong Thom. He was named Saloth Sâr, the word "sâr" ("white, pale") referencing his comparatively light skin complexion. French colonial records placed his birth date on 25 May 1928, but biographer Philip Short argues he was born in March 1925.
His family was of mixed Chinese and ethnic Khmer heritage, but did not speak Chinese and lived as though they were fully Khmer. His father Loth, who later took the name Saloth Phem, was a prosperous farmer who owned nine hectares of rice land and several draft cattle. Loth's house was one of the largest in the village and at transplanting and harvest time he hired poorer neighbors to carry out much of the agricultural labour. Sâr's mother, Sok Nem, was locally respected as a pious Buddhist. Sâr was the eighth of nine children (two girls and seven boys), three of whom died young. They were raised as Theravada Buddhists, and on festivals travelled to the Kampong Thom monastery.
Cambodia was a monarchy, but the French colonial regime, not the king, was in political control. Sâr's family had connections to the Cambodian royalty: his cousin Meak was a consort of King Sisowath Monivong and later worked as a ballet teacher. When Sâr was six years old, he and an older brother were sent to live with Meak in Phnom Penh; informal adoptions by wealthier relatives were then common in Cambodia. In Phnom Penh, he spent 18 months as a novice monk in the city's Vat Botum Vaddei monastery, learning Buddhist teachings and to read and write the Khmer language.
In summer 1935, Sâr went to live with his brother Suong and the latter's wife and child. That year he began an education at a Roman Catholic primary school, the École Miche, with Meak paying the tuition fees. Most of his classmates were the children of French bureaucrats and Catholic Vietnamese. He became literate in French and familiar with Christianity. Sâr was not academically gifted and was held back two years, receiving his Certificat d'Etudes Primaires Complémentaires in 1941 at the age of 16. He had continued to visit Meak at the king's palace and it was there that he had some of his earliest sexual experiences with some of the king's concubines.
While Sâr was at the school, the King of Cambodia died. In 1941 the French authorities appointed Norodom Sihanouk as his replacement. A new junior middle school, the Collège Pream Sihanouk, was established in Kampong Cham, and Sâr was selected as a boarder at the institution in 1942. This level of education afforded him a privileged position in Cambodian society. He learned to play the violin and took part in school plays. Much of his spare time was spent playing football and basketball. Several fellow pupils, among them Hu Nim and Khieu Samphan, later served in his government. During the new year vacation in 1945, Sâr and several friends from his college theatre troupe went on a provincial tour in a bus to raise money for a trip to Angkor Wat. In 1947, he left the school.
That year he passed exams that admitted him into the Lycée Sisowath, meanwhile living with Suong and his new wife. In summer 1948, he sat the "brevet" entry exams for the upper classes of the Lycée, but failed. Unlike several of his friends, he could not continue at the school for a baccalauréat. Instead, he enrolled in 1948 to study carpentry at the École Technique in Russey Keo, in Phnom Penh's northern suburbs. This drop from an academic education to a vocational one likely came as a shock. His fellow students were generally of a lower class than those at the Lycée Sisowath, though they were not peasants. At the École Technique he met Ieng Sary, who became a close friend and later a member of his government. In summer 1949, Sâr passed his "brevet" and secured one of five scholarships allowing him to travel to France to study at one of its engineering schools.
During the Second World War, Nazi Germany invaded France and in 1945 the Japanese ousted the French from Cambodia, with Sihanouk proclaiming his country's independence. After the war ended with Germany's and Japan's defeat, France reasserted its control over Cambodia in 1946, but allowed for the creation of a new constitution and the establishment of various political parties. The most successful of these was the Democratic Party, which won the 1946 general election. According to Chandler, Sâr and Sary worked for the party during its successful election campaign; conversely, Short maintains that Sâr had no contact with the party. Sihanouk opposed the party's left-leaning reforms and in 1948 dissolved the National Assembly, instead ruling by decree. Operatives of Ho Chi Minh's better established Vietnamese Marxist–Leninist group, the Việt Minh, also established a nascent Marxist–Leninist movement, but it was beset by ethnic tensions between the Khmer and Vietnamese. News of the group was censored from the press and it is unlikely Sâr was aware of it.
Access to further education abroad made Sâr part of a tiny elite in Cambodia. He and the 21 other selected students sailed from Saigon aboard the SS "Jamaïque", stopping at Singapore, Colombo, and Djibouti en route to Marseille. In January 1950, Sâr enrolled at the École française de radioélectricité to study radio electronics. He took a room in the Cité Universitaire's Indochinese Pavilion, then lodgings on the rue Amyot, and eventually a bedsit on the corner of the rue de Commerce and the rue Letelier. Sâr earned good marks during his first year, but failed his end-of-year exams; allowed to retake them, he narrowly passed, enabling him to continue his studies.
Sâr spent three years in Paris. In summer 1950, he was one of 18 Cambodian students who joined French counterparts in travelling to Yugoslavia, a Marxist–Leninist state, to volunteer in a labour battalion building a motorway in Zagreb. He returned to Yugoslavia the following year for a camping holiday. Sâr made little or no attempt to assimilate into French culture and was never completely at ease in the French language. He nevertheless became familiar with French literature, one of his favourite authors being Jean-Jacques Rousseau. His most significant friendships in the country were with Ieng Sary, who had joined him there, Thiounn Mumm, and Keng Vannsak. He was a member of Vannsak's discussion circle, whose ideologically diverse membership discussed ways to achieve Cambodian independence.
In Paris, Ieng Sary and two others established the Cercle Marxiste ("Marxist Circle"), a Marxist–Leninist organisation arranged in a clandestine cell system. The cells met to read Marxist texts and hold self-criticism sessions. Sâr joined a cell that met on the rue Lacepède; his cell comrades included Hou Yuon, Sien Ary, and Sok Knaol. He helped to duplicate the Cercle's newspaper, "Reaksmei" ("The Spark"), named after a former Russian paper. In October 1951, Yuon was elected head of the Khmer Student Association (AEK; "l'Association des Etudiants Khmers"), establishing close links between the organisation and the leftist Union Nationale des Étudiants de France. The Cercle Marxiste manipulated the AEK and its successor organisations for the next 19 years. Several months after the Cercle Marxiste's formation, Sâr and Sary joined the French Communist Party (PCF). Sâr attended party meetings, including those of its Cambodian group, and read its magazine, "Les Cahiers Internationaux". The Marxist–Leninist movement was then in a strong position globally; the Communist Party of China had recently come to power under Mao Zedong and the French Communist Party was one of the country's largest, attracting the votes of around 25% of the French electorate.
Sâr found many of Karl Marx's denser texts difficult, later saying he "didn't really understand" them. But he became familiar with the writings of Soviet leader Joseph Stalin, including "The History of the Communist Party of the Soviet Union (Bolsheviks)". Stalin's approach to Marxism—known as Stalinism—gave Sâr a sense of purpose in life. Sâr also read Mao's work, especially "On New Democracy", a text outlining a Marxist–Leninist framework for carrying out a revolution in colonial and semi-colonial, semi-feudal societies. Alongside these texts, Sâr read the anarchist Peter Kropotkin's book on the French Revolution, "The Great Revolution". From Kropotkin he took the idea that an alliance between intellectuals and the peasantry was necessary for revolution; that a revolution had to be carried out without compromise to its conclusion to succeed; and that egalitarianism was the basis of a communist society.
In Cambodia, growing internal strife resulted in King Sihanouk dismissing the government and declaring himself prime minister. In response, Sâr wrote an article, "Monarchy or Democracy?", published in the student magazine "Khmer Nisut" under the pseudonym "Khmer daom" ("Original Khmer"). In it, he referred positively to Buddhism, portraying Buddhist monks as an anti-monarchist force on the side of the peasantry. At a meeting, the Cercle decided to send someone to Cambodia to assess the situation and determine which rebel group they should support; Sâr volunteered for the role. His decision to leave may also have been because he had failed his second-year exams two years in a row and thus lost his scholarship. In December, he boarded the "SS Jamaïque", returning to Cambodia without a degree.
Sâr arrived in Saigon on 13 January 1953, the same day on which Sihanouk disbanded the Democratic-controlled National Assembly, began ruling by decree, and imprisoned Democratic members of parliament without trial. Amid the broader First Indochina War in neighboring French Indochina, Cambodia was in a civil war, with civilian massacres and other atrocities carried out by all sides. Sâr spent several months at the headquarters of Prince Norodom Chantaraingsey—the leader of one faction—in Trapeng Kroloeung, before moving to Phnom Penh, where he met with fellow Cercle member Ping Say to discuss the situation. Sâr regarded the Khmer Việt Minh, a mixed Vietnamese and Cambodian guerrilla subgroup of the North Vietnam-based Việt Minh, as the most promising resistance group. He believed the Khmer Việt Minh's relationship to the Việt Minh and thus the international Marxist–Leninist movement made it the best group for the Cercle Marxiste to support. The Cercle members in Paris took his recommendation.
In August 1953, Sâr and Rath Samoeun travelled to Krabao, the headquarters of the Việt Minh Eastern Zone. Over the following nine months, around 12 other Cercle members joined them there. They found that the Khmer Việt Minh was run and numerically dominated by Vietnamese guerrillas, with Khmer recruits largely given menial tasks; Sâr was tasked with growing cassava and working in the canteen. At Krabao, he gained a rudimentary grasp of Vietnamese, and rose to become secretary and aide to Tou Samouth, the Secretary of the Khmer Việt Minh's Eastern Zone.
Sihanouk desired independence from French rule, but after France refused his requests he called for public resistance to its administration in June 1953. Khmer troops deserted the French Army in large numbers and the French government relented, rather than risk a costly, protracted war to retain control. In November, Sihanouk declared Cambodia's independence. The civil conflict then intensified, with France backing Sihanouk's war against the rebels. Following the Geneva Conference held to end the First Indochina War, Sihanouk secured an agreement from the North Vietnamese that they would withdraw Khmer Việt Minh forces from Cambodian territory. The last Khmer Việt Minh units left Cambodia for North Vietnam in October 1954. Sâr was not among them, deciding to remain in Cambodia; he trekked, via South Vietnam, to Prey Veng to reach Phnom Penh. He and other Cambodian Marxist–Leninists decided to pursue their aims through electoral means.
Cambodia's Marxist–Leninists wanted to operate clandestinely but also established a socialist party, Pracheachon, to serve as a front organization through which they could compete in the 1955 election. Although Pracheachon had strong support in some areas, most observers expected the Democratic Party to win. The Marxist–Leninists engaged in entryism to influence Democratic Party policy; Vannsak had become deputy party secretary, with Sâr as his assistant, perhaps helping to alter the party's platform. Sihanouk feared a Democratic Party government and in March 1955 abdicated the throne in favor of his father, Norodom Suramarit. This allowed him to legally establish a political party, the Sangkum Reastr Niyum, with which to contest the election. The September election witnessed widespread voter intimidation and electoral fraud, resulting in Sangkum winning all 91 seats. Sihanouk's establishment of a "de facto" one-party state extinguished hopes that the Cambodian left could take power electorally. North Vietnam's government nevertheless urged Cambodia's Marxist–Leninists not to restart the armed struggle; the former was focused on undermining South Vietnam and had little desire to destabilize Sihanouk's regime given that it had—conveniently for them—remained internationally non-aligned rather than following the Thai and South Vietnamese governments in allying with the anti-communist United States.
Sâr rented a house in the Boeng Keng Kang area of Phnom Penh. Although not qualified to teach at a state school, he gained employment teaching history, geography, French literature, and morals at a private school, the Chamraon Vichea ("Progressive Knowledge"); his pupils, who included the later novelist Soth Polin, described him as a good teacher. He courted society belle Soeung Son Maly before entering a relationship with fellow communist revolutionary Khieu Ponnary, the sister of Sary's wife Thirith. They were married in a Buddhist ceremony in July 1956. He continued to oversee many of the Marxist–Leninists' underground communications; all correspondence between the Democratic Party and the Pracheachon went through him. Sihanouk cracked down on the Marxist–Leninist movement, whose membership had halved since the end of the civil war. Links with the North Vietnamese Marxist–Leninists declined, something Sâr later portrayed as a boon. He and other members increasingly regarded Cambodians as too deferential to their Vietnamese counterparts; to deal with this, Sâr, Tou Samouth, and Nuon Chea drafted a programme and statutes for a new Marxist–Leninist party that would be allied with but not subordinate to the Vietnamese. They established party cells, emphasising the recruitment of small numbers of dedicated members, and organized political seminars in safe houses.
At a 1959 conference, the movement's leadership established the Kampuchean Labour Party, based on the Marxist–Leninist model of democratic centralism. Sâr, Tou Samouth and Nuon Chea were part of a four-man General Affair Committee leading the party. Its existence was to be kept secret from non-members. The Kampuchean Labour Party's conference, held clandestinely from September to October 1960 in Phnom Penh, saw Samouth become party secretary and Nuon Chea his deputy, while Sâr took the third senior position and Ieng Sary the fourth.
Sihanouk spoke out against the Cambodian Marxist–Leninists; although he was an ally of China's Marxist–Leninist government and acknowledged Marxism–Leninism's capacity to bring swift economic development and social justice, he also warned of its totalitarian character and its suppression of personal liberty. In January 1962, Sihanouk's security services cracked down further on Cambodia's socialists, incarcerating Pracheachon's leaders and leaving the party largely moribund. In July, Samouth was arrested, tortured, and killed. Nuon Chea had also stepped back from his political activities, leaving open Sâr's path to become party leader.
As well as facing leftist opposition, Sihanouk's government faced hostility from right-wing opposition centred on Sihanouk's former Minister of State, Sam Sary, who was backed by the United States, Thailand, and South Vietnam. After the South Vietnamese supported a failed coup against Sihanouk, relations between the countries deteriorated and the United States initiated an economic blockade of Cambodia in 1956. After Sihanouk's father died in 1960, Sihanouk introduced a constitutional amendment allowing himself to become head of state for life. In February 1962, anti-government student protests turned into riots, whereupon Sihanouk dismissed the Sangkum government, called new elections, and produced a list of 34 left-leaning Cambodians, demanding that they meet him to establish a new administration. Sâr was on the list, perhaps because of his role as a teacher, but refused to meet with Sihanouk. He and Ieng Sary left Phnom Penh for a Viet Cong encampment near Thboung Khmum in the jungle along Cambodia's border with South Vietnam. According to Chandler, "from this point on he was a full-time revolutionary".
Conditions at the Viet Cong camp were basic and food scarce. As Sihanouk's government cracked down on the movement in Phnom Penh, growing numbers of its members fled to join Sâr at his jungle base. In February 1963, at the party's second conference, held in a central Phnom Penh apartment, Sâr was elected party secretary, but soon fled into the jungle to avoid repression by Sihanouk's government. In early 1964, Sâr established his own encampment, Office 100, on the South Vietnamese side of the border. The Viet Cong allowed his actions to be officially separate from its own, but still wielded significant control over his camp. At a plenum of the party's Central Committee, it was agreed that they should re-emphasize their independence from the Vietnamese Marxist–Leninists and endorse armed struggle against Sihanouk.
The Central Committee met again in January 1965 to denounce the "peaceful transition" to socialism espoused by Soviet Premier Nikita Khrushchev, accusing him of being a revisionist. In contrast to Khrushchev's interpretation of Marxism–Leninism, Sâr and his comrades sought to develop their own, explicitly Cambodian variant of the ideology. Their interpretation moved away from the orthodox Marxist focus on the urban proletariat as the force of a revolution to build socialism, giving that role instead to the rural peasantry, a far larger class in Cambodian society. By 1965, the party regarded Cambodia's small proletariat as full of "enemy agents" and systematically refused them membership. The party's main area of growth was in the rural provinces, and by 1965 membership stood at 2,000. In April 1965, Sâr travelled by foot along the Ho Chi Minh Trail to Hanoi to meet North Vietnamese government figures, among them Ho Chi Minh and Lê Duẩn. The North Vietnamese were preoccupied with the ongoing Vietnam War and thus did not want Sâr's forces to destabilize Sihanouk's government; the latter's anti-American stance rendered him a "de facto" ally. In Hanoi, Sâr read through the archives of the Workers' Party of Vietnam, concluding that the Vietnamese Marxist–Leninists were committed to pursuing an Indochinese Federation and that their interests were therefore incompatible with Cambodia's.
In November 1965, Saloth Sâr flew from Hanoi to Beijing, where his official host was Deng Xiaoping, although most of his meetings were with Peng Zhen. Sâr gained a sympathetic hearing from many in the governing Communist Party of China (CPC)—especially Chen Boda, Zhang Chunqiao and Kang Sheng—who shared his negative view of Khrushchev amid the Sino-Soviet split. CPC officials also trained him on topics like dictatorship of the proletariat, class struggles and political purge. In Beijing, Sâr witnessed China's ongoing Cultural Revolution, influencing his later policies.
Sâr left Beijing in February 1966, and flew back to Hanoi before a four-month journey along the Ho Chi Minh Trail to reach the Cambodian Marxist–Leninists' new base at Loc Ninh. In October 1966, he and other Cambodian party leaders made several key decisions. They renamed their organisation the Communist Party of Kampuchea (CPK), a decision initially kept secret. Sihanouk began referring to its members as the "Khmer Rouge" ('Red Cambodians'), but they did not adopt this term themselves. It was agreed that they would move their headquarters to Ratanakiri Province, away from the Viet Cong, and that—despite the views of the North Vietnamese—they would command each of the party's zone committees to prepare for the relaunch of armed struggle. North Vietnam refused to assist in this, rejecting their requests for weaponry. In November 1967, Sâr travelled from Tay Ninh to the Office 102 base near Kang Lêng. During the journey, he contracted malaria and required a respite in a Viet Cong medical base near Mount Ngork. By December, plans for armed conflict were complete, with the war to begin in the North-West Zone and then spread to other regions. As communication across Cambodia was slow, each Zone would have to operate independently much of the time.
In January 1968, the war was launched with an attack on the Bay Damran army post south of Battambang. Further attacks targeted police and soldiers and seized weaponry. The government responded with scorched-earth policies, aerially bombarding areas where rebels were active. The army's brutality aided the insurgents' cause; as the uprising spread, over 100,000 villagers joined them. In the summer, Sâr relocated his base 30 miles north to the more mountainous Naga's Tail, to avoid encroaching government troops. At this base, called K-5, he increased his dominance over the party and had his own separate encampment, staff, and guards. No outsider was allowed to meet him without an escort. He took over from Sary as the Secretary of the North East Zone. In November 1969, Sâr trekked to Hanoi to persuade the North Vietnamese government to provide direct military assistance. They refused, urging him to revert to a political struggle. In January 1970 he flew to Beijing. There, his wife began showing early signs of the chronic paranoid schizophrenia she would later be diagnosed with.
In March 1970, while Sâr was in Beijing, Cambodian parliamentarians led by Lon Nol deposed Sihanouk when he was out of the country. Sihanouk also flew to Beijing, where the Chinese and North Vietnamese Communist Parties urged him to form an alliance with the Khmer Rouge to overthrow Lon Nol's right-wing government. Sihanouk agreed. On Zhou Enlai's advice, Sâr also agreed, although his dominant role in the CPK was concealed from Sihanouk. Sihanouk then formed his own government-in-exile in Beijing and launched the National United Front of Kampuchea to rally Lon Nol's opponents.
In April 1970, Sâr flew to Hanoi. He stressed to Lê Duẩn that while he wanted the Vietnamese to supply the Khmer Rouge with weapons, he did not want troops: the Cambodians needed to oust Lon Nol themselves. North Vietnamese armies, in collaboration with the Viet Cong, nevertheless invaded Cambodia to attack Lon Nol's forces; in turn, South Vietnam and the United States sent troops to the country to bolster his government. This pulled Cambodia into the Second Indochina War already raging across Vietnam. The U.S. dropped three times as many bombs on Cambodia during the conflict as it had on Japan during World War II. Although targeting Viet Cong and Khmer Rouge encampments, the bombing primarily affected civilians. This fuelled recruitment to the Khmer Rouge, which had an estimated 12,000 regular soldiers at the end of 1970 and four times that number by 1972.
In June 1970, Sâr left Vietnam and reached his K-5 base. In July he headed south; it was at this point that he began referring to himself as "Pol", a name he later lengthened to "Pol Pot". By September, he was based at a camp on the border of Kratie and Kompong Thom, where he convened a meeting of the CPK Standing Committee. Although few senior members could attend, it issued a resolution setting out the principle of "independence-mastery", the idea that Cambodia must be self-reliant and fully independent of other countries. In November, Pol Pot, Ponnary, and their entourage relocated to the K-1 base at Dângkda. His residence was set up on the northern side of the Chinit river; entry was strictly controlled. By the end of the year, Marxist forces had a presence in over half of Cambodia; the Khmer Rouge played a restricted role in this, for throughout 1971 and 1972, the majority of fighting against Lon Nol was carried out by Vietnamese or by Cambodians under Vietnamese control.
In January 1971, a Central Committee meeting was held at this base, bringing together 27 delegates to discuss the war. During 1971, Pol Pot and the other senior party members focused on the construction of a regular Khmer Rouge army and administration that could take a central role when the Vietnamese withdrew. Membership of the party was made more selective, permitting only those regarded as "poor peasants", not those seen as "middle peasants" or students. In July and August, Pol Pot oversaw a month-long training course for CPK cadres in the Northern Zone headquarters. This was followed by the CPK's Third Congress, attended by around 60 delegates, where Pol Pot was confirmed as the Secretary of the Central Committee and Chairman of its Military Commission.
In early 1972, Pol Pot embarked on his first tour of the Marxist-controlled areas across Cambodia. In these areas, called "liberated zones", corruption was stamped out, gambling was banned, and alcohol and extramarital affairs were discouraged. From 1970 to 1971, the Khmer Rouge had generally sought to cultivate good relations with the inhabitants, organising local elections and assemblies. Some people regarded as hostile to the movement were executed, although this was uncommon. Private motor transport was requisitioned. Cooperative stores selling goods like medicines, cloth, and kerosene were formed, providing goods imported from Vietnam. Wealthier peasants had their land redistributed so that by the end of 1972, all families living in the Marxist-controlled areas possessed an equal amount of land. The poorest strata of Cambodian society benefited from these reforms.
From 1972, the Khmer Rouge began trying to refashion all of Cambodia in the image of the poor peasantry, whose rural, isolated, and self-sufficient lives were regarded as worthy of emulation. As of May 1972, the group began ordering all of those living under its control to dress like poor peasants, with black clothes, red-and-white "krama" scarves, and sandals made from car tyres. These restrictions were initially imposed on the Cham ethnic group before being rolled out across other communities. Pol Pot also dressed in this fashion.
CPK members were expected to attend regular (sometimes daily) "lifestyle meetings" in which they engaged in criticism and self-criticism. These cultivated an atmosphere of perpetual vigilance and suspicion within the movement. Pol Pot and Nuon Chea led such sessions at their headquarters, although they were exempt from criticism themselves. By early 1972, relations between the Khmer Rouge and its Vietnamese Marxist allies were becoming strained and some violent clashes had broken out. That year, the North Vietnamese and Viet Cong main-force divisions began pulling out of Cambodia, primarily because they were needed for the offensive against Saigon. As it became more dominant, the CPK imposed increasing numbers of controls over Vietnamese troops active in Cambodia. In 1972, Pol Pot suggested that Sihanouk leave Beijing and tour the areas of Cambodia under CPK control. When Sihanouk did so, he met with senior CPK figures, including Pol Pot, although the latter's identity was concealed from the king.
In May 1973, Pol Pot ordered the collectivisation of villages in the territory the Khmer Rouge controlled. This move was both ideological, in that it was seen as helping to build a socialist society free from private property, and pragmatic, in that it allowed the Khmer Rouge greater control over the food supply, ensuring that farmers did not sell their produce to government forces. Many villagers resented the collectivisation and slaughtered their livestock to prevent it from becoming collective property. Over the following six months, about 60,000 Cambodians fled from areas under Khmer Rouge control. The Khmer Rouge introduced conscription to bolster its forces. Relations between the Khmer Rouge and the North Vietnamese remained strained. After the latter temporarily reduced the flow of arms to the Khmer Rouge, in July 1973 the CPK Central Committee agreed that the North Vietnamese should be considered "a friend with a conflict". Pol Pot ordered the internment of many of the Khmer Rouge who had spent time in North Vietnam and were considered too sympathetic to them. Most of these were later executed.
In summer 1973, the Khmer Rouge launched its first major assault on Phnom Penh, but was forced back amid heavy losses. Later that year, it began bombarding the city with artillery. In the autumn, Pol Pot travelled to a base at Chrok Sdêch on the eastern foothills of the Cardamom Mountains. By winter, he was back at the Chinit River base, where he conferred with Sary and Chea. He concluded that the Khmer Rouge should start talking openly about its commitment to making Cambodia a socialist society and launch a secret campaign to oppose Sihanouk's influence. In September 1974, a Central Committee meeting was held at Meakk in Prek Kok commune. There the Khmer Rouge agreed that it would expel the populations of Cambodia's cities to rural villages. They saw this as necessary for dismantling capitalism and its associated urban vices.
By 1974, Lon Nol's government had lost a great deal of support, both domestically and internationally. In 1975, the troops defending Phnom Penh began discussing surrender, eventually doing so and allowing the Khmer Rouge to enter the city on 17 April. There, Khmer Rouge soldiers executed between 700 and 800 senior government, military, and police figures. Other senior figures escaped; Lon Nol fled into exile in the US. He left Saukham Khoy as acting president, although he too fled aboard a departing US Navy ship just twelve days later. Within the city, Khmer Rouge militia under the control of different Zone commanders clashed with one another, partly as a result of turf wars and partly due to the difficulty of establishing who was a group member and who was not.
The Khmer Rouge had long viewed Phnom Penh's population with mistrust, particularly as the city's numbers had been swelled by peasant refugees who had fled the Khmer Rouge's advance and were seen as traitors. Shortly after taking the city, the Khmer Rouge announced that its inhabitants had to evacuate to escape a forthcoming US bombing raid; the group falsely claimed that the population would be allowed to return after three days. This evacuation entailed moving over 2.5 million people out of the city with very little preparation; between 15,000 and 20,000 of these were removed from the city's hospitals and forced to march. Checkpoints were erected along the roads out of the city where Khmer Rouge cadres searched marchers and removed many of their belongings. The march took place in the hottest month of the year and an estimated 20,000 people died along the route. For the Khmer Rouge, emptying Phnom Penh was seen as demolishing not just capitalism in Cambodia, but also Sihanouk's power base and the spy network of the U.S. Central Intelligence Agency (CIA). This helped to secure Khmer Rouge dominance over the country and was a step toward ensuring the urban population's move toward agricultural production.
On 20 April 1975, three days after Phnom Penh fell, Pol Pot secretly arrived in the abandoned city. Along with other Khmer Rouge leaders, he based himself in the railway station, which was easy to defend. In early May, they moved their headquarters to the former Finance Ministry building. The party leadership soon held a meeting at the Silver Pagoda, where they agreed that raising agricultural production should be their government's top priority. Pol Pot declared that "agriculture is key both to nation-building and to national defence"; he believed that unless Cambodia could develop swiftly, it would be vulnerable to Vietnamese domination, as it had been in the past. Their goal was to reach 70 to 80% farm mechanisation in five to ten years, and a modern industrial base in fifteen to twenty years. As part of this project, Pol Pot saw it as imperative that they develop means of ensuring that the farming population worked harder than before.
The Khmer Rouge wanted to establish Cambodia as a self-sufficient state. They did not reject foreign assistance altogether, although they regarded it as pernicious. While China supplied them with substantial food aid, this was not publicly acknowledged. Shortly after the taking of Phnom Penh, Ieng Sary travelled to Beijing, negotiating the provision of 13,300 tons of Chinese weaponry to Cambodia. At the National Congress meeting in April, the Khmer Rouge declared that it would not permit any foreign military bases on Cambodian soil, a threat to Vietnam, which still had 20,000 troops in Cambodia. To quell tensions arising from recent territorial clashes with Vietnamese soldiers over the disputed Wai Island, Pol Pot, Nuon Chea, and Ieng Sary travelled secretly to Hanoi in May, where they proposed a Friendship Treaty between the two countries. In the short term, this successfully eased tensions. After Hanoi, Pol Pot proceeded to Beijing, again in secret. There he met with Mao and then Deng. Although communication with Mao was hindered by the reliance on translators, Mao urged the younger Cambodian not to uncritically imitate the path to socialism pursued by China or any other country and to avoid repeating the more extreme acts that the Khmer Rouge had been conducting. In China, Pol Pot also received medical treatment for his malaria and gastric ailments. Pol Pot then travelled to North Korea, meeting with Kim Il Sung. In mid-July he returned to Cambodia, and spent August touring the South-Western and Eastern Zones.
In May, Pol Pot adopted the Silver Pagoda as his main residence. He then relocated to the city's tallest structure, the 1960s-built Bank Buildings, which became known as "K1". Several other senior government figures—Nuon Chea, Sary, and Vorn Vet—also lived there. Pol Pot's wife, whose schizophrenia had worsened, was sent to live in a house in Boeung Keng Kâng. Later in 1975, Pol Pot also took Ponnary's old family home in the rue Docteur Hahn as a residence, and subsequently also took a villa in the south of the city for his own use. To give his government a greater appearance of legitimacy, Pol Pot organised a parliamentary election, although there was only one candidate in every constituency except in Phnom Penh. The parliament then met for only three hours.
Although Pol Pot and the Khmer Rouge remained the "de facto" government, initially the formal government was the GRUNK coalition, although its nominal head, Penn Nouth, remained in Beijing. Throughout 1975, the Communist Party's governance of Cambodia was kept secret. At a special National Congress meeting from 25–27 April, the Khmer Rouge agreed to make Sihanouk the nominal head of state, a status he retained throughout 1975. Sihanouk had been dividing his time between Beijing and Pyongyang but in September was allowed to return to Cambodia. Pol Pot was aware that if left abroad, Sihanouk could become a rallying point for opposition and was thus better brought into the government itself; he also hoped to take advantage of Sihanouk's stature in the Non-Aligned Movement. Once home, Sihanouk settled into his palace and was well treated. He was allowed to travel abroad, in October addressing the UN General Assembly to promote the new Cambodian government and in November embarking on an international tour.
The Khmer Rouge's military forces remained divided into differing zones and at a July military parade Pol Pot announced the formal integration of all troops into a national Revolutionary Army, to be headed by Son Sen. Although a new Cambodian currency had been printed in China during the civil war, the Khmer Rouge decided not to introduce it. At the Central Committee Plenum held in Phnom Penh in September, they agreed that currency would lead to corruption and undermine their efforts to establish a socialist society. Thus, there were no wages in Democratic Kampuchea. The population were expected to do whatever the Khmer Rouge commanded of them, without pay. If they refused, they faced punishment, sometimes execution. For this reason, Short characterised Pol Pot's Cambodia as a "slave state", with its people effectively forced into slavery by working without pay. At the September Plenum, Pol Pot announced that all farmers were expected to meet a quota of three tons of paddy, or unmilled rice, per hectare, an increase over the previous average yield. There he also announced that manufacturing should focus on the production of basic agricultural machinery and light industrial goods such as bicycles.
From 1975 on, all those living in rural co-operatives, meaning the vast majority of Cambodia's population, were reclassified as members of one of three groups: the full-rights members, the candidates, and the depositees. The full-rights members, most of whom were poor or lower-middle peasants, were entitled to full rations, and able to hold political posts in the co-operatives and join both the army and the Communist Party. Candidates could still hold low-level administrative positions. The application of this tripartite system was uneven and it was introduced to different areas at different times. On the ground, the basic societal division remained between the "base" people and the "new" people. It was never the intention of Pol Pot and the party to exterminate all "new" people, although the latter were usually treated harshly, which led some commentators to believe extermination was the government's desire. Pol Pot instead wanted to double or triple the country's population, hoping it could reach between 15 and 20 million within a decade.
Within the village co-operatives, Khmer Rouge militia regularly killed those they deemed to be "bad elements". A common statement used by the Khmer Rouge to those they executed was that "to keep you is no profit, to destroy you is no loss." Those killed were often buried by the fields, to act as fertiliser. During the first year of Khmer Rouge governance, most areas of the country were able to stave off starvation despite significant population increases caused by the evacuation of the cities. There were exceptions, such as parts of the North-West Zone and western areas of Kompong Chhnang, where starvation did occur in 1975.
The new Standing Committee decreed that the population would work ten-day weeks with one day off from labor, a system modelled on that used after the French Revolution. Measures were taken to indoctrinate those living in the co-operatives, with set phrases about hard work and loving Cambodia being widely employed, for instance broadcast via loudspeakers or on the radio. Neologisms were introduced and everyday vocabulary was altered to encourage a more collectivist mentality; Cambodians were encouraged to talk about themselves in the plural "we" rather than the singular "I". While working in the fields, people were typically segregated by sex. Sport was prohibited. The only reading material that the population were permitted was that produced by the government, most notably the newspaper "Padevat" ("Revolution"). Restrictions were placed on movement, with people permitted to travel only with the permission of the local Khmer Rouge authorities.
In January 1976, a cabinet meeting was held to promulgate a new constitution declaring that the country was to be renamed "Democratic Kampuchea". The constitution asserted state ownership of the means of production, declared the equality of men and women, and established the right and obligation of all citizens to work. It outlined that the country would be governed by a three-person presidium, and at the time Pol Pot and the Khmer Rouge leaders expected that Sihanouk would take one of these roles. Sihanouk was nevertheless increasingly uncomfortable with the new government and in March he resigned his role as head of state. Pol Pot tried repeatedly, but unsuccessfully, to get him to change his mind. Sihanouk asked to be allowed to travel to China, citing the need for medical treatment, although this was denied. He was instead kept within his palace, where he was sufficiently stocked with goods to live a luxurious lifestyle throughout the Khmer Rouge years.
The removal of Sihanouk ended the pretence that the Khmer Rouge government was a united front. With Sihanouk no longer part of the government, Pol Pot's government stated that the "national revolution" was over and that the "socialist revolution" could begin, allowing the country to move towards pure communism as swiftly as possible. Pol Pot described the new state as "a precious model for humanity" with a revolutionary spirit that outstripped that of earlier revolutionary socialist movements. In the 1970s, Marxism–Leninism was at its strongest point in history, and Pol Pot presented the Cambodian example as the one which other revolutionary movements should follow.
As part of the new Presidium, Pol Pot became the country's Prime Minister. It was at this point that he took on the public pseudonym of "Pol Pot"; as no-one in the country knew who this was, a fictitious biography was presented. Pol Pot's key allies took the other two positions, with Nuon Chea as President of the Standing Committee of the National Assembly and Khieu Samphân as the head of state. In principle, the Khmer Rouge Standing Committee made decisions on the basis of the Leninist principle of democratic centralism. In reality it was more autocratic, with Pol Pot's decisions being implemented. The parliament which had been elected the previous year never met after 1976. In September 1976, Pol Pot publicly revealed that Angkar was a Marxist–Leninist organisation. In September 1977, at a rally in the Olympic Stadium, Pol Pot then revealed that Angkar was a pseudonym for the CPK. In September 1976, it was announced that Pol Pot had stepped down as Prime Minister, to be replaced by Nuon Chea, but in reality he remained in power, returning to his former position in October. This was possibly a diversionary tactic to distract the Vietnamese government while Pol Pot purged the CPK of individuals he suspected of harbouring Vietnamese sympathies.
The Cambodian population were officially known as "Kampuchean" rather than "Khmer" to avoid the ethnic specificity associated with the latter term. The Khmer language, now labelled "Kampuchean" by the government, was the only legally recognised language, and the Sino-Khmer minority were prohibited from speaking in the Chinese languages they commonly used. Pressure was exerted on the Cham to culturally assimilate into the larger Khmer population.
Pol Pot initiated a series of major irrigation projects across the country. In the Eastern Zone, for instance, a huge dam was built. Many of these irrigation projects failed due to a lack of technical expertise on the part of the workers.
Pol Pot initiated the "Maha Lout Ploh" in Cambodia, modelled on China's "Great Leap Forward".
The Standing Committee agreed to link several villages into a single co-operative of 500 to 1000 families, with the goal of later forming commune-sized units twice that size. Communal kitchens were also introduced so that all members of a commune ate together rather than in their individual homes. Foraging or hunting for additional food was prohibited, regarded as individualistic behaviour.
From the summer of 1976, the government ordered that children over the age of seven would live not with their parents but communally with Khmer Rouge instructors.
The co-operatives produced less food than the government believed, in part due to a lack of motivation among laborers and the diversion of the strongest workers to irrigation projects. Many party cadres also claimed to have met the government's food production quotas when they had not, fearing that they would be criticised for failure. The government became aware of this, and by the end of 1976 Pol Pot acknowledged food shortages across three-quarters of the country.
Members of the Khmer Rouge received special privileges not enjoyed by the rest of the population. Party members had better food, with cadres sometimes having access to clandestine brothels. Members of the Central Committee could go to China for medical treatment, and the highest echelons of the party had access to imported luxury products.
The Khmer Rouge also classified people based on their religious and ethnic backgrounds. Under the leadership of Pol Pot, the Khmer Rouge had a policy of state atheism. Buddhist monks were viewed as social parasites and designated a "special class". Within a year of the Khmer Rouge's victory in the civil war, the country's monks were set to manual labor in the rural co-operatives and irrigation projects.
Despite its ideological iconoclasm, many historical monuments were left undamaged by the Khmer Rouge; for Pol Pot's government, like its predecessors, the historic state of Angkor was a key point of reference.
Several isolated revolts broke out against Pol Pot's government. The Khmer Rouge regional chief of Koh Kong, in the Western Zone, and his followers began launching small-scale attacks on government targets along the Thai border. There were also several village rebellions among the Cham. In February 1976, explosions in Siem Reap destroyed a munitions depot. Pol Pot suspected senior military figures were behind the bombing and, although unable to prove who was responsible, had several army officers arrested.
In September 1976, various party members were arrested and accused of conspiring with Vietnam to overthrow Pol Pot's government. Over the coming months the numbers arrested grew. The government invented claims of assassination attempts against its leading members to justify this internal crackdown within the CPK itself. These party members were accused of being spies for the CIA, the Soviet KGB, or the Vietnamese. They were encouraged to confess to the accusations, often after torture or the threat of torture, with these confessions then being read out at party meetings. The purges were not confined to the area around Phnom Penh; trusted party cadres were sent into the country's zones to initiate further purges among the party membership there.
The Khmer Rouge converted a disused secondary school in Phnom Penh's Tuol Sleng neighbourhood into a security prison, S-21. It was placed under the responsibility of the defence minister, Son Sen. The numbers sent to S-21 grew steadily as the CPK purge proceeded. In the first half of 1976, about 400 people were sent there; in the second half of the year that number was nearer to 1000. By the spring of 1977, 1000 people were being sent there each month. Between 15,000 and 20,000 people would be killed at S-21 during the Khmer Rouge period. About a dozen of them were Westerners. Pol Pot never personally visited S-21.
From late 1976 onward, and especially in the middle of 1977, the levels of violence increased across Democratic Kampuchea, particularly at the village level. Across the country, peasant cadres tortured and killed members of their communities whom they disliked. Many cadres ate the livers of their victims and tore unborn foetuses from their mothers for use as kun krak talismans. The CPK Central Command was aware of such practices but did nothing to stop them. By 1977, the growing violence, coupled with poor food, was generating disillusionment even within the Khmer Rouge's core support base. Growing numbers of Cambodians attempted to flee into Thailand and Vietnam. In the autumn of 1977, Pol Pot declared the purges at an end. According to the CPK's own figures, by August 1977 between 4000 and 5000 party members had been liquidated as "enemy agents" or "bad elements".
In 1978, the government initiated a second purge, during which tens of thousands of Cambodians were accused of being Vietnamese sympathisers and killed. At this point the remaining CPK members who had spent time in Hanoi were killed, along with their children. In January 1978, Pol Pot announced to his colleagues that their slogan should be "Purify the Party! Purify the army! Purify the cadres!"
Short described Cambodia as having become a "slave state", in which "Pol enslaved the people literally, by incarcerating them within a social and political structure..."
Outwardly, relations between Cambodia and Vietnam were warm following the establishment of Democratic Kampuchea; after Vietnam was unified in July 1976, the Cambodian government issued a message of congratulations. Privately, relations between the two were declining. In a speech on the first anniversary of their victory in the civil war, Khieu referred to the Vietnamese as imperialists. In May 1976, a negotiation to draw up a formal border between the two countries failed.
On taking power, the Khmer Rouge spurned both the Western states and the Soviet Union as sources of support. Instead, China became Cambodia's main international partner. With Vietnam increasingly siding with the Soviet Union over China, the Chinese saw Pol Pot's government as a bulwark against Vietnamese influence in Indochina. Mao pledged $1 billion in military and economic aid to Cambodia, including an immediate $20 million grant. Many thousands of Chinese military advisors and technicians were also sent to the country to assist in projects like the construction of the Kampong Chhnang military airport. The relationship between the Chinese and Cambodian governments was nevertheless marred by mutual suspicion and China had little influence on Pol Pot's domestic policies. It had greater influence on Cambodia's foreign policy, successfully pushing the country to pursue rapprochement with Thailand and open communication with the U.S. to combat Vietnamese influence in the region.
After Mao died in September 1976, Pol Pot praised him and Cambodia declared an official period of mourning. In November 1976, Pol Pot travelled secretly to Beijing, seeking to retain his country's alliance with China after the Gang of Four were arrested. From Beijing, he was then taken on a tour of China, visiting sites associated with Mao and the Chinese Communist Party. China was the only country allowed to retain its old Phnom Penh embassy. All other diplomats were made to live in assigned quarters on the Boulevard Monivong. This street was barricaded off and the diplomats were not permitted to leave without escorts. Their food was brought to them, supplied through the only shop that remained open in the country. Pol Pot saw the Khmer Rouge as an example that should be copied by other revolutionary movements across the world and courted Marxist leaders from Burma, Indonesia, Malaysia, and Thailand, allowing Thai Marxists to establish bases along the Cambodian border with Thailand. In November 1977, Burma's Ne Win was the first foreign head of government to visit Democratic Kampuchea, followed soon after by Romania's Nicolae Ceaușescu.
Ben Kiernan estimates that 1.671 million to 1.871 million Cambodians died as a result of Khmer Rouge policy, or between 21% and 24% of Cambodia's 1975 population. A study by French demographer Marek Sliwinski calculated slightly fewer than 2 million unnatural deaths under the Khmer Rouge out of a 1975 Cambodian population of 7.8 million; 33.5% of Cambodian men died under the Khmer Rouge compared to 15.7% of Cambodian women. According to a 2001 academic source, the most widely accepted estimates of excess deaths under the Khmer Rouge range from 1.5 million to 2 million, although figures as low as 1 million and as high as 3 million have been cited; conventionally accepted estimates of deaths due to Khmer Rouge executions range from 500,000 to 1 million, "a third to one half of excess mortality during the period". However, a 2013 academic source (citing research from 2009) indicates that execution may have accounted for as much as 60% of the total, with 23,745 mass graves containing approximately 1.3 million suspected victims of execution.
While considerably higher than earlier and more widely accepted estimates of Khmer Rouge executions, the Documentation Center of Cambodia (DC-Cam)'s Craig Etcheson defended such estimates of over one million executions as "plausible, given the nature of the mass grave and DC-Cam's methods, which are more likely to produce an under-count of bodies rather than an over-estimate." Demographer Patrick Heuveline estimated that between 1.17 million and 3.42 million Cambodians died unnatural deaths between 1970 and 1979, with between 150,000 and 300,000 of those deaths occurring during the civil war. Heuveline's central estimate is 2.52 million excess deaths, of which 1.4 million were the direct result of violence. Despite being based on a house-to-house survey of Cambodians, the estimate of 3.3 million deaths promulgated by the Khmer Rouge's successor regime, the People's Republic of Kampuchea (PRK), is generally considered to be an exaggeration; among other methodological errors, the PRK authorities added the estimated number of victims that had been found in the partially-exhumed mass graves to the raw survey results, meaning that some victims would have been double-counted.
An estimated 300,000 Cambodians starved to death between 1979 and 1980, largely as a result of the after-effects of Khmer Rouge policies.
In December 1976, the Cambodian Central Committee's annual plenum proposed that the country ready itself for the prospect of war with Vietnam. Pol Pot believed that Vietnam was committed to expansionism and thus was a threat to Cambodian independence. There were renewed border clashes between Cambodia and Vietnam in early 1977, continuing into April. On 30 April, Cambodian units, backed by artillery fire, entered Vietnam and attacked a series of villages, killing several hundred Vietnamese civilians. Vietnam responded by ordering its Air Force to bomb Cambodian border positions. Several months later, the fighting resumed; in September, two divisions of the Cambodian Eastern Zone entered the Tay Ninh area of Vietnam, where they attacked several villages and slaughtered their inhabitants. That month, Pol Pot travelled to Beijing, and from there to North Korea, where Kim Il Sung spoke out against Vietnam in solidarity with the Khmer Rouge.
In December, Vietnam sent 50,000 troops over the border along a 100-mile stretch, penetrating 12 miles into Cambodia. Cambodia then formally broke off diplomatic relations with Vietnam. Cambodian forces fought back against the invaders, who had withdrawn to Vietnam by 6 January 1978. At this point, Pol Pot ordered Cambodia's military to take an aggressive, proactive stance, attacking Vietnamese troops before the latter had the chance to act. The Vietnamese Politburo then concluded that Pol Pot could not be left in power, and that he must be removed before the Cambodian military strengthened further. In 1978, it established military training camps for Cambodian refugees in southern Vietnam. The Cambodian government also readied itself for war. Plans for a personality cult revolving around Pol Pot were drawn up, based on the Chinese and North Korean models, in the belief that such a cult would unify the population in wartime. The cult was ultimately never implemented.
The failure of Cambodian troops in the Eastern Zone to successfully resist the Vietnamese incursion made Pol Pot suspicious of their allegiances. He ordered a purge of the Eastern Zone, with over 400 CPK cadres from the area being sent to S-21. Aware that they would be killed on Pol Pot's orders, increasing numbers of Eastern Zone troops began rebelling against the Khmer Rouge government. Pol Pot sent more troops into the Eastern Zone to defeat the rebels, ordering them to slaughter the inhabitants of any villages that were believed to be harbouring any rebel forces. This suppression in the east was, according to Short, "the bloodiest single episode under Pol Pot's rule". Fleeing the government troops, many leading rebels—including Zone deputy chiefs Heng Samrin and Pol Saroeun—made it into Vietnam, where they joined the anti-Pol Pot exile community. By August 1978, Pol Pot could only consider Mok's forces in the south-west and Pauk's in the Central Zone as being reliable.
Early in 1978, Pol Pot's government began trying to improve relations with various foreign countries, such as Thailand, to bolster its position against Vietnam. Many other governments in Southeast Asia sympathised with Cambodia's situation, fearing the impact of Vietnamese expansionism and Soviet influence on their own countries. Although supportive of the Cambodians, the Chinese government decided not to send its army into Cambodia, fearing that an all-out conflict with Vietnam could provoke a war with the Soviet Union. Meanwhile, Vietnam was planning its full-scale invasion of Cambodia. In December 1978, it formally launched the Khmer National United Front for National Salvation (KNUFNS), a group made up of Cambodian exiles which it hoped to install in place of the Khmer Rouge. Initially, KNUFNS was headed by Heng Samrin. Fearing this Vietnamese threat, Pol Pot wrote an anti-Vietnamese tract titled the "Black Paper".
In September 1978, Pol Pot began increasingly courting Sihanouk in the hope that the latter could prove a rallying point in support of the Khmer Rouge government. That same month, Pol Pot flew to China to meet with Deng. Deng condemned Vietnamese aggression but suggested that the Khmer Rouge had precipitated the conflict by being too radical in its policies and by allowing Cambodian troops to behave anarchically along the border with Vietnam. On returning to Cambodia, in October Pol Pot ordered the country's army to switch tactics, adopting a defensive strategy involving the heavy use of land mines to stop Vietnamese incursions. He also cautioned the army to avoid direct confrontations which would incur heavy losses and instead adopt guerrilla tactics. In November 1978, the CPK held its Fifth Congress. Here, Mok was appointed the third ranked figure in the government, behind Pol Pot and Nuon Chea. Soon after the Congress, two senior government members—Vorn Vet and Kong Sophal—were arrested and sent to S-21. This precipitated another round of purges.
On 25 December 1978, the Vietnamese Army launched its full-scale invasion. Its columns initially advanced into north-east Cambodia, taking Kratie on 30 December and Stung Treng on 3 January. The Vietnamese main force then entered Cambodia on 1 January 1979, heading along Highways 1 and 7 toward Phnom Penh. Cambodia's forward defences failed to stop them. With an attack on Phnom Penh imminent, in January Pol Pot ordered Sihanouk and his family to be sent to Thailand. The entire diplomatic corps followed shortly after. On 7 January Pol Pot and other senior government figures left the city and drove to Pursat. They spent two days there before moving on to Battambang.
After the Khmer Rouge evacuated Phnom Penh, Mok was the only senior government figure left in the city, tasked with overseeing its defence. Nuon Chea ordered the cadres in control of S-21 to kill all remaining inmates before it was captured by the Vietnamese. However, the troops guarding the city were unaware how close the Vietnamese Army actually was; the government had concealed the extent of the Vietnamese gains from the population. As the Vietnamese approached, many officers and other soldiers guarding the city fled; the defence was highly disorganised. There were isolated examples of Cambodian villagers killing Khmer Rouge officials in revenge. In January, Vietnam installed a new government under Samrin, composed of Khmer Rouge who had fled to Vietnam to avoid the purges. The new government renamed Cambodia the "People's Republic of Kampuchea". Although many Cambodians had initially hailed the Vietnamese as saviours, over time resentment against the occupying force grew.
The Khmer Rouge turned to China for support against the invasion. Sary travelled to China via Thailand. There, Deng urged the Khmer Rouge to continue a guerrilla war against the Vietnamese and to establish a broad, non-communist front against the invaders, with a prominent role given to Sihanouk.
China sent its vice premier, Geng Biao, to Thailand to negotiate the shipment of arms to the Khmer Rouge through Thailand. China also sent diplomats to stay with the Khmer Rouge encampments near the Thai border. Pol Pot met with these diplomats twice before the Chinese government withdrew them for their safety in March. In China, the Khmer Rouge set up their "Voice of Democratic Kampuchea" radio station, which remained their main outlet for communicating with the world. In February, the Chinese attacked northern Vietnam, hoping to draw Vietnamese troops away from the invasion of Cambodia. As well as China, the Khmer Rouge also received the support of the United States and most other non-Marxist southeast Asian countries who feared Vietnamese aggression as a tool of Soviet influence in the region.
On 15 January, the Vietnamese reached Sisophon. Pol Pot, Nuon Chea, and Khieu Samphan then moved to Pailin on the Thai side of the border, and in late January relocated again, to Tasanh, where Sary joined them. There, on 1 February, they held a Central Committee conference, deciding against Deng's advice about a united front. In the second half of March, the Vietnamese moved to hem in the Khmer Rouge along the Thai border, at which point many of Pol Pot's troops crossed into Thailand itself. The Vietnamese advanced on Tasanh, with the Khmer Rouge leaders fleeing only a few hours before it was captured.
In July 1979, Pol Pot established a new headquarters, Office 131, on the western flank of Mount Thom. He dropped the name "Pol Pot" and began calling himself "Phem". In September 1979, Khieu Samphân announced that the Khmer Rouge was establishing a new united front, the Patriotic Democratic Front, bringing together all Cambodians who opposed the Vietnamese occupation. Senior Khmer Rouge members began disavowing the cause of socialism. The group members stopped wearing uniform black outfits; Pol Pot himself started wearing jungle green fatigues and later Thai-made safari suits. Short believed that these changes reflected a genuine ideological shift in the Khmer Rouge. In October, Pol Pot ordered an end to executions, a command which was largely followed. In November 1979, the United Nations General Assembly voted to recognise the Khmer Rouge delegation, rather than that of the Vietnamese-backed government, as the legitimate government of Cambodia. In December, Samphân replaced Pol Pot as prime minister of Democratic Kampuchea, a move that allowed Pol Pot to focus on the war effort and which was perhaps also designed to improve the Khmer Rouge's image.
During the monsoons of summer 1979, the Khmer Rouge troops began filtering back into Cambodia. Many young Cambodians joined the Khmer Rouge forces, wanting to drive the Vietnamese Army out. Boosted by the new Chinese supplies, the Khmer Rouge rebuilt its military structure in early 1980. By mid-1980, the Khmer Rouge claimed it had 40,000 troops active in Cambodia. From 1981, Pol Pot's main goal was to attract popular support among the Cambodian population, believing that this would be vital in enabling him to win the war. In August 1981, he travelled, via Bangkok, to Beijing, where he met with Deng and Zhao Ziyang. Deng had been pushing for Sihanouk, living in Pyongyang, to become Cambodian head of state, something the monarch had reluctantly agreed to in February 1981. In September, Sihanouk, Samphân, and Son Sann issued a joint statement in Singapore announcing the formation of their own coalition government.
In December 1981, Pol Pot and Nuon Chea decided to dissolve the Communist Party of Kampuchea, a decision taken with very little discussion among the party's membership, some of whom were shocked. Many outside commentators believed the dissolution was a ruse, and that the CPK was actually going underground once more, although Short noted that this was not the case. Pol Pot proposed a new Movement of Nationalists that would replace the party, although this failed to fully materialise. The CPK Standing Committee was replaced by a Military Directorate, the focus of which was on driving out the Vietnamese. Pol Pot's decision to disband the party was informed by global events; his anti-Vietnamese army was backed by many capitalist countries while the Vietnamese were backed by most Marxist-governed countries. At the same time, he believed that his main Marxist backers, the Chinese, were themselves restoring capitalism with Deng's reforms. Reflecting the ideological shift, among the Khmer Rouge, collective eating was ended, the ban on individual possessions was lifted, and children were again allowed to live with their parents. Pol Pot commented that his previous administration had been too left-wing and claimed that it had made mistakes because he had placed too much trust in treacherous individuals around him.
In June 1982, at an event in Kuala Lumpur, the Khmer Rouge were among the factions declaring the formation of a Coalition Government of Democratic Kampuchea (CGDK) as an alternative to the administration in Phnom Penh. On the ground in Cambodia there nevertheless remained little military collaboration between these factions, which included the Khmer Rouge as well as the Sihanoukist National Army and Son Sann's National Front for the Liberation of the Khmer People. In 1983, Pol Pot travelled to Bangkok for a medical check-up; there he was diagnosed with Hodgkin's disease. In mid-1984, Office 131 was moved to a new base further into Cambodia, near the O'Suosaday river. In December, the Vietnamese Army launched a major offensive, overrunning the Khmer Rouge's Cambodian bases and pushing Pol Pot back into Thailand. There, he established a new base, K-18, several miles outside Trat.
In September 1985, Pol Pot resigned as commander-in-chief of the Khmer Rouge forces in favour of Son Sen; he nevertheless continued to wield significant influence. In the summer he married a young woman named Mea; the following spring their daughter, Sitha, was born. He then travelled to Beijing to undergo cancer treatment at a military hospital, only returning to Cambodia in the summer of 1988. In 1988, the anti-Vietnamese factions entered into negotiations with the Phnom Penh government. Pol Pot deemed this too soon, for he feared that the Khmer Rouge had not gained sufficient popular support to produce significant gains in any post-war election.
The fall of the Berlin Wall and the subsequent end of the Cold War had repercussions for Cambodia. With the Soviet Union no longer a threat, the U.S. and its allies no longer saw Vietnamese domination of Cambodia as an issue. The U.S. announced that it no longer recognised the CGDK as the legitimate government of Cambodia at the UN General Assembly. In June, the various Cambodian factions agreed a ceasefire, to be overseen by the United Nations, with the formation of a new Supreme National Council to facilitate the implementation of democratic elections. Pol Pot agreed to these terms, fearing that if he refused the other factions would all unite against the Khmer Rouge. In November, Sihanouk returned to Cambodia. There, he praised the Vietnamese-backed leader, Hun Sen, and stated that the Khmer Rouge's leaders should be put on trial for their crimes. When Samphân arrived in Phnom Penh with the Khmer Rouge's delegation, he was beaten by a mob.
Pol Pot established a new headquarters along the border, near Pailin. He called on the Khmer Rouge to redouble their efforts in gaining support across Cambodia's villages. In June, Samphân announced that, in contravention of earlier agreements, the Khmer Rouge's troops would not disarm while Vietnamese soldiers remained in Cambodia. The Khmer Rouge became increasingly confrontational, expanding its territory across western Cambodia. It carried out massacres of the Vietnamese settlers who had recently arrived in the area. Hun Sen's forces also carried out military activities, with UN peacekeepers proving ineffective in preventing the violence. In January 1993, Sihanouk returned to Beijing, declaring that Cambodia was unprepared for elections. The Khmer Rouge had formed a new party, the Cambodian National Union Party, through which it could take part in the election, but in March Pol Pot announced that they would boycott the vote. At this point he moved his headquarters to Phnom Chhat; Samphân joined him there, having withdrawn his Khmer Rouge delegation from Phnom Penh.
In the May 1993 elections, Norodom Ranariddh's FUNCINPEC won 58 of the 120 available seats in the National Assembly; Hun Sen's Cambodian People's Party came second. Hun Sen, who was backed by the Vietnamese, refused to acknowledge defeat. Sihanouk negotiated the formation of a coalition government between the two parties, introducing a system whereby Cambodia would have two prime ministers, Ranariddh and Hun Sen. The new Cambodian National Army then launched an offensive against the Khmer Rouge. By August, it had captured Phnom Chhat, with Pol Pot fleeing back into Thailand. The Khmer Rouge launched a counter-offensive and by May 1994 had regained much of the territory they had recently lost. Pol Pot moved to Anlong Veng, but as that was overrun in 1994 he relocated to Kbal Ansoang, on the crest of the Dangrek Mountains. The Khmer Rouge nevertheless faced growing levels of desertion over the first half of the 1990s.
Pol Pot placed renewed emphasis on those living in Khmer Rouge territory imitating the lives of the poorest peasants and in 1994 ordered the confiscation of private transport and an end to cross-border trade with Thailand. In September he ordered the execution of a Briton, a Frenchman, and an Australian who had been captured in a Khmer Rouge attack on a train. In July 1996, a mutiny broke out among the Khmer Rouge and in August it was announced that Ieng Sary, Y Chhean, and Sok Pheap were breaking away from the movement, taking with them troops loyal to them. This meant that around 4,000 soldiers left, almost halving the troop forces that the Khmer Rouge then commanded. By the end of 1996, the Khmer Rouge had lost almost all the territory they held in the interior of Cambodia, being restricted to a few hundred miles along the northern border. Pol Pot commented to his aides: "We are like a fish in a trap. We cannot last like this for very long". Pol Pot's health was declining. He suffered from aortic stenosis and no longer had access to follow-up treatment for his earlier cancer. A stroke left him paralysed on the left side of his body, and he eventually required daily access to oxygen. He spent increasing amounts of time with his family, in particular his daughter.
Pol Pot had grown suspicious of Son Sen and in June 1997 ordered his death. Khmer Rouge cadres subsequently killed Son Sen and 13 of his family members and aides; Pol Pot later stated that he had not sanctioned all of these killings. Ta Mok was concerned that Pol Pot could turn on him too. Mok rallied troops loyal to him at Anlong Veng, informing them that Pol Pot had betrayed their movement, and then headed to Kbal Ansoang. Fearing Mok's troops, on 12 June Pol Pot, his family, and several bodyguards fled on foot. Pol Pot was very frail and had to be carried. After Mok's troops apprehended them, Pol Pot was placed under house arrest. Khieu Samphân and Nuon Chea sided with Mok.
In late July, Pol Pot and the three Khmer Rouge commanders who remained loyal to him were brought before a mass meeting near Sang'nam. The U.S. journalist Nate Thayer was invited to film the event. There, the Khmer Rouge sentenced Pol Pot to life imprisonment; the three other commanders were sentenced to death. Three months later, Ta Mok permitted Thayer to visit and interview Pol Pot.
On 15 April 1998, Pol Pot died in his sleep, apparently of heart failure. His body was preserved with ice and formaldehyde so that his death could be verified by journalists attending his funeral. Three days later, his wife cremated his body on a pyre of tyres and rubbish, utilising traditional Buddhist funerary rites.
There were suspicions that he had committed suicide by taking an overdose of the medication which he had been prescribed. Thayer, who was present, held the view that Pol Pot killed himself when he became aware of Ta Mok's plan to hand him over to the United States, saying that "Pol Pot died after ingesting a lethal dose of a combination of Valium and chloroquine".
In May, Pol Pot's widow and Tep Khunnal fled to Malaysia, where they married. The Khmer Rouge themselves continued to face territorial losses to the Cambodian Army and in March 1999 Ta Mok was also captured.
Pol Pot considered himself a communist, and described his CPK as adhering to a "Marxist–Leninist viewpoint", albeit one that had been adapted to Cambodian conditions. He took up ideas of orthodox Marxism–Leninism but, contrary to Marx and Lenin's concepts, he believed in the ideal of an entirely self-sufficient and agrarian socialist society that would be entirely free from all foreign influences. Joseph Stalin's work has been described as a "crucial formative influence" on Pol Pot. Even more influential was the work of Mao Zedong, particularly his New Democracy. Following Mao's thoughts and political example, in the mid-1960s Pol Pot reformulated his ideas about Marxism–Leninism to better suit the Cambodian situation. Due to these alterations, various other Marxist–Leninists said that he was not truly adhering to Marxist–Leninist ideas. In 1979, Deng for instance criticised the Khmer Rouge for engaging in "deviations from Marxism-Leninism".
In re-interpreting the revolutionary role of classes and questioning the Marxist focus on the proletariat, Pol Pot embraced the idea of a revolutionary alliance between the peasantry and the intellectuals, an idea that Short linked to his reading of Peter Kropotkin while he was in Paris. Contrary to the principles of historical dialectics, he believed that peasants could develop a proletarian consciousness as an effect of the communist party's education of the masses, which resembles orthodox Marxist–Leninist thought. In addition to that, Philip Short maintained that "the grammar of Theravada Buddhism permeated" Pol Pot's thought as much as Confucianism had influenced the development of Maoism in China. According to key Khmer Rouge figure Khieu Samphan, a key concept was "zero for him, zero for you - that is communism", in that in a society where all things were the possession of the state and no individual owned anything, everyone would be equal.
Short also thought that the Khmer Rouge's ideology stood apart from other forms of Marxism due to its "monastic stress on discipline", with "the systematic destruction of the individual" being a "hallmark" of its ideology. Pol Pot and the Khmer Rouge believed that in order to crush the individualistic attitude that they thought was endemic in Cambodian society, coercion was needed to ensure the creation of a collectivised state. Short noted that an underlying doctrinal view among the Khmer Rouge was that "it is always better to go too far than not far enough", an approach that was "at the root of many of the abuses" which occurred under their regime. Within the Communist Party itself, hunger, lack of sleep, and long hours of labour were employed at training camps to ramp up the physical and mental pressure and thus facilitate indoctrination. Short commented that "no other communist party" in history ever went "so far in its attempts directly to remould the minds of its members".
Pol Pot disbanded his Communist Party during the 1980s so as to emphasise a unified national struggle against Vietnamese occupation. That decade, Pol Pot commented that "We chose communism because we wanted to restore our nation. We helped the Vietnamese, who were communist. But now the communists are fighting us. So we have to turn to the West and follow their way." This action led Short to suggest that "the veneer of Marxism-Leninism which had cloaked Cambodian radicalism had only ever been skin-deep."
Pol Pot's government was totalitarian.
Pol Pot desired autarky, or complete self-sufficiency, for Cambodia. Short suggested that Pol Pot had been "an authentic spokesman" for the yearning that many Khmer felt for "the return of their former greatness", the era of the Khmer Empire.
The party leadership has been described as xenophobic.
Short observed that decision-making in Pol Pot's Cambodia was "unruly", making it dissimilar from the centralised, organised processes found in other Marxist–Leninist states.
Within Democratic Kampuchea, there was much regional and local variation in how party cadres implemented Pol Pot's orders.
Pol Pot repeatedly stated or implied that Cambodians were an intrinsically superior group to other ethnic or national groups and that they were immune to foreign influences. Short also noted that the Khmer Rouge generally regarded foreigners as enemies; during the Cambodian civil war, they killed numerous foreign journalists whom they captured, whereas the Vietnamese Marxists typically let them go.
Pol Pot was an extreme nativist, racist and xenophobe who sought to remove all ethnic and religious minorities from Kampuchea. In addition, native religions were banned as part of the Khmer Rouge's attempt to eliminate religion in the country.
Pol Pot had a thirst for power. He was introspective, highly reclusive, and fearful of the threat of assassination. Short stated that he "delighted in appearing to be what he was not – a nameless face in the crowd". During his political career, he used a wide array of pseudonyms: Pouk, Hay, Pol, 87, Grand-Uncle, Elder Brother, First Brother and in later years he used the pseudonyms 99 and Phem. He told a secretary that "the more often you change your name the better. It confuses the enemy". In later life he concealed and falsified many details of his life. He never explained why he chose the pseudonym "Pol Pot".
Pol Pot displayed what Chandler called a "genteel charisma", with many observers commenting on his distinctive smile. As a child, his brother characterized him as having been sweet tempered and equable, while fellow school pupils recalled that Pol Pot was mediocre but pleasant. As a teacher, he was characterized by his pupils as having been calm, honest and persuasive, having an "evident good nature and an attractive personality". According to Short, Pol Pot's varied and eclectic upbringing meant that he was "able to communicate naturally with people of all sorts and conditions, establishing an instinctive rapport that invariably made them want to like him". Pol Pot had a "magnetic personality", Short noted. When speaking to audiences he usually carried a fan, which in Cambodian culture was traditionally associated with monkhood.
Pol Pot was softly spoken. During speeches he was serene and calm, even in the midst of using violent rhetoric. Chandler noted that when meeting with people, Pol Pot displayed an "apparent warmth" and was known for his "slowly uttered words". Kong Duong, who worked with Pol Pot in the 1980s, said that he was "very likeable, a really nice person. He was friendly, and everything he said seemed very sensible. He would never blame you or scold you to your face."
Pol Pot suffered from insomnia and was frequently ill. He suffered from malaria and intestinal ailments, which left him ill several times a year whilst he was in power. During his childhood, Pol Pot developed a love of music and romantic French poetry, with the work of Paul Verlaine being among his favorites. He was a fan of traditional Khmer music.
Chandler suggested that the seven years that Pol Pot primarily spent in jungle encampments among his fellow Marxists had a significant effect on his world-view, and they "probably reinforced his sense of destiny and self-importance". Pol Pot had a nationalistic attitude and displayed little interest in events outside Cambodia.
Short related that "Pol did believe he was acting for the common good and that sooner or later everyone would recognise that."
Short suggested that Pol Pot, along with other senior members of the Khmer Rouge, engaged in the "glorification of violence" and saw bloodshed as a "cause for exultation". This, Short suggested, marked the Khmer Rouge's leadership out as being different from those who led the Chinese and Vietnamese Marxist movements, who tended to see violence as a necessary evil rather than something to embrace joyfully.
Pol Pot wanted his followers to develop a "revolutionary consciousness" that would allow them to act without his guidance and was often disappointed when they failed to display this. Partly because he did not fully trust subordinates he micro-managed events, scrutinising things such as menus for state receptions or the programming schedules for radio broadcasts.
In its obituary notice for Pol Pot, "The New York Times" referred to him as the creator of "one of the 20th century's most brutal and radical regimes".
Both BBC News and "Time" magazine blamed his government for "one of the worst mass killings of the 20th century". In 2009, Deutsche Welle described Pol Pot's government as having initiated one of the "world's most infamous political experiments", while Short referred to the Khmer Rouge as "the most radical revolutionary movement of modern times". Writing for the U.S. socialist magazine "Jacobin" in 2019, the Dutch socialist Alex de Jong characterised Pol Pot's government as a "genocidal regime" and noted that the name of the Khmer Rouge had become "synonymous with murder and repression". Many Cambodians who lived through his administration later referred to it as "samai a-Pot" ("the era of the contemptible Pot").
The idea that the deaths which occurred under Pol Pot's government should be considered genocide was first put forward by the Vietnamese government in 1979 after the revelations of the killings committed at Tuol Sleng prison. Short argued that while Pol Pot's administration was clearly responsible for crimes against humanity, it was misleading to accuse it of genocide because it never sought to eradicate an entire population.
Various Marxist–Leninist groups endorsed Pol Pot's government while it was in power. The small Canadian Communist League (Marxist–Leninist), for instance, praised his government and sent a delegation to meet with him in Phnom Penh in December 1978. Another sympathiser who visited Pol Pot that year was the Scottish communist Malcolm Caldwell, an economic historian based at London's School of Oriental and African Studies. He met with Pol Pot, but was murdered shortly afterward; the culprit was never identified. Also in 1978, the Khmer Rouge met with delegates of the Swedish Cambodian Friendship Association, whose members openly sympathised with Pol Pot's regime. One of its members, Gunnar Bergstrom, later noted that in the 1970s he had been a Marxist–Leninist who had become dissatisfied with the Soviet Union and believed that the Cambodian government was building a society based on freedom and equality. In his view, the Khmer Rouge regime was "an example to the Third World". Bergstrom noted that he and his fellow members had heard about atrocities that were taking place but "did not want to believe them". Bergstrom later renounced communism; in 2008 he returned to Cambodia for a tour during which he publicly apologised for supporting the regime.
Prairie dog
Prairie dogs (genus Cynomys) are herbivorous burrowing rodents native to the grasslands of North America. The five species are: black-tailed, white-tailed, Gunnison's, Utah, and Mexican prairie dogs. They are a type of ground squirrel. In Mexico, prairie dogs are found primarily in the northern states, which lie at the southern end of the Great Plains: northeastern Sonora, north and northeastern Chihuahua, northern Coahuila, northern Nuevo León, and northern Tamaulipas. In the United States, they range primarily to the west of the Mississippi River, though they have also been introduced in a few eastern locales. They are also found in the Canadian Prairies. Despite the name, they are not actually canines.
Prairie dogs are named for their habitat and warning call, which sounds similar to a dog's bark. The name was in use at least as early as 1774. The 1804 journals of the Lewis and Clark Expedition note that in September 1804, they "discovered a Village of an animal the French Call the Prairie Dog". Its genus, "Cynomys", derives from the Greek for "dog mouse" (κυων "kuōn", κυνος "kunos" – dog; μυς "mus", μυός "muos" – mouse).
The black-tailed prairie dog ("Cynomys ludovicianus") was first described by Lewis and Clark in 1804. Lewis described it in more detail in 1806, calling it the "barking squirrel".
On average, these stout-bodied rodents will grow to be between long, including the short tail, and weigh between . Sexual dimorphism in body mass in the prairie dog varies from 105 to 136% between the sexes. Among the species, black-tailed prairie dogs tend to be the least sexually dimorphic, and white-tailed prairie dogs tend to be the most sexually dimorphic. Sexual dimorphism peaks during weaning, when the females lose weight and the males start eating more, and is at its lowest when the females are pregnant, which is also when the males are tired from breeding.
Prairie dogs are chiefly herbivorous, though they eat some insects. They feed primarily on grasses and small seeds. In the fall, they eat broadleaf forbs. In the winter, lactating and pregnant females supplement their diets with snow for extra water. They also will eat roots, seeds, fruit, and buds. Grasses of various species are eaten. Black-tailed prairie dogs in South Dakota eat western bluegrass, blue grama, buffalo grass, six weeks fescue, and tumblegrass, while Gunnison’s prairie dogs eat rabbit brush, tumbleweeds, dandelions, saltbush, and cacti in addition to buffalo grass and blue grama. White-tailed prairie dogs have been observed to kill ground squirrels, a competing herbivore.
Prairie dogs live mainly at altitudes ranging from 2,000 to 10,000 ft above sea level. The areas where they live can get as warm as in the summer and as cold as in the winter. As prairie dogs live in areas prone to environmental threats, including hailstorms, blizzards, and floods, as well as drought and prairie fires, burrows provide important protection. Burrows help prairie dogs control their body temperature (thermoregulation), as they are 5–10 °C during the winter and 15–25 °C in the summer. Prairie dog tunnel systems channel rainwater into the water table, which prevents runoff and erosion, and can also change the composition of the soil in a region by reversing soil compaction that can result from cattle grazing.
Prairie dog burrows are long and below the ground. The entrance holes are generally in diameter. Prairie dog burrows can have up to six entrances. Sometimes the entrances are simply flat holes in the ground, while at other times they are surrounded by mounds of soil either left as piles or hard packed. Some mounds, known as dome craters, can be as high as . Other mounds, known as rim craters, can be as high as 1 m. Dome craters and rim craters serve as observation posts used by the animals to watch for predators. They also protect the burrows from flooding. The holes also possibly provide ventilation, as air enters through the dome crater and leaves through the rim crater, causing a breeze through the burrow. Prairie dog burrows contain chambers that serve particular functions. They have nursery chambers for their young, chambers for night, and chambers for the winter. They also contain air chambers that may function to protect the burrow from flooding and a listening post for predators. When hiding from predators, prairie dogs use shallower chambers that are usually a meter below the surface. Nursery chambers tend to be deeper, being two to three meters below the surface.
Highly social, prairie dogs live in large colonies or "towns", collections of prairie dog families that can span hundreds of acres. The prairie dog family groups are the most basic units of its society. Members of a family group inhabit the same territory. Family groups of black-tailed and Mexican prairie dogs are called "coteries", while "clans" are used to describe family groups of white-tailed, Gunnison’s, and Utah prairie dogs. Although these two family groups are similar, coteries tend to be more closely knit than clans. Members of a family group interact through oral contact or "kissing" and grooming one another. They do not perform these behaviors with prairie dogs from other family groups.
A prairie dog town may contain 15–26 family groups. There may also be subgroups within a town, called "wards", which are separated by a physical barrier. Family groups exist within these wards. Most prairie dog family groups are made up of one adult breeding male, two to three adult females, one to two male offspring, and one to two female offspring. Females remain in their natal groups for life and are thus the source of stability in the groups. Males leave their natal groups when they mature to find another family group to defend and breed in. Some family groups contain more breeding females than one male can control, and so have more than one breeding adult male in them. Among these multiple-male groups, some may contain males that have friendly relationships, but the majority contain males that have largely antagonistic relationships. In the former, the males tend to be related, while in the latter, they tend not to be related. Two to three groups of females may be controlled by one male. However, among these female groups, there are no friendly relations.
The average prairie dog territory takes up 0.05–1.01 hectares. Territories have well-established borders that coincide with physical barriers such as rocks and trees. The resident male of a territory defends it and antagonistic behavior will occur between two males of different families to defend their territories. These interactions may happen 20 times per day and last five minutes. When two prairie dogs encounter each other at the edges of their territories, they will start staring, make bluff charges, flare their tails, chatter their teeth, and sniff each other's perianal scent glands. When fighting, prairie dogs will bite, kick and ram each other. If their competitor is around their size or smaller, the females will participate in fighting. Otherwise, if a competitor is sighted, the females signal for the resident male.
Prairie dog copulation occurs in the burrows, and this reduces the risk of interruption by a competing male. They are also at less risk of predation. Behaviors that signal that a female is in estrus include underground consorting, self-licking of genitals, dust-bathing, and late entrances into the burrow at night. The licking of genitals may protect against sexually transmitted diseases and genital infections, while dust-bathing may protect against fleas and other parasites. Prairie dogs also have a mating call which consists of a set of 2 to 25 barks with a 3- to 15-second pause between each one. Females may try to increase their reproductive success by mating with males outside their family groups. When copulation is over, the male is no longer interested in the female sexually, but will prevent other males from mating with her by inserting copulatory plugs.
For black-tailed prairie dogs, the resident male of the family group fathers all the offspring. Multiple paternity in litters seems to be more common in Utah and Gunnison’s prairie dogs. Mother prairie dogs do most of the care for the young. In addition to nursing the young, the mother also defends the nursery chamber and collects grass for the nest. Males play their part by defending the territories and maintaining the burrows. The young spend their first six weeks below the ground being nursed. They are then weaned and begin to surface from the burrow. By five months, they are fully grown. The subject of cooperative breeding in prairie dogs has been debated among biologists. Some argue prairie dogs will defend and feed young that are not theirs, and it seems young will sleep in a nursery chamber with other mothers; since most nursing occurs at night, this may be a case of communal nursing. In the case of the latter, others suggest communal nursing occurs only when mothers mistake another female's young for their own.
Infanticide is known to occur in prairie dogs. Males which take over a family group will kill the offspring of the previous male. This causes the mother to go into estrus sooner. However, most infanticide is done by close relatives. Lactating females will kill the offspring of a related female, both to decrease competition faced by their own offspring and to gain foraging area through the victimized mother's reduced territorial defense. Supporters of the theory that prairie dogs are communal breeders state that another reason for this type of infanticide is so that the female can get a possible helper. With her own offspring gone, the victimized mother may help raise the young of other females.
The prairie dog is well adapted to predators. Using its dichromatic color vision, it can detect predators from a great distance; it then alerts other prairie dogs of the danger with a special, high-pitched call. Constantine Slobodchikoff and others assert that prairie dogs use a sophisticated system of vocal communication to describe specific predators. According to them, prairie dog calls contain specific information as to what the predator is, how big it is and how fast it is approaching. These have been described as a form of grammar. According to Slobodchikoff, these calls, with their individuality in response to a specific predator, imply that prairie dogs have highly developed cognitive abilities. He also writes that prairie dogs have calls for things that are not predators to them. This is cited as evidence that the animals have a very descriptive language and have calls for any potential threat.
Alarm response behavior varies according to the type of predator announced. If the alarm indicates a hawk diving toward the colony, all the prairie dogs in its flight path dive into their holes, while those outside the flight path stand and watch. If the alarm is for a human, all members of the colony immediately rush inside the burrows. For coyotes, the prairie dogs move to the entrance of a burrow and stand outside the entrance, observing the coyote, while those prairie dogs that were inside the burrows will come out to stand and watch as well. For domestic dogs, the response is to observe, standing in place where they were when the alarm was sounded, again with the underground prairie dogs emerging to watch.

There is debate over whether the alarm calling of prairie dogs is selfish or altruistic. It is possible that prairie dogs alert others to the presence of a predator so they can protect themselves. However, it is also possible that the calls are meant to cause confusion and panic in the groups and cause the others to be more conspicuous to the predator than the caller. Studies of black-tailed prairie dogs suggest that alarm-calling is a form of kin selection, as a prairie dog’s call alerts both offspring and nondescended kin, such as cousins, nephews and nieces. Prairie dogs with kin close by called more often than those that did not have kin nearby. In addition, the caller may be trying to make itself more noticeable to the predator. Predators, though, seem to have difficulty determining which prairie dog is making the call due to its "ventriloquistic" nature.
Perhaps the most striking of prairie dog communications is the territorial call or "jump-yip" display of the black-tailed prairie dog. A black-tailed prairie dog will stretch the length of its body vertically and throw its forefeet into the air while making a call. A jump-yip from one prairie dog causes others nearby to do the same.
Ecologists consider this rodent to be a keystone species. Prairie dogs are an important prey species, forming a primary part of the diet of prairie predators such as the black-footed ferret, swift fox, golden eagle, red-tailed hawk, American badger, coyote and ferruginous hawk. Other species, such as the golden-mantled ground squirrel, mountain plover, and the burrowing owl, also rely on prairie dog burrows for nesting areas. Even grazing species, such as plains bison, pronghorn, and mule deer, have shown a proclivity for grazing on the same land used by prairie dogs.
Nevertheless, prairie dogs are often identified as pests and exterminated from agricultural properties because they are capable of damaging crops, as they clear the immediate area around their burrows of most vegetation.
As a result, prairie dog habitat has been affected by direct removal by farmers, as well as the more obvious encroachment of urban development, which has greatly reduced their populations. The removal of prairie dogs "causes undesirable spread of brush", the costs of which to livestock range may outweigh the benefits of removal. Black-tailed prairie dogs comprise the largest remaining community. In spite of human encroachment, prairie dogs have adapted, continuing to dig burrows in open areas of western cities.
One common concern which led to the widespread extermination of prairie dog colonies was that their digging activities could injure horses by fracturing their limbs. However, according to writer Fred Durso, Jr., of "E Magazine", "after years of asking ranchers this question, we have found not one example."
Another concern is their susceptibility to bubonic plague.
Until 2003, primarily black-tailed prairie dogs were collected from the wild for the exotic pet trade in Canada, the United States, Japan, and Europe. They were removed from their burrows each spring, as young pups, with a large vacuum device. They can be difficult to breed in captivity, but breed well in zoos. Removing them from the wild was a far more common method of supplying the market demand.
They can be difficult pets to care for, requiring regular attention and a very specific diet of grasses and hay. Each year, they go into a period called rut that can last for several months, during which their personalities can drastically change, often becoming defensive or even aggressive. Despite these demands, prairie dogs are very social animals and appear to treat humans as members of their colony, answering barks and chirps, and even coming when called by name.
In mid-2003, due to cross-contamination at a Madison, Wisconsin-area pet swap from an unquarantined Gambian pouched rat imported from Ghana, several prairie dogs in captivity acquired monkeypox, and subsequently a few humans were also infected. This led the CDC and FDA to issue a joint order banning the sale, trade, and transport within the United States of prairie dogs (with a few exceptions). The disease was never introduced to any wild populations. The European Union also banned importation of prairie dogs in response.
All "Cynomys" species are classed as a "prohibited new organism" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing it from being imported into the country.
Prairie dogs are also very susceptible to bubonic plague, and many wild colonies have been wiped out by it. Also, in 2002, a large group of prairie dogs in captivity in Texas were found to have contracted tularemia. The prairie dog ban is frequently cited by the CDC as a successful response to the threat of zoonosis.
Prairie dogs that were in captivity at the time of the ban in 2003 were allowed to be kept under a grandfather clause, but were not to be bought, traded, or sold, and transport was permitted only to and from a veterinarian under quarantine procedures.
On 8 September 2008, the FDA and CDC rescinded the ban, making it once again legal to capture, sell, and transport prairie dogs. Although the federal ban has been lifted, several states still have in place their own ban on prairie dogs.
The European Union has not lifted its ban on imports from the U.S. of animals captured in the wild. Major European Prairie Dog Associations, such as the Italian "Associazione Italiana Cani della Prateria" (AICDP), remain against import from the United States, due to the high death rate of wild captures. Several zoos in Europe have stable prairie dog colonies that generate enough surplus pups to saturate the EU internal demand, and several associations help owners to give adoption to captive-born animals.
Prairie dogs in captivity may live up to ten years.
In companies that use large numbers of cubicles in a common space, employees sometimes use the term "prairie dogging" to refer to the action of several people simultaneously looking over the walls of their cubicles in response to a noise or other distraction. This action is thought to resemble the startled response of a group of prairie dogs.
The Amarillo Sod Poodles, a minor league baseball team, take their name from "sod poodle", a nickname for the prairie dog. | https://en.wikipedia.org/wiki?curid=24327
Pope Stephen III
Pope Stephen III (; died 1 February 772) was the bishop of Rome and ruler of the Papal States from 7 August 768 to his death. Stephen was a Benedictine monk who worked in the Lateran Palace during the reign of Pope Zachary. In the midst of a tumultuous contest by rival factions to name a successor to Pope Paul I, Stephen was elected with the support of the Roman officials. He summoned the Lateran Council of 769, which sought to limit the influence of the nobles in papal elections. The Council also opposed iconoclasm.
A Greek born in Sicily, Stephen III was the son of a man named Olivus. Coming to Rome during the pontificate of Pope Gregory III, he was placed in the monastery of St. Chrysogonus, where he became a Benedictine monk. During the pontificate of Pope Zachary, he was ordained a priest, after which the pope decided to keep him to work at the Lateran Palace. Stephen gradually rose to high office in the service of successive popes, and was at the bedside of the dying Pope Paul I as powerful factions began manoeuvring to ensure the election of their own candidate in late June 767.
The year 768 was consumed by the rival claims of the antipopes Constantine II (a lay puppet forcibly installed by a faction of Tuscan nobles) and Philip (the candidate of the Lombards), who were forced out of office by the efforts of Christophorus, the primicerius of the notaries, and his son Sergius, the treasurer of the Roman Church. With the capture of Constantine II, Christophorus set about organising a canonical election, and on 1 August he summoned not only the Roman clergy and army, but also the people to assemble before the Church of St. Adrian in the area of the old Comitium. Here, on 7 August, the combined assembly elected Stephen as pope. They then proceeded to the Church of Santa Cecilia in Trastevere, where they acclaimed Stephen as pope-elect, and escorted him to the Lateran Palace.
At this point, supporters of the pope-elect Stephen began brutally to attack key members of Constantine’s regime, including Constantine himself, who was hounded through the streets of Rome, with heavy weights attached to his feet. Bishop Theodore, Constantine’s vice-dominus, was blinded and had his tongue cut out, while Constantine’s brother, Passivus, was also blinded. Constantine was officially dethroned on 6 August, and Stephen was consecrated pope on the following day. Retributions continued even after the consecration of Stephen; the town of Alatri revolted in support of Constantine, and after its capture, the key members of the revolt were blinded and had their tongues ripped out. Then on the orders of the papal chartularius, Gratiosus, Constantine was removed from his monastic cell, blinded, and left on the streets of Rome with specific instructions that no-one should aid him. Finally, on a charge of conspiring to kill Christophorus and many other nobles, with the intent of handing over the city to the Lombards, the priest Waldipert, who was the prime mover in the elevation of Philip, was arrested, blinded, and soon died of his wounds.
The role of Stephen III in these events is somewhat obscure. According to the historian Horace Mann, Stephen was an impotent observer, and the responsible agent was in reality the chartularius, Gratiosus. However, according to Louis Marie DeCormenin, Stephen was the key person responsible for issuing the orders, and took great delight in destroying his rival and his rival's supporters. A middle position was taken by the historian Ferdinand Gregorovius, who observed that Stephen, while he may not have instigated or ordered the atrocities, did not seek to prevent them, whether through self-interest or the weakness of his position. What is clear, however, is that the recent creation of the Papal States had seen the traditional rivalries of the ruling families of Rome transformed into a murderous desire to control this new temporal power in Italy, dragging the papacy with it.
With Constantine’s supporters largely dealt with, Stephen wrote to the Frankish king, Pepin the Short, notifying him of his election, and asking for a number of bishops to participate in a council he was seeking to hold to discuss the recent confusion. As Pepin had died, it was Charlemagne and Carloman I who agreed to send twelve bishops to participate in the Lateran Council of 769. The council saw the final condemnation of Constantine II, who was beaten and had his tongue removed before being returned to his monastic cell. All clerical appointments made by Constantine were declared null and void. It also set about establishing strict rules for papal elections, thereby restricting the involvement of the nobility in subsequent elections. Finally, the rulings of the Council of Hieria were rejected, and the practice of devotion to icons was confirmed (see iconoclasm).
In 770, Stephen was asked to confirm the election of Michael, a layperson, as archbishop of Ravenna. In fact, Michael, in league with the Lombard king Desiderius and the duke of Rimini, had imprisoned Leo I, who had been elected first. Stephen refused to confirm Michael’s election; citing the conventions of the Lateran council, he sent letters and envoys to Michael, demanding that he stand down. Michael refused, and the stand-off continued for over a year, until the arrival of the Frankish ambassador in Ravenna along with the papal legates encouraged Michael’s opponents to overthrow him, and send him to Rome in chains. Leo followed soon after, when Stephen consecrated him as archbishop.
Throughout his pontificate, Stephen was apprehensive about the expansionist plans of the Lombards. Placing his hope in the Franks, he attempted to mediate in the quarrels between Charlemagne and Carloman I, Pepin's sons and successors, which were only helping the Lombards' cause in Italy. In 769, he helped them reconcile, and pressured them to support the still infant Papal States, by reminding them of the support that their father had given the papacy in the past. He also begged them to intercede on his behalf by entering into discussions with the Lombards.
Consequently, an embassy was sent to the Lombard king, Desiderius, in 770, which included Charlemagne’s mother, Bertrada of Laon. Their intervention achieved a result favourable to the papacy by restoring to the pope the parts of Benevento that the popes claimed. To Stephen’s consternation, however, Desiderius and Bertrada entered into discussions about a possible marriage between Desiderius’ daughter, Desiderata, and one of Bertrada’s sons. It is also possible that discussions took place around the marriage of Charlemagne’s sister, Gisela to Desiderius’ son, Adalgis.
Stephen therefore wrote to both Charlemagne and Carloman, protesting about the proposed alliance. Apart from noting that both men were already married, he reminded them of their promises to previous popes, that they would consider the pope’s enemies as their enemies, and that they had promised to Saint Peter to resist the Lombards and restore the rights of the Church. He wrote:
"You who are already, by the will of God and the commands of your father, lawfully married to noble wives of your own nation, whom you are bound to cherish. And certainly it is not lawful for you to put away the wives you have and marry others, or ally yourselves in marriage with a foreign people, a thing never done by any of your ancestors... It is wicked of you even to entertain the thought of marrying again when you are already married. You ought not to act thus, who profess to follow the law of God, and punish others to prevent men acting in this unlawful manner. Such things do the heathen. But they ought not to be done by you who are Christians, a holy people and a kingly priesthood."
Stephen's pleas fell on deaf ears, and Charlemagne married Desiderata in 770, temporarily cementing a familial alliance with the Lombards.
Throughout 769 and 770, Stephen continued to rely on the support and advice of Christophorus and Sergius who had placed him on the papal throne. Their antipathy towards the Lombards and general pro-Frankish stance caused King Desiderius to engineer their downfall. He bribed the Papal Chamberlain, Paulus Afiarta, and other members of the papal court to spread rumors about them to the pope. When Desiderius attempted to enter Rome in 771 with an army, claiming to be on a pilgrimage to pray at the shrine of St. Peter, Christophorus and Sergius shut the gates of the city against them. Arriving at the gates and seeing armed troops manning the walls, the Lombard king asked to speak to the Pope, who came out to him. During Stephen’s absence, Afiarta and his supporters sought to stir up a mob to overthrow Christophorus and Sergius. But the Primicerius and his son gained the upper hand, and forced Afiarta and his colleagues to flee to the Lateran Palace.
By this stage, Stephen had returned to the Lateran, and he was confronted in the Basilica of St. Theodore by the fleeing Afiarta and his co-conspirators being chased by Christophorus and his supporters. Apparently at this point, a suspicious Christophorus, believing that Stephen had entered into some agreement with Desiderius, forced Stephen into taking an oath that he would not turn Christophorus or his son over to the Lombards. After this, a furious Stephen berated Christophorus, demanded he stop harassing Afiarta, and ordered him and his followers to withdraw, to which Christophorus complied. The next day, Stephen fled to St. Peter’s Basilica to seek the protection of Desiderius. The Lombard king, shutting Stephen up in his suites in the Basilica, made it clear to the Pope that the price for his help was to be the handing over of Christophorus and Sergius. The Pope sent two bishops to negotiate with Christophorus and Sergius, telling them that they must either retire to a monastery or come out to him at St. Peter’s. At the same time, a message was sent from Desiderius to the people of the city, declaring: "Pope Stephen bids you not to fight against your brethren, but to expel Christophorus from the city, and save it, yourselves, and your children."
This message from the Lombard king had the desired effect; Christophorus and Sergius began to suspect their associates, who in turn rapidly abandoned them. Both were reluctant to leave the city, but eventually both made their way to the Pope during the night. The next day Stephen was allowed to return to the city, while Christophorus and Sergius were left in Lombard hands. Negotiations to secure their release were unsuccessful, and before the day was out, Afiarta arrived with his partisans. After discussing the situation with Desiderius, they had both men blinded. Christophorus died after three days, while Sergius was kept in a cell in the Lateran.
In an attempt to forestall the potential intervention of Charlemagne, Desiderius had Stephen write a letter to the Frankish king in which he declared that Christophorus and Sergius had been involved in a plot with an envoy of Charlemagne’s brother, Carloman, to kill the Pope; that Stephen had fled to Desiderius for protection; and that eventually Christophorus and Sergius were brought out against their will. The letter claimed that while Stephen had managed to save their lives, a group of men had later blinded them, though not on Stephen’s orders. It concluded that had it not been for "his most excellent son Desiderius", Stephen would have been in mortal danger, and that Desiderius had reached an agreement with him to restore to the Church all the lands that she had claims on that were still in Lombard hands.
That such a letter was a fiction was demonstrated very soon after; when Stephen asked Desiderius to fulfil the promises he had made over the body of Saint Peter, the Lombard king responded: "Be content that I removed Christophorus and Sergius, who were ruling you, out of your way, and ask not for rights. Besides, if I do not continue to help you, great trouble will befall you. For Carloman, king of the Franks, is the friend of Christophorus and Sergius, and will be wishful to come to Rome and seize you."
Desiderius continued to stir trouble in Italy; in 771, he managed to convince the bishops of Istria to reject the authority of the Patriarch of Grado, and to have them place themselves under the Patriarch of Aquileia, which was directly under Lombard control. Stephen wrote to the rebellious bishops, suspending them and ordering them to place themselves once again under the authority of Grado, or face excommunication.
After Christophorus’ fall, Paulus Afiarta continued to serve the papal court in a high capacity. During early 772, as Stephen fell ill and it soon became clear that he was dying, Afiarta took advantage of the situation to exile a number of influential clergy and nobles from Rome, while imprisoning others. Then on 24 January, eight days before Stephen’s death, Afiarta dragged the blinded Sergius from his cell in the Lateran and had him strangled.
Stephen died on 24 January or 1 February 772. He was succeeded by Adrian I.
During the Middle Ages, Stephen III was considered a saint in his home island of Sicily. Various calendars, martyrologies, etc., such as the ancient calendar of the saints of Sicily, number Stephen among the saints, and assign his feast to 1 February. The citizens of Syracuse at one point attempted to convince the Holy See to officially endorse the sainthood of the pope, but this was not successful. | https://en.wikipedia.org/wiki?curid=24333 |
Pope Stephen IV
Pope Stephen IV (; c. 770 – 24 January 817) was the bishop of Rome and ruler of the Papal States from June 816 to his death. Stephen belonged to a noble Roman family. In October 816, he crowned Louis the Pious as emperor at Rheims, and persuaded him to release some Roman political prisoners he held in custody. He returned to Rome, by way of Ravenna, sometime in November and died the following January.
The son of a Roman noble called Marinus, Stephen belonged to the same family which also produced the Popes Sergius II and Adrian II. At a young age he was raised at the Lateran Palace during the pontificate of Adrian I, and it was under Leo III that he was ordained a subdeacon before he was subsequently made a deacon. Very popular among the Roman people, within ten days of Leo III's death, he was escorted to Saint Peter's Basilica and consecrated bishop of Rome on or about 22 June 816. It has been conjectured that his rapid election was an attempt by the Roman clergy to ensure that the Carolingian emperor Louis the Pious could not interfere.
Immediately after his consecration Stephen ordered the Roman people to swear fidelity to Emperor Louis, after which Stephen sent envoys to the emperor notifying him of his election, and to arrange a meeting between the two at the emperor's convenience. With Louis’ invitation, Stephen left Rome in August 816. Louis's nephew, King Bernard of Italy, was ordered to accompany Stephen to the emperor and the two crossed the Alps together. In early October, the pope and the emperor met at Rheims, where Louis prostrated himself three times before Stephen. At Mass on Sunday, 5 October 816, Stephen anointed Louis as emperor, placing a crown on his head that was claimed to belong to Constantine the Great. At the same time he also crowned Louis’ wife, Ermengarde of Hesbaye, and saluted her as "augusta". This event has been seen as an attempt by the papacy to establish a role in the creation of an emperor, which had been placed in doubt by Louis' self-coronation in 813.
Louis gave Stephen a number of presents, including an estate of land (most likely at Vendeuvre-sur-Barse) granted to the Roman church. They also renewed the pact between the popes and the kings of the Franks, confirming the privileges of the Roman church, and the continued existence of the recently emerged Papal States. Stephen also raised Bishop Theodulf of Orléans to the rank of archbishop, and had Louis release from their exile all political prisoners originally from Rome who had been held by the emperor resulting from the conflict that plagued the early part of Pope Leo III's reign. It is also believed that Stephen asked Louis to enforce reforms for the clergy who lived under the Rule of Chrodegang. This included ensuring that the men and women who lived there were to stay in separate convents, and that they were to hold the houses under a title of common property. He also regulated how much food and wine they could consume.
After visiting Ravenna on his way back from Rheims, Stephen returned to Rome before the end of November 816. Here, he apparently discontinued Leo III's policies of favouring the clergy over the lay aristocracy. After holding the traditional ordination of priests and bishops in December, and after confirming Farfa Abbey’s possessions on condition that the monks recite one hundred Kyrie eleisons every day and make a yearly payment of ten gold solidi to the Roman Church, Stephen died on 24 January 817. He was buried at St. Peter's, and was succeeded by Paschal I. | https://en.wikipedia.org/wiki?curid=24334
Pope Stephen VII
Pope Stephen VII (; died 15 March 931) was the bishop of Rome and nominal ruler of the Papal States from February 929 to his death in 931. A candidate of the infamous Marozia, his pontificate occurred during the period known as the "Saeculum obscurum".
Stephen was a Roman by birth, the son of Theodemundus. He was the cardinal-priest of St Anastasia in Rome. He was probably handpicked by Marozia, the true ruler of Rome during the "Saeculum obscurum", to become pope as a stop-gap measure until her own son John was ready to assume the role.
Very little is known about Stephen's pontificate. During his two years as pope, Stephen confirmed the privileges of a few religious houses in France and Italy. As a reward for helping to free him from the oppression of Hugh of Arles, Stephen granted Cante di Gabrielli the position of papal governor of Gubbio, along with control over a number of key fortresses. Stephen was also noted for the severity with which he treated clergy who strayed in their morals. According to a hostile twelfth-century Greek source, he was apparently the first pope to go about clean-shaven.
Stephen died around 15 March 931, and was succeeded by Marozia's son John XI. | https://en.wikipedia.org/wiki?curid=24338 |
Pope Stephen VIII
Pope Stephen VIII (; died October 942) was the bishop of Rome and nominal ruler of the Papal States from 14 July 939 to his death. His pontificate occurred during the "Saeculum obscurum", when the power of popes was diminished by the ambitious counts of Tusculum, and was marked by the conflict between his patron, Alberic II of Spoleto, and King Hugh of Italy.
Stephen VIII was born of a Roman family, and prior to becoming pope was attached to the church of Saints Silvester and Martin.
After becoming pope, Stephen gave his attention to the situation in West Francia. In early 940, Stephen intervened on behalf of Louis IV of France, who had been trying to bring to heel his rebellious vassals, Hugh the Great and Herbert II of Vermandois, both of whom had appealed for support from King Otto I of Germany. Stephen dispatched a papal legate to the Frankish nobles, instructing them to acknowledge Louis, and to cease their rebellious actions against him, under threat of excommunication. Although the embassy did not achieve its stated objective, it did have the effect of removing the support of the Frankish bishops who had been backing Hugh and Herbert.
Emboldened, Stephen sought to break up the alliance against Louis by offering Herbert's son, Hugh of Vermandois, the office of archbishop of Reims. Along with the pallium, Stephen sent another legate, with instructions to the Frankish nobility, insisting that they submit to Louis. This time they were informed that if the pope had not received their embassies by Christmas, notifying him of their intent to submit to the king, they would be excommunicated. This time, there was a shift in support to Louis, as a number of the more important nobles declared for him, and by the end of 942, all of the nobility had affirmed their loyalty to Louis, and notified the pope of their intent.
The continuing domination of the counts of Tusculum was evident throughout Stephen's pontificate, and the period is thus known as "Saeculum obscurum". Although Stephen was subject to Alberic II of Spoleto and did not in reality rule the Papal States, Stephen himself was not a member of that family, nor had he any relationship with Alberic's mother, Marozia, who had dominated Roman and papal politics during the preceding decades. Stephen was, however, caught up in the ongoing conflict between Alberic and King Hugh of Italy, with Hugh besieging Rome in 940. After a failed attempt to assassinate him, which involved a number of bishops, Alberic cracked down on any potential dissent in Rome, with his enemies either scourged, beheaded or imprisoned. If there is any truth to Martin of Opava’s account of the torture and maiming of Stephen VIII by supporters of Alberic, it must have occurred at this juncture, in the aftermath of the conspiracy, and just prior to Stephen's death.
On 17 August 942, Alberic summoned a council in Rome, where he demonstrated his control over the papacy by making use of various papal officials, such as the primicerius, the secundicerius of the notaries, and the vestararius. Stephen died during October 942, and was succeeded by Marinus II.
According to the late 13th-century chronicler Martin of Opava, Stephen VIII was a German, elected pope through the power and influence of his relative Otto I. Martin states that Otto ignored the will of the cardinals in imposing Stephen upon them, and because Stephen was hated for being a German, he was taken by supporters of Alberic II, who proceeded to maim and disfigure him to such an extent that Stephen was unable to appear in public again. This version of events has largely been discredited; contemporary and near-contemporary catalogues state that Stephen was a Roman. Further, Otto's intervention in and influence over Italian affairs was still over a decade away, and during this period Otto was still trying to consolidate his hold on power in Germany, facing major rebellions by the German dukes. Consequently, Otto would have been too preoccupied to concern himself with the papal succession at this juncture. Finally, Stephen's intervention on behalf of the Frankish king Louis IV (who was in conflict with Otto) would not have occurred had Stephen been a relative of the German king, and had Stephen received the papal throne through Otto's intervention. The maiming of Stephen may have occurred, however, in the aftermath of the conspiracy against Alberic in the middle of 942. | https://en.wikipedia.org/wiki?curid=24339
Pope Stephen IX
Pope Stephen IX (; c. 1020 – 29 March 1058) was the bishop of Rome and ruler of the Papal States from 3 August 1057 to his death.
Christened Frederick, he was a younger brother of Duke Godfrey the Bearded of Lorraine, and part of the Ardennes-Verdun dynasty that would play a prominent role in the politics of the period, which included their strong ties to the abbey of St. Vanne.
Frederick, previously archdeacon of St. Lambert's Cathedral in Liège, was appointed cardinal-deacon of Santa Maria in Domnica by Pope Leo IX, and later raised to cardinal-presbyter of San Crisogono by Pope Victor II. In 1054, he discharged the function of one of three papal legates at Constantinople, participating in the events that led to the East-West Schism. In 1057, he was appointed abbot of Monte Cassino.
On 3 August 1057, five days after the death of Pope Victor II, Frederick was chosen to become the new pope. He took the name Stephen IX. As pope, he enforced the policies of the Gregorian Reform as to clerical celibacy. In regional politics, he planned for the expulsion of the Normans from southern Italy, and to achieve this he decided, at the beginning of 1058, to send a delegation to the new Byzantine emperor Isaac I Komnenos (1057–1059). The papal delegates departed from Rome, but when they reached Byzantine-held Bari, news came that Stephen IX had died, and the mission was abandoned.
At the beginning of 1058, Stephen IX was planning the elevation of his brother to the imperial throne when he was seized by a severe illness. After a partial recovery, Stephen IX died at Florence on 29 March 1058. He is considered by the modern Catholic Church to have been succeeded by Nicholas II, though others consider his successor to be Benedict X, now officially regarded as an antipope. | https://en.wikipedia.org/wiki?curid=24340 |
Projective plane
In mathematics, a projective plane is a geometric structure that extends the concept of a plane. In the ordinary Euclidean plane, two lines typically intersect in a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus "any" two distinct lines in a projective plane intersect in one and only one point.
A projective plane is a 2-dimensional projective space, but not all projective planes can be embedded in 3-dimensional projective spaces. Such embeddability is a consequence of a property known as Desargues' theorem, not shared by all projective planes.
A projective plane consists of a set of lines, a set of points, and a relation between points and lines called incidence, having the following properties: (1) given any two distinct points, there is exactly one line incident with both of them; (2) given any two distinct lines, there is exactly one point incident with both of them; (3) there are four points such that no line is incident with more than two of them.
The second condition means that there are no parallel lines. The last condition excludes the so-called degenerate cases (see below). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point "P" is incident with line "ℓ" " is used instead of either ""P" is on "ℓ" " or ""ℓ" passes through "P" ".
To turn the ordinary Euclidean plane into a projective plane proceed as follows: (1) to each parallel class of lines (a maximal set of mutually parallel lines) associate a single new point, called a point at infinity, regarded as incident with every line of its class; (2) add a single new line, the line at infinity, regarded as incident with exactly the points at infinity.
The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane. The process outlined above, used to obtain it, is called "projective completion" or "projectivization". This plane can also be constructed by starting from R3 viewed as a vector space, see below.
The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged. Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative "x"-coordinates, but the rest of their points are replaced with the points of the line with the same "y"-intercept but twice the slope wherever their "x"-coordinate is positive.
The Moulton plane has parallel classes of lines and is an affine plane. It can be projectivized, as in the previous example, to obtain the projective Moulton plane. Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane.
This example has just thirteen points and thirteen lines. We label the points P1, ..., P13 and the lines m1, ..., m13. The incidence relation (which points are on which lines) can be given by the following incidence matrix. The rows are labelled by the points and the columns are labelled by the lines. A 1 in row "i" and column "j" means that the point P"i" is on the line m"j", while a 0 (which we represent here by a blank cell for ease of reading) means that they are not incident. The matrix is in Paige–Wexler normal form.
To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1's appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1's appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P1, P4, P5, and P8, for example, will satisfy the third condition. This example is known as the projective plane of order three.
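The verification just described can also be done by machine. The following sketch (not part of the article; plain Python) builds the order-3 plane from normalized homogeneous triples over GF(3), with incidence taken to be the vanishing of the dot product mod 3, and checks both incidence conditions:

```python
# A minimal sketch (an illustration, not the article's matrix): verify the two
# incidence conditions for the projective plane of order 3, PG(2,3), over GF(3).
from itertools import product

q = 3
# Normalized representatives of the 1-dimensional subspaces of GF(3)^3:
# nonzero triples whose first nonzero coordinate equals 1.
points = [v for v in product(range(q), repeat=3)
          if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]
lines = points  # by duality, lines can be indexed by the same triples

def incident(p, l):
    """Point p lies on line l iff their dot product vanishes mod q."""
    return sum(a * b for a, b in zip(p, l)) % q == 0

assert len(points) == q * q + q + 1 == 13

# Every pair of distinct points lies on exactly one common line.
for i, p1 in enumerate(points):
    for p2 in points[i + 1:]:
        common = [l for l in lines if incident(p1, l) and incident(p2, l)]
        assert len(common) == 1

# Every pair of distinct lines meets in exactly one common point.
for i, l1 in enumerate(lines):
    for l2 in lines[i + 1:]:
        common = [p for p in points if incident(p, l1) and incident(p, l2)]
        assert len(common) == 1
print("PG(2,3): 13 points, 13 lines, incidence conditions verified")
```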
Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. In this construction, each "point" of the real projective plane is the one-dimensional subspace (a "geometric" line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a ("geometric") plane through the origin in the 3-space. This idea can be generalized and made more precise as follows.
Let "K" be any division ring (skewfield). Let "K"3 denote the set of all triples "x" = ("x"0, "x"1, "x"2) of elements of "K" (a Cartesian product viewed as a vector space). For any nonzero "x" in "K"3, the minimal subspace of "K"3 containing "x" (which may be visualized as all the vectors in a line through the origin) is the subset {"xk" : "k" in "K"} of "K"3. Similarly, let "x" and "y" be linearly independent elements of "K"3, meaning that "xk" + "ym" = 0 implies that "k" = "m" = 0. The minimal subspace of "K"3 containing "x" and "y" (which may be visualized as all the vectors in a plane through the origin) is the subset {"xk" + "ym" : "k", "m" in "K"} of "K"3. This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing "k" and "m" and taking the multiples of the resulting vector. Different choices of "k" and "m" that are in the same ratio will give the same line.
The projective plane over "K", denoted PG(2,"K") or "K"P2, has a set of "points" consisting of all the 1-dimensional subspaces in "K"3. A subset "L" of the points of PG(2,"K") is a "line" in PG(2,"K") if there exists a 2-dimensional subspace of "K"3 whose set of 1-dimensional subspaces is exactly "L".
Verifying that this construction produces a projective plane is usually left as a linear algebra exercise.
An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set "K"3 ∖ {(0, 0, 0)} modulo the equivalence relation "x" ~ "xk", for all nonzero "k" in "K".
Lines in the projective plane are defined exactly as above.
The coordinates ("x"0, "x"1, "x"2) of a point in PG(2,"K") are called homogeneous coordinates. Each triple ("x"0, "x"1, "x"2) represents a well-defined point in PG(2,"K"), except for the triple (0, 0, 0), which represents no point. Each point in PG(2,"K"), however, is represented by many triples.
If "K" is a topological space, then "K"P2 inherits a topology via the product, subspace, and quotient topologies.
The real projective plane RP2 arises when "K" is taken to be the real numbers, R. As a closed, non-orientable real 2-manifold, it serves as a fundamental example in topology.
In this construction, consider the unit sphere centered at the origin in R3. Each line through the origin in this construction intersects the sphere in two antipodal points. Since each such line represents a point of RP2, the same model of RP2 is obtained by identifying the antipodal points of the sphere. The lines of RP2 are the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry.
The complex projective plane CP2 arises when "K" is taken to be the complex numbers, C. It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes) serve as fundamental examples in algebraic geometry.
The quaternionic projective plane HP2 is also of independent interest.
By Wedderburn's Theorem, a finite division ring must be commutative and so a field. Thus, the finite examples of this construction are known as "field planes". Taking "K" to be the finite field of "q" = "p""n" elements with prime "p" produces a projective plane of "q"2 + "q" + 1 points. The field planes are usually denoted by PG(2,"q") where PG stands for projective geometry, the "2" is the dimension and "q" is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2,2). The third example above is the projective plane PG(2,3).
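As a quick sanity check of the count "q"2 + "q" + 1, the 1-dimensional subspaces of "K"3 can be enumerated directly. The sketch below (an illustration, assuming a prime order "q" so that GF("q") is simply the integers mod "q") counts the points of the field plane:

```python
# Sketch: count the 1-dimensional subspaces of GF(q)^3, i.e. the points of the
# field plane PG(2, q), for small prime q (assumption: prime q only).
from itertools import product

def num_points(q):
    # Each 1-dim subspace contains q - 1 nonzero vectors, and GF(q)^3 has
    # q**3 - 1 nonzero vectors in total, so the count is (q**3 - 1) / (q - 1).
    nonzero = [v for v in product(range(q), repeat=3) if any(v)]
    assert len(nonzero) == q**3 - 1
    return len(nonzero) // (q - 1)

for q in (2, 3, 5, 7):
    assert num_points(q) == q*q + q + 1
print([num_points(q) for q in (2, 3, 5, 7)])  # [7, 13, 31, 57]
```

For "q" = 2 this recovers the seven points of the Fano plane discussed below.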
The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the figure at right, the seven points are shown as small black balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a "collineation" or "symmetry" of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group (PΓL(3,2) = PGL(3,2)) has 168 elements.
The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above. These planes are called Desarguesian planes, named after Girard Desargues. The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that can not be constructed in this manner are called non-Desarguesian planes, and the Moulton plane given above is an example of one. The PG(2,"K") notation is reserved for the Desarguesian planes. When "K" is a field, a very common case, they are also known as "field planes" and if the field is a finite field they can be called "Galois planes".
A subplane of a projective plane is a subset of the points of the plane which themselves form a projective plane with the same incidence relations.
When "N" is a square, subplanes of order √"N" are called "Baer subplanes". Every point of the plane lies on a line of a Baer subplane, and every line of the plane contains a point of the Baer subplane.
In the finite Desarguesian planes PG(2,"pn"), the subplanes have orders which are the orders of the subfields of the finite field GF("pn"), that is, "pi" where "i" is a divisor of "n". In non-Desarguesian planes, however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order "M" in a plane of order "N" with "M"2 + "M" = "N" is an open question. If such subplanes existed, there would be projective planes of composite (non-prime power) order.
A Fano subplane is a subplane isomorphic to PG(2,2), the unique projective plane of order 2.
A "quadrangle" (a set of four points, no three collinear) in this plane determines six of the lines of the plane. The remaining three points (called the "diagonal points" of the quadrangle) are the points where the lines that do not intersect at a point of the quadrangle meet. The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle).
In finite Desarguesian planes PG(2,"q"), Fano subplanes exist if and only if "q" is even (that is, a power of 2). The situation in non-Desarguesian planes is unsettled. They could exist in any non-Desarguesian plane of order greater than 6, and indeed they have been found in every non-Desarguesian plane in which they have been sought (in both odd and even orders).
An open question is: Does every non-desarguesian plane contain a Fano subplane?
Projectivization of the Euclidean plane produced the real projective plane. The inverse operation — starting with a projective plane, remove one line and all the points incident with that line — produces an affine plane.
More formally, an affine plane consists of a set of lines and a set of points, and a relation between points and lines called incidence, having the following properties: (1) given any two distinct points, there is exactly one line incident with both of them; (2) given any line "ℓ" and any point "P" not incident with "ℓ", there is exactly one line incident with "P" that does not meet "ℓ"; (3) there are four points such that no line is incident with more than two of them.
The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines."
The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. The order of a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2,"q") are denoted by AG(2,"q").
There is a projective plane of order "N" if and only if there is an affine plane of order "N". When there is only one affine plane of order "N" there is only one projective plane of order "N", but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well.
The affine plane "K"2 over "K" embeds into "K"P2 via the map which sends affine (non-homogeneous) coordinates to homogeneous coordinates, ("x"1, "x"2) → (1, "x"1, "x"2).
The complement of the image is the set of points of the form (0, "x"1, "x"2). From the point of view of the embedding just given, these points are the points at infinity. They constitute a line in "K"P2 — namely, the line arising from the plane "x"0 = 0
in "K"3 — called the line at infinity. The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, "x"1, "x"2) is where all lines of slope "x"2 / "x"1 intersect. Consider for example the two lines "u" = {("x", 0) : "x" in "K"} and "y" = {("x", 1) : "x" in "K"}
in the affine plane "K"2. These lines have slope 0 and do not intersect. They can be regarded as subsets of "K"P2 via the embedding above, but these subsets are not lines in "K"P2. Add the point (0, 1, 0) to each subset, and call the results ū and ȳ.
These are lines in "K"P2; ū arises from the plane "x"2 = 0 in "K"3, while ȳ arises from the plane "x"2 = "x"0 in "K"3.
The projective lines ū and ȳ intersect at (0, 1, 0). In fact, all lines in "K"2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in "K"P2.
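A small sketch of this example (an illustration, not from the article), sampling finitely many integer points of each line and assuming the standard embedding ("x", "y") → (1, "x", "y") given above:

```python
# Sketch: the parallel affine lines y = 0 and y = 1 (slope 0), projectivized by
# (x, y) -> (1, x, y), share no affine point; adding the point at infinity
# (0, 1, 0) to each makes them meet exactly there.
u = {(1, x, 0) for x in range(-3, 4)}      # finitely many sample points of y = 0
y = {(1, x, 1) for x in range(-3, 4)}      # finitely many sample points of y = 1
assert u & y == set()                      # parallel: no common affine point

u_bar = u | {(0, 1, 0)}                    # adjoin the point at infinity
y_bar = y | {(0, 1, 0)}
assert u_bar & y_bar == {(0, 1, 0)}        # the lines now meet at infinity
print("lines of slope 0 meet at (0, 1, 0)")
```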
The embedding of "K"2 into "K"P2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding ("x"0, "x"2) → ("x"0, 1, "x"2)
has as its complement those points of the form ("x"0, 0, "x"2), which are then regarded as points at infinity.
When an affine plane does not have the form of "K"2 with "K" a division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra".
One can construct a coordinate "ring"—a so-called planar ternary ring (not a genuine ring)—corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane (OP2), a projective plane over the octonions, is one of these because the octonions do not form a division ring.
Conversely, given a planar ternary ring (R,T), a projective plane can be constructed (see below). The relationship is not one to one: a projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operator T can be used to produce two binary operators on the set R, by "a" + "b" = T("a", 1, "b") and "a" • "b" = T("a", "b", 0).
The ternary operator is linear if T(x,m,k) = x•m + k. When the set of coordinates of a projective plane actually form a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring.
Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring, while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a "Pappian plane". Alternative, not necessarily associative, division algebras like the octonions correspond to Moufang planes.
There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus' theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement, as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; a proof using only more "elementary" algebraic facts about division rings has also been given.
To describe a finite projective plane of order "N"(≥ 2) using non-homogeneous coordinates and a planar ternary ring:
On these points, construct the following lines:
For example, for "N"=2 we can use the symbols {0,1} associated with the finite field of order 2. The ternary operation defined by T(x,m,k) = xm + k with the operations on the right being the multiplication and addition in the field yields the following:
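The table referred to above can be regenerated with a short sketch (plain Python, not part of the article) that tabulates the linear ternary operation over the field of two elements:

```python
# Sketch tabulating the linear ternary operation T(x, m, k) = x*m + k over
# GF(2) = {0, 1}, with multiplication and addition taken mod 2.
from itertools import product

def T(x, m, k, q=2):
    return (x * m + k) % q

table = {(x, m, k): T(x, m, k) for x, m, k in product((0, 1), repeat=3)}
for (x, m, k), v in sorted(table.items()):
    print(f"T({x},{m},{k}) = {v}")
```

For instance T(1,1,1) = 1·1 + 1 = 0 in GF(2), since addition is mod 2.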
Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane. They are:
These seven cases are not independent: the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way):
1) For any number of points "P"1, ..., "P""n", and lines "L"1, ..., "L""m",
2) For any number of points "P"1, ..., "P""n", and lines "L"1, ..., "L""n", (same number of points as lines)
A collineation of a projective plane is a bijective map of the plane to itself which maps points to points and lines to lines and preserves incidence, meaning that if "σ" is a bijection and point P is on line m, then P"σ" is on m"σ".
If "σ" is a collineation of a projective plane, a point P with P = P"σ" is called a fixed point of "σ", and a line m with m = m"σ" is called a fixed line of "σ". The points on a fixed line need not be fixed points; their images under "σ" are merely constrained to lie on this line. The collection of fixed points and fixed lines of a collineation form a closed configuration, which is a system of points and lines that satisfy the first two but not necessarily the third condition in the definition of a projective plane. Thus, the fixed point and fixed line structure for any collineation either form a projective plane by themselves, or a degenerate plane. Collineations whose fixed structure forms a plane are called planar collineations.
A homography (or "projective transformation") of PG(2,"K") is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates they can be represented by invertible 3 × 3 matrices over "K" which act on the points of PG(2,"K") by "y" = "M" "x"T, where "x" and "y" are points in "K"3 (vectors) and "M" is an invertible 3 × 3 matrix over "K". Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices called the projective linear group.
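A minimal sketch of this action over the rationals, using exact arithmetic (the matrix "M" below is an arbitrary invertible example chosen for illustration, not one from the article), showing that scalar multiples of "M", or of the point "x", represent the same projective point:

```python
# Sketch: a homography of PG(2, Q) acts on homogeneous coordinates by x -> M x;
# scaling M or x by a nonzero constant yields the same projective point.
from fractions import Fraction

def mat_vec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def normalize(v):
    """Canonical representative: divide by the first nonzero coordinate."""
    pivot = next(c for c in v if c != 0)
    return [Fraction(c, 1) / pivot for c in v]

M = [[2, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]                    # invertible: det = 1
x = [1, 2, 3]                      # homogeneous coordinates of a point

y1 = normalize(mat_vec(M, x))
M5 = [[5 * e for e in row] for row in M]
y2 = normalize(mat_vec(M5, x))                  # a scalar multiple of M ...
y3 = normalize(mat_vec(M, [7 * c for c in x]))  # ... or of x
assert y1 == y2 == y3              # all represent the same point of PG(2, Q)
```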
Another type of collineation of PG(2,"K") is induced by any automorphism of "K", these are called automorphic collineations. If α is an automorphism of "K", then the collineation given by (x0,x1,x2) → (x0α,x1α,x2α) is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2,"K") are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations.
A projective plane is defined axiomatically as an incidence structure, in terms of a set "P" of points, a set "L" of lines, and an incidence relation "I" that determines which points lie on which lines. As P and L are only sets one can interchange their roles and define a plane dual structure.
By interchanging the role of "points" and "lines" in the incidence structure "C" = ("P", "L", "I") we obtain the dual structure "C"* = ("L", "P", "I"*), where "I"* is the inverse relation of "I".
In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as "dualizing" the statement.
If a statement is true in a projective plane C, then the plane dual of that statement must be true in the dual plane C*. This follows since dualizing each statement in the proof "in C" gives a statement of the proof "in C*."
In the projective plane C, it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C* is also a projective plane, called the dual plane of C.
If C and C* are isomorphic, then C is called self-dual. The projective planes PG(2,"K") for any division ring "K" are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes and some that are, such as the Hughes planes.
The Principle of Plane Duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C.
A duality is a map from a projective plane "C" = ("P", "L", I) to its dual plane "C"* = ("L", "P", I*) (see above) which preserves incidence. That is, a duality σ will map points to lines and lines to points ("P"σ = "L" and "L""σ" = "P") in such a way that if a point "Q" is on a line "m" (denoted by "Q" I "m") then "Q""σ" I* "m""σ" ⇔ "m""σ" I "Q""σ". A duality which is an isomorphism is called a correlation. If a correlation exists then the projective plane "C" is self-dual.
In the special case that the projective plane is of the PG(2,"K") type, with "K" a division ring, a duality is called a reciprocity. These planes are always self-dual. By the fundamental theorem of projective geometry a reciprocity is the composition of an automorphic function of "K" and a homography. If the automorphism involved is the identity, then the reciprocity is called a projective correlation.
A correlation of order two (an involution) is called a polarity. If a correlation φ is not a polarity then φ2 is a nontrivial collineation.
It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer "N" ≥ 2 such that the plane has "N"2 + "N" + 1 points, "N"2 + "N" + 1 lines, "N" + 1 points on each line, and "N" + 1 lines through each point.
The number "N" is called the order of the projective plane.
The projective plane of order 2 is called the Fano plane. See also the article on finite geometry.
Using the vector space construction with finite fields there exists a projective plane of order "N" = "p""n", for each prime power "p""n". In fact, for all known finite projective planes, the order "N" is a prime power.
The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck-Ryser-Chowla theorem that if the order "N" is congruent to 1 or 2 mod 4, it must be the sum of two squares. This rules out "N" = 6. The next case "N" = 10 has been ruled out by massive computer calculations. Nothing more is known; in particular, the question of whether there exists a finite projective plane of order "N" = 12 is still open.
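The order-6 exclusion can be checked mechanically. The sketch below (not from the article) implements only the stated necessary condition; note that it cannot rule out "N" = 10, which required the massive computer search mentioned above:

```python
# Sketch of the Bruck-Ryser-Chowla necessary condition: if a projective plane
# of order N exists and N is congruent to 1 or 2 mod 4, then N must be a sum
# of two squares.
def is_sum_of_two_squares(n):
    a = 0
    while a * a <= n:
        b = n - a * a
        r = int(b ** 0.5)
        if r * r == b:
            return True
        a += 1
    return False

def brc_allows(N):
    """Return False only when the theorem definitely rules out order N."""
    if N % 4 in (1, 2):
        return is_sum_of_two_squares(N)
    return True  # the theorem says nothing for N congruent to 0 or 3 mod 4

assert not brc_allows(6)    # 6 = 2 mod 4 and is not a sum of two squares
assert brc_allows(10)       # 10 = 1 + 9 passes, yet no plane of order 10 exists
assert brc_allows(12)       # 12 = 0 mod 4: the theorem is silent, still open
```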
Another longstanding open problem is whether there exist finite projective planes of "prime" order which are not finite field planes (equivalently, whether there exists a non-Desarguesian projective plane of prime order).
A projective plane of order "N" is a Steiner S(2, "N" + 1, "N"2 + "N" + 1) system
(see Steiner system). Conversely, one can prove that all Steiner systems of this form (λ = 2) are projective planes.
The number of mutually orthogonal Latin squares of order "N" is at most "N" − 1, and "N" − 1 mutually orthogonal Latin squares exist if and only if there is a projective plane of order "N".
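For prime "N", the classical construction L"m"("i", "j") = "m"·"i" + "j" mod "N" attains this bound. The following sketch (an illustration, assuming prime "N" only) builds the "N" − 1 squares and checks orthogonality for "N" = 3:

```python
# Sketch: for prime N, the squares L_m(i, j) = (m*i + j) mod N for m = 1..N-1
# are N - 1 mutually orthogonal Latin squares, matching the projective-plane
# bound stated above (assumption: prime N, so arithmetic mod N is a field).
from itertools import product

def latin_squares(N):
    return [[[(m * i + j) % N for j in range(N)] for i in range(N)]
            for m in range(1, N)]

def orthogonal(A, B):
    # Superimposing A and B must produce every ordered pair exactly once.
    pairs = {(A[i][j], B[i][j]) for i, j in product(range(len(A)), repeat=2)}
    return len(pairs) == len(A) ** 2

squares = latin_squares(3)
assert len(squares) == 2                         # N - 1 = 2 squares
assert orthogonal(squares[0], squares[1])
print(squares[0])  # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
```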
While the classification of all projective planes is far from complete, results are known for small orders:
Projective planes may be thought of as projective geometries of "geometric" dimension two. Higher-dimensional projective geometries can be defined in terms of incidence relations in a manner analogous to the definition of a projective plane. These turn out to be "tamer" than the projective planes since the extra degrees of freedom permit Desargues' theorem to be proved geometrically in the higher-dimensional geometry. This means that the coordinate "ring" associated to the geometry must be a division ring (skewfield) "K", and the projective geometry is isomorphic to the one constructed from the vector space "K""d"+1, i.e. PG("d","K"). As in the construction given earlier, the points of the "d"-dimensional projective space PG("d","K") are the lines through the origin in "K""d" + 1 and a line in PG("d","K") corresponds to a plane through the origin in "K""d" + 1. In fact, each "i-dimensional" object in PG("d","K"), with "i" < "d", is an ("i" + 1)-dimensional (algebraic) vector subspace of "K""d" + 1 ("goes through the origin"). The projective spaces in turn generalize to the Grassmannian spaces.
It can be shown that if Desargues' theorem holds in a projective space of dimension greater than two, then it must also hold in all planes that are contained in that space. Since there are projective planes in which Desargues' theorem fails (non-Desarguesian planes), these planes can not be embedded in a higher-dimensional projective space. Only the planes from the vector space construction PG(2,"K") can appear in projective spaces of higher dimension. Some disciplines in mathematics restrict the meaning of projective plane to only this type of projective plane since otherwise general statements about projective spaces would always have to mention the exceptions when the geometric dimension is two. | https://en.wikipedia.org/wiki?curid=24350 |
Pacific Beach, San Diego
Pacific Beach is a neighborhood in San Diego, bounded by La Jolla to the north, Mission Beach and Mission Bay to the south, Interstate 5 to the east, and the Pacific Ocean to the west. Formerly populated largely by young people, surfers, and college students, the neighborhood is gradually becoming older and more affluent because of rising property and rental costs. "P.B.," as it is known by local residents, is home to one of San Diego's more developed nightlife scenes, with a wide variety of bars, eateries, and clothing stores located along Garnet Avenue and Mission Boulevard.
Pacific Beach's namesake beach stretches for miles from the Mission Bay jetty to the cliffs of La Jolla. The boardwalk, officially called Ocean Front Walk/Ocean Boulevard, is a pedestrian walkway that runs approximately 3.2 miles along the beach from the end of Law St. in the north into Mission Beach, ending at the mouth of Mission Bay in the south. There are numerous local shops, bars, hotels, and restaurants along the boardwalk, and it is generally crowded with pedestrians, cyclists, rollerbladers, skateboarders, and shoppers. Adjacent to the boardwalk is Crystal Pier, a public pier and hotel at the west end of Garnet Avenue. The San Diego City Council banned the use of all electric motor scooters in December 2019.
The streets in Pacific Beach were renamed several times before receiving their current designations in 1900. The primary north-south street running parallel to the beach is Mission Blvd., with the streets named after late 19th century federal officials, then incrementing in alphabetical order as they move further from the coast: Bayard, Cass, Dawes, Everts, Fanuel, Gresham, Haines, Ingraham, Jewell, Kendall, Lamont, Morrell, Noyes, Olney, Pendleton, Quincy, Randall, and San Joaquin. Mission Boulevard was formerly Allison Street, being the "A" street of the series. Ingraham was initially named Broadway (1887), then was changed to Izard (1900), back to Broadway (1907) and finally settled as Ingraham Street in 1913.
The east-west streets are mostly named after precious stones. Starting at the north end of Mission Blvd. and heading south, the streets are:
As with many California cities, the history of San Diego's development can be traced back to the completion of a cross-country railroad in 1885. The town developed during the boom years between 1886 and 1888 by D. C. Reed, A. G. Gassen, Charles W. Pauley, R. A. Thomas, and O. S. Hubbell. It was Hubbell who "cleared away the grainfields, pitched a tent, mapped out the lots, hired an auctioneer and started to work". A railway connected Pacific Beach with downtown San Diego starting in 1889, and was extended to La Jolla in 1894.
Early landmarks and attractions in Pacific Beach included an asbestos factory (established in 1888), a race track, and the San Diego College of Letters (1887–1891), none of which survive today. At the turn of the century, lemon growing and packing dominated the local economy. In 1910, the San Diego Army and Navy Academy, a preparatory school, was established in an old college building; a public high school followed in 1922 and a junior high school in 1930. Crystal Pier opened in 1927, and the Roxy movie theater opened in 1943 to cater to a population that grew fivefold during World War II. The postwar period saw the establishment of many hotels: the Bahia (1953), the Catamaran (1959), and Vacation Village (1965). High-rise construction in nearby Mission Bay led to the establishment of a 30-foot height limit for buildings in 1972, an ordinance still in effect. The prominent Ocean Avenue boardwalk was closed in 1982 and became a park.
In 1902, oceanfront lots sold for between $350 and $700. By 1950, the population of Pacific Beach reached 30,000 and the average home sold for $12,000. Nonetheless, a small number of farms remained. Today, homes can sell for millions.
The United States Navy operated an anti-aircraft training center at Pacific Beach during World War II. During the 1960s, development continued to increase with the city's investment in Mission Bay Park, including the development of the Islandia, Vacation Village, and Hilton hotels. In 1964, Sea World opened only a few miles from Pacific Beach.
The original name of this feature was "Bay Point", and today one may still find a USGS benchmark and an associated reference mark (designated DC1025 and DC1026, respectively) with that name there. The "Bay Point Formation" is the name of a local rock stratum first found and described there.
Today, Pacific Beach is home to a younger crowd, including college students, single professionals, and families. The restaurant and nightlife culture has grown extensively, with Garnet Avenue becoming the major hub for places to eat, drink, and shop, and includes a range of bars, restaurants, pubs, and coffee houses.
Pacific Beach public schools are part of the San Diego Unified School District. They include Mission Bay Senior High School, Pacific Beach Middle School, Pacific Beach Elementary, Kate Sessions Elementary, Barnard Elementary, and Crown Point Junior Music Academy.
In addition to bordering the Pacific Ocean and Mission Bay Park, Pacific Beach includes Kate Sessions Park and the Pacific Beach Recreation Center. Kate Sessions Park has a playground, a large lawn with ocean views, and an unmaintained area of many acres used for hiking and mountain biking. Fanuel Street Park is a popular bay-front park with playground equipment suitable for toddlers and school-age children. Rose Creek, which flows through Pacific Beach before emptying into Mission Bay, provides open space and a rich wetland area.
The nonprofit Pacific Beach Town Council promotes the area and organizes community events. The Pacific Beach Planning Group advises the city on land use and other issues. The Pacific Beach and Mission Bay Visitor Center provides information on the Pacific Beach Town Council, special events, lodging, dining, and Pacific Beach history. Service clubs include Kiwanis, Rotary, Lions Club, Girl Scouts, Pacific Beach Woman's Club, Surf Club, Friends of the PB Library, PB Garden Club, and Toastmasters.
Pacific Beach is served in print by the daily "San Diego Union Tribune" and the weekly "Beach & Bay Press".
Pacific Beach is one of the main centers of nightlife in San Diego. Garnet Avenue, between Ingraham Street and Mission Boulevard, is where many bars and restaurants are located. The nightlife in Pacific Beach caters to a younger crowd than the nightlife in downtown San Diego.
In John Dos Passos's "The 42nd Parallel" (1930), Fainy "Mac" McCreary briefly lives in a bungalow in Pacific Beach with his wife Maisie and their daughter Rose.
Robert Young, producer of the nationally syndicated radio program "The Dr. Demento Show" | https://en.wikipedia.org/wiki?curid=24353 |
Pharmacology
Pharmacology is a branch of medicine and pharmaceutical sciences which is concerned with the study of drug or medication action, where a drug can be broadly or narrowly defined as any man-made, natural, or endogenous (from within the body) molecule which exerts a biochemical or physiological effect on the cell, tissue, organ, or organism (sometimes the word pharmacon is used as a term to encompass these endogenous and exogenous bioactive species). More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. If substances have medicinal properties, they are considered pharmaceuticals.
The field encompasses drug composition and properties, synthesis and drug design, molecular and cellular mechanisms, organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics, interactions, chemical biology, therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics. Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics discusses the interactions of chemicals with biological receptors, and pharmacokinetics discusses the absorption, distribution, metabolism, and excretion (ADME) of chemicals by biological systems. Pharmacology is not synonymous with pharmacy, and the two terms are frequently confused. Pharmacology, a biomedical science, deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in clinical settings, whether in a dispensing or clinical care role. The primary contrast between the two fields is the distinction between direct patient care (pharmacy practice) and the science-oriented research field driven by pharmacology.
The word "pharmacology" is derived from Greek , "pharmakon", "drug, poison, (paranormal)|, "-logia" "study of", "knowledge of" (cf. the etymology of "pharmacy"). Pharmakon is related to pharmakos, the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion.
The origins of clinical pharmacology date back to the Middle Ages, with pharmacognosy and Avicenna's "The Canon of Medicine", Peter of Spain's "Commentary on Isaac", and John of St Amand's "Commentary on the Antedotary of Nicholas". Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias. Crude drugs have been used since prehistory as a preparation of substances from natural sources. However, the active ingredients of crude drugs are not purified, and the substance is adulterated with other substances.
Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese, Mongolian, Tibetan and Korean medicine. However, much of this has since been regarded as pseudoscience. Pharmacological substances known as entheogens may have spiritual and religious use and historical context.
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering. Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine, quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. The first pharmacology department was set up by Rudolf Buchheim in 1847, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. Subsequently, the first pharmacology department in England was set up in 1905 at University College London.
Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph, and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. Modern pharmacologists use techniques from genetics, molecular biology, biochemistry, and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventative care, diagnostics, and ultimately personalized medicine.
The discipline of pharmacology can be divided into many sub disciplines each with a specific focus.
Pharmacology can also focus on specific systems comprising the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology, in the central and peripheral nervous systems, and immunopharmacology, in the immune system. Other divisions include cardiovascular, renal and endocrine pharmacology. Psychopharmacology is the study of the effects of drugs on the psyche, mind and behavior, such as the behavioral effects of psychoactive drugs. It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche.
Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity. Pharmacomicrobiomics is concerned with the interaction between drugs and the gut microbiome. Pharmacogenomics is the application of genomic technologies to drug discovery and further characterization of drugs related to an organism's entire genome. For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment.
Pharmacology can be applied within clinical sciences. Clinical pharmacology is the basic science of pharmacology focusing on the application of pharmacological principles and methods in the medical clinic and towards patient care and outcomes. An example of this is posology, which is the study of how medicines are dosed.
Pharmacology is closely related to toxicology. Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemicals' adverse effects and risk assessment.
Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy.
Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development. Drug discovery starts with drug design, which is the inventive process of finding new drugs. In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. Drug discovery is related to pharmacoeconomics, the sub-discipline of health economics that considers the value of drugs. Pharmacoeconomics evaluates the costs and benefits of drugs in order to guide optimal healthcare resource allocation. The techniques used for the discovery, formulation, manufacturing and quality control of drugs are studied by pharmaceutical engineering, a branch of engineering. Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs.
Development of medication is a vital concern to medicine, but also has strong economical and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States, the main body that regulates pharmaceuticals is the Food and Drug Administration; they enforce standards set by the United States Pharmacopoeia. In the European Union, the main body that regulates pharmaceuticals is the EMA, and they enforce standards set by the European Pharmacopoeia.
The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure-activity relationship (SAR). When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling.
Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things:
The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing.
When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value.
Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects.
Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology is the study of the effects of drugs in large numbers of people and relates to the broader fields of epidemiology and public health. Pharmacoenvironmentology or environmental pharmacology is a field intimately linked with ecology and public health. Human health and ecology are intimately related so environmental pharmacology studies the environmental effect of drugs and pharmaceuticals and personal care products in the environment.
Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology.
Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light. The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. This is done to achieve control over when and where drugs are active, in a reversible manner, to prevent side effects and the release of drugs into the environment.
The study of chemicals requires intimate knowledge of the biological system affected. With the knowledge of cell biology and biochemistry increasing, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors, to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate cellular signaling pathways controlling cellular function).
Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution), and pharmacodynamics describes the chemical's effect on the body (desired or toxic).
Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems. The major systems studied in pharmacology can be categorised by their ligands and include acetylcholine, adrenaline, glutamate, GABA, dopamine, histamine, serotonin, cannabinoid and opioid.
Molecular targets in pharmacology include receptors, enzymes and membrane transport proteins. Enzymes can be targeted with enzyme inhibitors. Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors, ligand gated ion channels and receptor tyrosine kinases.
Pharmacological models include the Hill equation, Cheng-Prusoff equation and Schild regression. Pharmacological theory often investigates the binding affinity of ligands to their receptors.
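As an illustrative sketch of the first of these models (the standard textbook form, not an equation taken from this article), the Hill equation relates the effect of a ligand to its concentration:

```latex
E = E_{\max} \cdot \frac{[L]^{n}}{EC_{50}^{\,n} + [L]^{n}}
```

Here $[L]$ is the ligand concentration, $EC_{50}$ the concentration producing half-maximal effect, and $n$ the Hill coefficient describing the steepness (cooperativity) of the concentration-response curve; when $n = 1$ the relation reduces to simple hyperbolic binding.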
Medication is said to have a narrow or wide "therapeutic index", a certain safety factor or "therapeutic window". This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin, some antiepileptics, aminoglycoside antibiotics). Most anti-cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors.
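As a minimal sketch of the ratio just described, the therapeutic index can be computed as the median toxic dose divided by the median effective dose; the dose values below are hypothetical, and the five-fold threshold simply follows the "greater than five" rule of thumb in the text:

```python
def therapeutic_index(td50: float, ed50: float) -> float:
    """Therapeutic index: ratio of median toxic dose (TD50)
    to median effective dose (ED50)."""
    return td50 / ed50

def classify(ti: float) -> str:
    # Per the text: an index close to one is "narrow",
    # an index greater than five is "wide".
    return "wide" if ti > 5 else "narrow"

# Hypothetical doses for illustration only (same units, e.g. mg/kg).
ti = therapeutic_index(td50=100.0, ed50=10.0)
print(ti)            # 10.0
print(classify(ti))  # wide
print(classify(1.5)) # narrow
```

A narrow result (close to one) flags a compound whose effective and toxic doses nearly coincide, which is why such drugs may require therapeutic drug monitoring.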
The effect of drugs can be described with Loewe additivity.
Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs.
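A common first approximation of the elimination process mentioned above is a one-compartment model with first-order kinetics. The sketch below uses a hypothetical elimination rate constant (not a value from this article) to show how half-life follows from the rate constant via t1/2 = ln(2)/k:

```python
import math

def concentration(c0: float, k_el: float, t: float) -> float:
    """Plasma concentration at time t for a one-compartment model
    with first-order elimination: C(t) = C0 * exp(-k_el * t)."""
    return c0 * math.exp(-k_el * t)

def half_life(k_el: float) -> float:
    """Elimination half-life: t_1/2 = ln(2) / k_el."""
    return math.log(2) / k_el

k = 0.1  # hypothetical elimination rate constant, per hour
print(round(half_life(k), 2))                            # 6.93 (hours)
print(round(concentration(100.0, k, half_life(k)), 1))   # 50.0 (half of C0)
```

After one half-life the concentration falls to exactly half its initial value, which is the defining property the second print statement checks.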
When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in "L-ADME": liberation, absorption, distribution, metabolism, and excretion.
Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing.
In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements: the drug must be effective against the condition for which it is approved, and it must meet safety criteria.
Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome.
The safety and effectiveness of prescription drugs in the U.S. are regulated by the federal Prescription Drug Marketing Act of 1987.
The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar role in the UK.
Medicare Part D is a prescription drug plan in the U.S.
The Prescription Drug Marketing Act (PDMA) is an act related to drug policy.
Prescription drugs are drugs regulated by legislation.
The International Union of Basic and Clinical Pharmacology, Federation of European Pharmacological Societies and European Association for Clinical Pharmacology and Therapeutics are organisations representing standardisation and regulation of clinical and scientific pharmacology.
Systems for medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by the Food and Drug Administration; the Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act; the Hong Kong Drug Registration, administered by the Pharmaceutical Service of the Department of Health (Hong Kong); and the National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (ATC/DDD), administered by the World Health Organization; the Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan; and the SNOMED C axis. Ingredients of drugs have been categorised by Unique Ingredient Identifier.
The study of pharmacology overlaps with biomedical sciences and is the study of the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries, and promote a better understanding of human physiology. Students of pharmacology must have detailed working knowledge of aspects of physiology, pathology and chemistry. Modern pharmacology is interdisciplinary and relates to biophysical and computational sciences, and analytical chemistry. Whereas a pharmacy student will eventually work in a pharmacy dispensing medications, a pharmacologist will typically work within a laboratory setting. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a medical school curriculum. | https://en.wikipedia.org/wiki?curid=24354 |
Perth
Perth is the capital and largest city of the Australian state of Western Australia (WA). It is named after the city of Perth, Scotland, and is Australia's fourth-most populous city, with a population of 2.14 million living in Greater Perth. Perth is part of the South West Land Division of Western Australia, with most of the metropolitan area on the Swan Coastal Plain, between the Indian Ocean and the Darling Scarp. The first areas settled were on the Swan River at Guildford, with the city's central business district and port (Fremantle) both later founded downriver.
Captain James Stirling founded Perth in 1829 as the administrative centre of the Swan River Colony. It gained city status (currently vested in the smaller City of Perth) in 1856 and was promoted to the status of a Lord Mayoralty in 1929. The city inherited its name due to the influence of Sir George Murray, then Member of Parliament for Perthshire and Secretary of State for War and the Colonies. The city's population increased substantially as a result of the Western Australian gold rushes in the late 19th century. During Australia's involvement in World War II, Fremantle served as a base for submarines operating in the Pacific Theatre, and a US Navy Catalina flying boat fleet was based at Matilda Bay. An influx of immigrants after the war, predominantly from Britain, Italy, Greece, and Yugoslavia, led to rapid population growth. This was followed by a surge in economic activity flowing from several mining booms in the late 20th and early 21st centuries that saw Perth become the regional headquarters for several large mining operations.
As part of Perth's role as the capital of Western Australia, the state's Parliament and Supreme Court are in the city, as is Government House, the residence of the Governor of Western Australia. Perth came seventh in the Economist Intelligence Unit's August 2016 list of the world's most liveable cities and was classified by the Globalization and World Cities Research Network in 2010 as a Beta world city. It hosted the 1962 Commonwealth Games.
Perth is divided into 30 local government areas and 250 suburbs, stretching from Two Rocks in the north to Singleton in the south, and east inland to The Lakes. Outside of the main CBD, important urban centres within Perth include Armadale, Fremantle, Joondalup, Midland and Rockingham. Most of those were originally established as separate settlements and retained a distinct identity after being subsumed into the wider metropolitan area. Mandurah, Western Australia's second-largest city, has in recent years formed a conurbation with Perth along the coast, though for most purposes it is still considered a separate city.
Aboriginal Australians have inhabited the Perth area for at least 38,000 years, as evidenced by archaeological remains at Upper Swan. The Noongar people occupied the southwest corner of Western Australia and lived as hunter-gatherers. The wetlands on the Swan Coastal Plain were particularly important to them, both spiritually (featuring in local mythology) and as a source of food.
The Noongar people know the area where Perth now stands as Boorloo. Boorloo formed part of the territory of the Mooro, a Noongar clan, which at the time of British settlement had Yellagonga as their leader. The Mooro was one of several Noongar clans based around the Swan River known collectively as the Whadjuk. The Whadjuk themselves were one of a larger group of fourteen tribes that formed the south-west socio-linguistic block known as the Noongar (meaning "the people" in their language), also sometimes called the Bibbulmun. On 19 September 2006, the Federal Court of Australia brought down a judgment recognising Noongar native title over the Perth metropolitan area in the case of "Bennell v State of Western Australia" [2006] FCA 1243. The judgment was overturned on appeal.
The Dutch Captain Willem de Vlamingh and his crew made the first documented sighting of the present-day Perth region by Europeans on 10 January 1697. Other Europeans made subsequent sightings between this date and 1829, but as in the case of the sighting and observations made by Vlamingh, they adjudged the area inhospitable and unsuitable for the agriculture that would be needed to sustain a European-style settlement.
Although the Colony of New South Wales had established a convict-supported settlement at King George's Sound (later Albany) on the south coast of Western Australia in 1826 in response to rumours that the area would be annexed by France, Perth was the first full-scale settlement by Europeans in the western third of the continent. The British colony would be officially designated Western Australia in 1832 but was known informally for many years as the Swan River Colony after the area's major watercourse.
On 4 June 1829, newly arriving British colonists had their first view of the mainland, and Western Australia's founding has since been recognised by a public holiday on the first Monday in June each year. Captain James Stirling, aboard "Parmelia", said that Perth was "as beautiful as anything of this kind I had ever witnessed". On 12 August that year, Helen Dance, wife of the captain of the second ship, "Sulphur", cut down a tree to mark the founding of the town.
It is clear that Stirling had already selected the name "Perth" for the capital well before the town was proclaimed, as his proclamation of the colony, read in Fremantle on 18 June 1829, ended "given under my hand and Seal at Perth this 18th Day of June 1829. James Stirling Lieutenant Governor". The only contemporary information on the source of the name comes from Fremantle's diary entry for 12 August, which records that they "named the town Perth according to the wishes of Sir George Murray". Murray was born in Perth, Scotland, and was in 1829 Secretary of State for the Colonies and Member for Perthshire in the British House of Commons. The town was named after the Scottish Perth, in Murray's honour. Beginning in 1831, hostile encounters between the British settlers and the Noongar people – both large-scale land users, with conflicting land value systems – increased considerably as the colony grew. The hostile encounters between the two groups of people resulted in multiple events, including the execution of the Whadjuk elder Midgegooroo, the death of his son Yagan in 1833, and the Pinjarra massacre in 1834.
The relations between the Noongar people and the Europeans were strained due to these events. Because of the large number of buildings in and around "Boorloo", the local Whadjuk Noongar people were slowly dispossessed of their country. They were forced to camp around prescribed areas, including the swamps and lakes north of the settlement area including Third Swamp, known to them as "Boodjamooling". Boodjamooling continued to be a main campsite for the remaining Noongar people in the Perth region and was also used by travellers, itinerants, and homeless people. By the gold-rush days of the 1890s, they were joined by miners who were en route to the goldfields.
In 1850, at a time when penal transportation to Australia's eastern colonies had ceased, Western Australia was opened to convicts at the request of farming and business people due to a shortage of labour. Over the next eighteen years, 9,721 convicts arrived in Western Australia aboard 43 ships.
Queen Victoria announced the city status of Perth in 1856. Despite this proclamation, Perth was still a quiet town, described in 1870 by a Melbourne journalist as: "...a quiet little town of some 3000 inhabitants spread out in straggling allotments down to the water's edge, intermingled with gardens and shrubberies and half rural in its aspect ... The main streets are macadamised, but the outlying ones and most of the footpaths retain their native state from the loose sand — the all pervading element of Western Australia — productive of intense glare or much dust in the summer and dissolving into slush during the rainy season."
With the discovery of gold at Kalgoorlie and Coolgardie in the late 19th century, Western Australia experienced a mining boom, and Perth's population grew from approximately 8,500 in 1881 to 61,000 in 1901.
After a referendum in 1900, Western Australia joined the Federation of Australia in 1901. It was the last of the Australian colonies to agree to join the Federation, and did so only after the other colonies had offered several concessions, including the construction of a transcontinental railway line from Port Augusta in South Australia to Kalgoorlie to link Perth with the eastern states.
In 1927, Indigenous people were prohibited from entering large swathes of Perth under penalty of imprisonment, a ban that lasted until 1954.
In 1933, Western Australia voted in a referendum to leave the Australian Federation, with a majority of two to one in favour of secession. However, the state general election held at the same time as the referendum had voted out the incumbent "pro-independence" government, replacing it with a government that did not support the independence movement. Respecting the result of the referendum, the new government nonetheless petitioned the Imperial Parliament at Westminster. The House of Commons established a select committee to consider the issue but after 18 months of negotiations and lobbying, finally refused to consider the matter, declaring that it could not legally grant secession.
In 1962, Perth received global media attention when city residents lit their house lights and streetlights as American astronaut John Glenn passed overhead while orbiting the earth on Friendship 7. This led to it being nicknamed the "City of Light". The city repeated the act as Glenn passed overhead on the Space Shuttle in 1998.
Perth's development and relative prosperity, especially since the mid-1960s, has resulted from its role as the main service centre for the state's resource industries, which extract gold, iron ore, nickel, alumina, diamonds, mineral sands, coal, oil, and natural gas. Whilst most mineral and petroleum production takes place elsewhere in the state, the non-base services provide most of the employment and income to the people of Perth.
The central business district of Perth is bounded by the Swan River to the south and east, with Kings Park on the western end and the railway reserve as the northern border. A state and federally funded project named Perth City Link sank a section of the railway line to allow easy pedestrian access between Northbridge and the CBD. The Perth Arena is a building in the city link area that has received several architectural awards from institutions such as the Design Institute of Australia, the Australian Institute of Architects, and Colorbond. St Georges Terrace is the area's prominent street, with office space in the CBD. Hay Street and Murray Street have most of the retail and entertainment facilities. The city's tallest building is Central Park, the eighth tallest building in Australia. The CBD until 2012 was the centre of a mining-induced boom, with several commercial and residential projects being built, including Brookfield Place, an office building for Anglo-Australian mining company BHP Billiton.
Perth's metropolitan area extends along the coast to Two Rocks in the north and Singleton to the south, a distance of approximately . From the coast in the west to Mundaring in the east is a distance of approximately . The Perth metropolitan area covers .
The metropolitan region is defined by the "Planning and Development Act 2005" to include 30 local government areas, with the outer extent being the City of Wanneroo and the City of Swan to the north, the Shire of Mundaring, City of Kalamunda and the City of Armadale to the east, the Shire of Serpentine-Jarrahdale to the southeast and the City of Rockingham to the southwest, and including Rottnest Island and Garden Island off the west coast. This extent correlates with the Metropolitan Region Scheme, and the Australian Bureau of Statistics' Perth (Major Statistical Division).
The metropolitan extent of Perth can be defined in other ways – the Australian Bureau of Statistics Greater Capital City Statistical Area, or Greater Perth in short, consists of that area, plus the City of Mandurah and the Pinjarra Level 2 Statistical Area of the Shire of Murray, while the "Regional Development Commissions Act 1993" includes the Shire of Serpentine-Jarrahdale in the Peel region.
Perth is on the Swan River, named for the native black swans by Willem de Vlamingh, captain of a Dutch expedition and namer of WA's Rottnest Island, who discovered the birds while exploring the area in 1697. This water body was known by Aboriginal inhabitants as "Derbarl Yerrigan". The city centre and most of the suburbs are on the sandy and relatively flat Swan Coastal Plain, which lies between the Darling Scarp and the Indian Ocean. The soils of this area are quite infertile.
Much of Perth was built on the Perth Wetlands, a series of freshwater wetlands running from Herdsman Lake in the west through to Claisebrook Cove in the east.
To the east, the city is bordered by a low escarpment called the Darling Scarp. Perth is on generally flat, rolling land, largely due to the high amount of sandy soils and deep bedrock. The Perth metropolitan area has two major river systems, one made up of the Swan and Canning Rivers, and one of the Serpentine and Murray Rivers, which discharge into the Peel Inlet at Mandurah.
Perth receives moderate, though highly seasonal, winter-based rainfall. Summers are generally hot and dry, lasting from December to March, with February generally the hottest month. Winters are mild and wet, giving Perth a hot-summer Mediterranean climate (Köppen climate classification "Csa"). Perth has an average of 8.8 hours of sunshine per day, which equates to around 3200 hours of sunshine and 138.7 clear days annually, making it Australia's sunniest capital city.
Summers are dry but not completely devoid of rain, with sporadic rainfall in the form of short-lived thunderstorms, cold fronts and on occasions decaying tropical cyclones from Western Australia's northwest, which can bring heavy rain. Temperatures above are fairly common in the summer months. The highest temperature recorded in Perth was on 23 February 1991, although Perth Airport recorded on the same day. On most summer afternoons a sea breeze, known locally as the "Fremantle Doctor", blows from the southwest, providing relief from the hot northeasterly winds. Temperatures often fall below a few hours after the arrival of the wind change. In the summer, the 3 pm dewpoint averages at around .
Winters are wet but mild, with most of Perth's annual rainfall between May and September. Winters see significant rainfall as frontal systems move across the region, interspersed with clear and sunny days where minimum temperatures tend to drop below . The lowest temperature recorded in Perth was on 17 June 2006. The lowest temperature within the Perth metropolitan area was on the same day at Jandakot Airport, although temperatures at or below zero are rare occurrences. The lowest maximum temperature recorded in Perth is on 26 June 1956. Daytime maximums below occur approximately three and a half days per winter on average. It occasionally gets cold enough for frost to form. While snow has never been recorded in the Perth CBD, light snowfalls have been reported in outer suburbs of Perth in the Perth Hills around Kalamunda, Roleystone and Mundaring. The most recent snowfall was in 1968.
The rainfall pattern has changed in Perth and southwest Western Australia since the mid-1970s. A significant reduction in winter rainfall has been observed with a greater number of extreme rainfall events in the summer, such as the slow-moving storms on 8 February 1992 that brought of rain, heavy rainfall associated with a tropical low on 10 February 2017, which brought of rain, and the remnants of ex-Tropical Cyclone Joyce on 15 January 2018 with . Perth was also hit by a severe thunderstorm on 22 March 2010, which brought of rain and large hail and caused significant damage in the metropolitan area.
The average sea temperature ranges from in October to in March.
Perth is one of the most isolated major cities in the world. The nearest city with a population of more than 100,000 is Adelaide, over away. Perth is geographically closer to both East Timor and Jakarta, Indonesia, than to Sydney.
Perth is Australia's fourth-most-populous city, having overtaken Adelaide's population in 1984. In June 2018 there were an estimated 2,059,484 residents in the Greater Perth area, representing an increase of approximately 1.1% from the 2017 estimate of 2,037,902.
At the 2016 census, the most commonly nominated ancestries were:
Perth's population is notable for the high proportion of British and Irish born residents. At the 2016 Census, 166,965 England-born Perth residents were counted, ahead of even Sydney (151,614), despite the latter having well over twice the population.
The ethnic make-up of Perth changed in the second half of the 20th century, when significant numbers of continental European immigrants arrived in the city. Prior to this, Perth's population had been almost completely Anglo-Celtic in ethnic origin. As Fremantle was the first landfall in Australia for many migrant ships coming from Europe in the 1950s and 1960s, Perth started to experience a diverse influx of people, including Italians, Greeks, Dutch, Germans, Turks, Croats, and Macedonians. The Italian influence in the Perth and Fremantle area has been substantial, evident in places like the "Cappuccino strip" in Fremantle, featuring many Italian eateries and shops. In Fremantle, the traditional Italian blessing of the fleet festival is held every year at the start of the fishing season. In Northbridge every December is the San Nicola (Saint Nicholas) Festival, which involves a pageant followed by a concert, predominantly in Italian. Suburbs surrounding the Fremantle area, such as Spearwood and Hamilton Hill, also contain high concentrations of Italians, Croatians and Portuguese. Perth has also been home to a small Jewish community since 1829 – numbering 5,082 in 2006 – whose members emigrated primarily from Eastern Europe and, more recently, South Africa.
A more recent wave of arrivals includes White South Africans from South Africa. South Africans overtook those born in Italy as the fourth-largest foreign group in 2001. By 2016, there were 35,262 South Africans residing in Perth. Many Afrikaners and Anglo-Africans emigrated to Perth during the 1980s and 1990s, with the phrase "packing for Perth" becoming associated with South Africans who choose to emigrate abroad, sometimes regardless of the destination. As a result, the city has been described as "the Australian capital of South Africans in exile". The reason for Perth's popularity among white South Africans has often been attributed to the location, the vast amount of land, and the slightly warmer climate compared to other large Australian cities – Perth has a Mediterranean climate reminiscent of Cape Town.
Since the end of the White Australia policy in 1973, Asia has become an increasingly important source of migrants, with communities from Vietnam, Malaysia, Indonesia, Thailand, Singapore, Hong Kong, Mainland China, and India all now well-established. There were 99,229 persons of Chinese descent in Perth in 2016 – 5.5% of the city's population. These are supported by the Australian Eurasian Association of Western Australia, which also serves a community of Portuguese-Malacca Eurasian or Kristang immigrants.
Middle Eastern immigrants also have a presence in Perth, coming from a variety of countries including Saudi Arabia, Syria, Iran, Iraq, Israel, Lebanon, the United Arab Emirates, Oman, Yemen, Afghanistan, and Pakistan.
Perth also has one of the largest Latin American populations in Australia, with Brazilians and Chileans being the largest Latin American groups in Perth.
The Indian community includes a substantial number of Parsees who emigrated from Bombay, Perth being the closest Australian city to India; in 2016, those with Indian ancestry accounted for 3.6% of Perth's population. Perth is also home to the largest population of Anglo-Burmese in the world; many settled here following the independence of Burma in 1948, with immigration taking off after 1962. The city is now the cultural hub for Anglo-Burmese worldwide. There is also a substantial Anglo-Indian population in Perth, who likewise settled in the city following the independence of India.
1.6% of the population, or 31,214 people, identified as Indigenous Australians (Aboriginal Australians and Torres Strait Islanders) in 2016.
At the 2016 census, 73.5% of inhabitants spoke only English at home, with the next most common languages being Mandarin (2.3%), Italian (1.4%), Vietnamese (1.0%), Cantonese (1.0%) and Arabic (0.7%).
32.1% of 2016 census respondents in Perth reported no religion, compared with 29.6% of the national population. In 1911, the national figure was 0.4%.
Catholics are the largest single Christian denomination in the Greater Perth area at 22%. The Personal Ordinariate of Our Lady of the Southern Cross claims over 2,000 members. Perth is the seat of the Roman Catholic Archdiocese of Perth. Anglicans are 13.8% of the population. Perth is the seat of the Anglican Diocese of Perth.
Buddhism and Islam each claim more than 40,000 adherents. Over 39,000 members of the Uniting Church in Australia live in Perth. Perth has the third largest Jewish population in Australia, numbering approximately 20,000, with both Orthodox and Progressive synagogues and a Jewish Day School. The Bahá'í community in Perth numbers around 1,500. Hinduism has over 20,000 adherents in Perth; the Diwali (festival of lights) celebration in 2009 attracted over 20,000 visitors. There are Hindu temples in Canning Vale, Anketell and a Swaminarayan temple in Bennett Springs. Hinduism is the fastest growing religion in Australia. Perth is also home to 12,000 Latter-day Saints and the Perth Australia Temple of The Church of Jesus Christ of Latter-day Saints.
Perth, like the rest of Australia, is governed by three levels of government: local, state, and federal.
The Perth metropolitan area is divided into thirty local government bodies, including the City of Perth, which administers Perth's central business district. The outer extent of the administrative region of Perth comprises the City of Wanneroo and the City of Swan to the north, the Shire of Mundaring, City of Kalamunda and the City of Armadale to the east, the Shire of Serpentine-Jarrahdale to the southeast and the City of Rockingham to the southwest, and includes Rottnest Island and Garden Island off the west coast.
Perth houses the Parliament of Western Australia and the Governor of Western Australia. Of the Legislative Assembly's 59 seats, 42 are based in Perth's metropolitan area, along with 18 of the Legislative Council's 36 seats.
The state's highest court, the Supreme Court, is located in Perth, along with the District and Family Courts. The Magistrates' Court has six metropolitan locations.
Perth is represented by 10 full seats and significant parts of three others in the Federal House of Representatives, with the seats of Canning, Pearce and Brand including some areas outside the metropolitan area.
The Federal Court of Australia and the Federal Circuit Court of Australia (previously the Federal Magistrates Court) occupy the Commonwealth Law Courts building on Victoria Avenue, which is also the location for annual Perth sittings of Australia's High Court.
By virtue of its population and role as the administrative centre for business and government, Perth dominates the Western Australian economy, despite the major mining, petroleum, and agricultural export industries being located elsewhere in the state. Perth's function as the state's capital city, its economic base and population size have also created development opportunities for many other businesses oriented to local or more diversified markets.
Perth's economy has been changing in favour of the service industries since the 1950s. Although one of the major sets of services it provides is related to the resources industry and, to a lesser extent, agriculture, most people in Perth are not connected to either; they have jobs that provide services to other people in Perth.
As a result of Perth's relative geographical isolation, it has never had the necessary conditions to develop significant manufacturing industries other than those serving the immediate needs of its residents, mining, agriculture and some specialised areas, such as, in recent times, niche shipbuilding and maintenance. It was simply cheaper to import all the needed manufactured goods from either the eastern states or overseas.
Industrial employment influenced the economic geography of Perth. After WWII, Perth experienced suburban expansion aided by high levels of car ownership. Workforce decentralisation and transport improvements made it possible for the establishment of small-scale manufacturing in the suburbs. Many firms took advantage of relatively cheap land to build spacious, single-storey plants in suburban locations with plentiful parking, easy access and minimal traffic congestion. "The former close ties of manufacturing with near-central and/or rail-side locations were loosened."
Industrial estates such as Kwinana, Welshpool and Kewdale were post-war additions contributing to the growth of manufacturing south of the river. The establishment of the Kwinana industrial area was supported by standardisation of the east–west rail gauge linking Perth with eastern Australia. Since the 1950s the area has been dominated by heavy industry, including an oil refinery, steel-rolling mill with a blast furnace, alumina refinery, power station and a nickel refinery. Another development, also linked with rail standardisation, was in 1968 when the Kewdale Freight Terminal was developed adjacent to the Welshpool industrial area, replacing the former Perth railway yards.
With significant population growth post-WWII, employment growth occurred not in manufacturing but in retail and wholesale trade, business services, health, education, community and personal services and in public administration. Increasingly it was these services sectors, concentrated around the Perth metropolitan area, that provided jobs.
Perth has also become a hub for technology-focused startups since the early 2000s, which provide a pool of highly skilled jobs for the Perth community. Companies such as Appbot, Agworld, Touchgram and Healthengine all hail from Perth and have made headlines internationally. Programs like StartupWA and incubators such as Spacecubed and Vocus Upstart are focused on creating a thriving startup culture in Perth and growing the next generation of Perth-based employers.
Education is compulsory in Western Australia between the ages of six and seventeen, corresponding to primary and secondary school. Tertiary education is available through several universities and technical and further education (TAFE) colleges.
Students may attend either public schools, run by the state government's Department of Education, or private schools, usually associated with a religion.
The Western Australian Certificate of Education (WACE) is the credential given to students who have completed Years 11 and 12 of their secondary schooling.
In 2012 the minimum requirements for students to receive their WACE changed.
Perth is home to four public universities: the University of Western Australia, Curtin University, Murdoch University, and Edith Cowan University. There is also one private university, the University of Notre Dame Australia.
The University of Western Australia, which was founded in 1911, is renowned as one of Australia's leading research institutions. The university's monumental neo-classical architecture, most of which is carved from white limestone, is a notable tourist destination in the city. It is the only university in the state to be a member of the Group of Eight, as well as the Sandstone universities. It is also the state's only university to have produced a Nobel Laureate: Barry Marshall, who graduated with a Bachelor of Medicine, Bachelor of Surgery in 1975 and was awarded a joint Nobel Prize in physiology or medicine in 2005 with Robin Warren.
Curtin University, previously known as the Western Australian Institute of Technology (1966–1986) and Curtin University of Technology (1986–2010), is Western Australia's largest university by student population.
Murdoch University was founded in 1973 and incorporates Western Australia's only veterinary school and Australia's only theology programme to be completely integrated into a secular university.
Edith Cowan University was established in 1991 from the existing Western Australian College of Advanced Education (WACAE) which itself was formed in the 1970s from the existing Teachers Colleges at Claremont, Churchlands, and Mount Lawley. It incorporates the Western Australian Academy of Performing Arts (WAAPA).
The University of Notre Dame Australia was established in 1990 as a Catholic university, with its lead campus in Fremantle and a large campus in Sydney. The Fremantle campus, in the west end of the city, occupies historic port buildings built in the 1890s, giving Notre Dame a distinct European university atmosphere.
Colleges of TAFE provide trade and vocational training, including certificate- and diploma-level courses. TAFE began as a system of technical colleges and schools under the Education Department, from which they were separated in the 1980s and ultimately formed into regional colleges. Two are in the Perth metropolitan area: North Metropolitan TAFE (formerly Central Institute of Technology and West Coast Institute of Training); and South Metropolitan TAFE (formerly Polytechnic West and Challenger Institute of Technology).
Perth is served by thirty digital free-to-air television channels:
ABC, SBS, Seven, Nine and Ten were also broadcast in an analogue format until 16 April 2013, when the analogue transmission was switched off. Community station Access 31 closed in August 2008. In April 2010 a new community station, West TV, began transmission (in digital format only).
Foxtel provides a subscription-based satellite and cable television service. Perth has its own local newsreaders on ABC (James McHale), Seven (Rick Ardon, Susannah Carr), Nine (Michael Thomson) and Ten (Narelda Jacobs).
Television shows produced in Perth include local editions of the current affairs program "Today Tonight", as well as other locally produced programming.
An annual telethon has been broadcast since 1968 to raise funds for charities including Princess Margaret Hospital for Children. The 24-hour Perth Telethon claims to be "the most successful fundraising event per capita in the world" and raised more than A$20 million in 2013, with a combined total of over A$153 million since 1968.
The main newspapers for Perth are "The West Australian" and "The Sunday Times". Localised free community papers cater for each local government area. There are also many advertising newspapers, such as "The Quokka". The local business paper is "Western Australian Business News".
Radio stations broadcast on AM, FM and DAB+ frequencies. ABC stations include ABC News (585AM), 720 ABC Perth, Radio National (810AM), Classic FM (97.7FM) and Triple J (99.3FM). The six local commercial stations are Hit 92.9, Nova 93.7, Mix 94.5 and 96fm on FM, and 882 6PR and 1080 6IX on AM. DAB+ carries mostly the same stations as FM and AM, plus national stations from the ABC/SBS, Radar Radio and Novanation, along with local stations My Perth Digital, HotCountry Perth, and the Christian station 98five. Major community radio stations include RTRFM (92.1FM), Sonshine FM (98.5FM), SportFM (91.3FM) and Curtin FM (100.1FM).
Online news media covering the Perth area include TheWest.com.au, backed by "The West Australian"; Perth Now, from the newsroom of "The Sunday Times"; WAToday, from Fairfax Media; and other outlets such as TweetPerth on social media.
The Perth Cultural Centre is home to many of the city's major arts, cultural and educational institutions, including the Art Gallery of Western Australia, Western Australian Museum, State Library of Western Australia, State Records Office, and Perth Institute of Contemporary Arts (PICA). The State Theatre Centre of Western Australia is also located there, and is the home of the Black Swan State Theatre Company and the Perth Theatre Company. Other performing arts companies based in Perth include the West Australian Ballet, the West Australian Opera and the West Australian Symphony Orchestra, all of which present regular programmes. The Western Australian Youth Orchestras provide young musicians with performance opportunities in orchestral and other musical ensembles.
Perth is also home to the Western Australian Academy of Performing Arts at Edith Cowan University, from which many actors and broadcasters have launched their careers. The city's main performance venues include the Riverside Theatre within the Perth Convention Exhibition Centre, the Perth Concert Hall, the historic His Majesty's Theatre, the Regal Theatre in Subiaco and the Astor Theatre in Mount Lawley. The largest performance area within the State Theatre Centre, the Heath Ledger Theatre, is named in honour of Perth-born film actor Heath Ledger. Perth Arena can be configured as an entertainment or sporting arena, and concerts are also hosted at other sporting venues, including Optus Stadium, HBF Stadium, and nib Stadium. Outdoor concert venues include Quarry Amphitheatre, Supreme Court Gardens, Kings Park and Russell Square.
Perth has inspired various artistic and cultural works. John Boyle O'Reilly, a Fenian convict transported to Western Australia, published "Moondyne" in 1879, the most famous early novel about the Swan River Colony. Perth is also the setting for various works by novelist Tim Winton, most notably "Cloudstreet" (1991). Songs that refer to the city include "I Love Perth" (1996) by Pavement, "Perth" (2011) by Bon Iver, and "Perth" (2015) by Beirut. Films shot or set in Perth include "Japanese Story" (2003), "These Final Hours" (2013), "Kill Me Three Times" (2014) and "Paper Planes" (2015).
Due to Perth's relative isolation from other Australian cities, overseas performing artists sometimes exclude it from their Australian tour schedules. This isolation, however, has helped foster a strong local music scene. Famous musical performers from Perth include the late AC/DC frontman Bon Scott, whose heritage-listed grave at Fremantle Cemetery is reportedly the most visited grave in Australia. Perth-born performer and artist Rolf Harris became known by the nickname "The Boy From Bassendean". Further notable music acts from Perth include The Triffids, The Scientists, The Drones, Tame Impala, and Karnivool.
Other performers born and raised in Perth include Judy Davis and Melissa George, while performers raised in the city include Tim Minchin, Lisa McCune, Troye Sivan and Isla Fisher. Performers who studied in Perth at the Western Australian Academy of Performing Arts include Hugh Jackman and Lisa McCune.
A number of annual events are held in Perth. The Perth International Arts Festival is a large cultural festival that has been held annually since 1953, and has since been joined by the Winter Arts Festival, Perth Fringe Festival, and Perth Writers Festival. Perth also hosts annual music festivals including Listen Out, Origin and St Jerome's Laneway Festival. The Perth International Comedy Festival features a variety of local and international comedic talent, with performances held at the Astor Theatre and nearby venues in Mount Lawley. Regular night food markets run throughout the summer months across Perth and its surrounding suburbs. Sculpture by the Sea showcases a range of local and international sculptors' creations along Cottesloe Beach, and a wide variety of public art and sculptures is on display across the city throughout the year.
Tourism in Perth is an important part of the state's economy, with approximately 2.8 million domestic visitors and 0.7 million international visitors in the year ending March 2012. Tourist attractions are generally focused around the city centre, Fremantle, the coast, and the Swan River.
In addition to the Perth Cultural Centre, there are dozens of museums across the city. The Scitech Discovery Centre is an interactive science museum, with regularly changing exhibitions on a large range of science and technology based subjects. Scitech also conducts live science demonstration shows and operates the adjacent "Horizon" planetarium. The Western Australian Maritime Museum in Fremantle displays maritime objects from all eras. It houses "Australia II", the yacht that won the 1983 America's Cup, as well as a former Royal Australian Navy submarine. Also in Fremantle is the Army Museum of Western Australia, situated within a historic artillery barracks. The museum consists of several galleries that reflect the Army's involvement in Western Australia and the military service of Western Australians, and holds numerous items of significance, including three Victoria Crosses. Aviation history is represented by the Aviation Heritage Museum in Bull Creek, with its significant collection of aircraft, including a Lancaster bomber and a Catalina of the type operated from the Swan River during WWII.

There are many heritage sites in Perth's CBD, Fremantle, and other parts of the metropolitan area. Some of the oldest remaining buildings, dating back to the 1830s, include the Round House in Fremantle, the Old Mill in South Perth, and the Old Court House in the city centre; a more recent heritage building is the Perth Mint. Registers of important buildings are maintained by the Heritage Council of Western Australia and local governments.

Yagan Square connects Northbridge and the Perth CBD, with a 45-metre-high digital tower and the 9-metre statue "Wirin" designed by Noongar artist Tjyllyungoo. Elizabeth Quay is another notable attraction, featuring the Swan Bells and a panoramic view of the Swan River.
Retail shopping in the Perth CBD is focused around Murray Street and Hay Street. Both these streets are pedestrian malls between William Street and Barrack Street. Forrest Place is another pedestrian mall, connecting the Murray Street mall to Wellington Street and the Perth railway station. A number of arcades run between Hay Street and Murray Street, including the Piccadilly Arcade, which housed the Piccadilly Cinema until it closed in late 2013. Other shopping precincts include Watertown in West Perth, featuring factory outlets for major brands, the historically significant Fremantle Markets, which date to 1897, and the Midland townsite on Great Eastern Highway, combining historic development around the Town Hall and Post Office buildings with the modern Midland Gate shopping centre further east. Joondalup's central business district is largely a shopping and retail area lined with townhouses and apartments, and also features Lakeside Joondalup Shopping City. Joondalup was granted the status of "tourism precinct" by the State Government in 2009, allowing for extended retail trading hours.
The Swan Valley, whose fertile soil is uncommon in the Perth region, features numerous wineries, ranging from the large complex at Houghtons, the state's biggest producer, and Sandalfords to many smaller operators, including microbreweries and rum distilleries. The Swan Valley also contains specialised food producers, many restaurants and cafes, and roadside local-produce stalls that sell seasonal fruit throughout the year. Tourist Drive 203 is a circular route in the Swan Valley, passing by many attractions on West Swan Road and Great Northern Highway.
Kings Park, in central Perth between the CBD and the University of Western Australia, is one of the world's largest inner-city parks, at . It has many landmarks and attractions, including the State War Memorial Precinct on Mount Eliza, Western Australian Botanic Garden, and children's playgrounds. Other features include DNA Tower, a high double helix staircase that resembles the deoxyribonucleic acid (DNA) molecule, and Jacob's Ladder, comprising 242 steps that lead down to Mounts Bay Road.
Hyde Park is another inner-city park north of the CBD. It was gazetted as a public park in 1897, created from part of a chain of wetlands known as Third Swamp. Avon Valley, John Forrest and Yanchep national parks are areas of protected bushland at the northern and eastern edges of the metropolitan area. Within the city's northern suburbs is Whiteman Park, a bushland area with bushwalking trails, bike paths, sports facilities, playgrounds, a vintage tramway, a light railway, motor and tractor museums, and Caversham Wildlife Park.
Perth Zoo, in South Perth, houses a variety of Australian and exotic animals from around the globe. The zoo is home to highly successful breeding programs for orangutans and giraffes, and participates in captive breeding and reintroduction efforts for a number of Western Australian species, including the numbat, the dibbler, the chuditch, and the western swamp tortoise.
More wildlife can be observed at the Aquarium of Western Australia in Hillarys, Australia's largest aquarium, specialising in marine animals that inhabit the western coast of Australia. The northern Perth section of the coastline is known as Sunset Coast; it includes numerous beaches and the Marmion Marine Park, a protected area inhabited by tropical fish, Australian sea lions and bottlenose dolphins, and traversed by humpback whales. Tourist Drive 204, also known as Sunset Coast Tourist Drive, is a designated route from North Fremantle to Iluka along coastal roads.
The climate of Perth allows for extensive outdoor sporting activity, and this is reflected in the wide variety of sports available to residents of the city. Perth was host to the 1962 Commonwealth Games and the 1987 America's Cup defence (based at Fremantle). Australian rules football is the most popular spectator sport in Perth – nearly 23% of Western Australians attended a match at least once in 2009–2010. The two Australian Football League teams located in Perth, the West Coast Eagles and the Fremantle Football Club, have two of the largest fan bases in the country. The Eagles, the older club, is one of the most successful teams in the league and one of the largest sporting clubs in Australia. The next level of football is the Western Australian Football League, comprising nine clubs, each with a League, Reserves and Colts team and a junior football system for ages 7 to 17. Below that is the Western Australian Amateur Football League, comprising 68 clubs servicing senior footballers within the metropolitan area. Other popular sports include cricket, basketball, soccer, and rugby union.
Perth has hosted numerous state and international sporting events. Ongoing international events include the Hopman Cup during the first week of January at Perth Arena, and the Perth International golf tournament at Lake Karrinyup Country Club. In addition, Perth hosted Rally Australia, a round of the World Rally Championship, from 1989 to 2006, and international rugby union games, including qualifying matches for the 2003 Rugby World Cup. The 1991 and 1998 FINA World Championships were held in Perth.
Four races (2006, 2007, 2008 and 2010) in the Red Bull Air Race World Championship have been held on a stretch of the Swan River called Perth Water, using Langley Park as a temporary air field. Several motorsport facilities exist in Perth including Perth Motorplex, catering to drag racing and speedway, and Wanneroo Raceway for circuit racing and drifting, which hosts a V8 Supercars round. Perth also has two thoroughbred racing facilities: Ascot, home of the Railway Stakes and Perth Cup; and Belmont Park.
The WACA Ground opened in the 1890s and has hosted Test cricket since 1970. The Western Australian Athletics Stadium opened in 2009.
Perth has ten large hospitals with emergency departments. Royal Perth Hospital in the city centre is the largest, with others spread around the metropolitan area: Armadale Kelmscott District Memorial Hospital, Joondalup Health Campus, King Edward Memorial Hospital for Women in Subiaco, Rockingham General Hospital, Sir Charles Gairdner Hospital in Nedlands, St John of God Murdoch and Subiaco Hospitals, Midland Health Campus in Midland, and Fiona Stanley Hospital in Murdoch. Perth Children's Hospital is the state's only specialist children's hospital, and Graylands Hospital is the only public stand-alone psychiatric teaching hospital. Most of these are public hospitals, with some operating under public-private partnerships. St John of God Murdoch and Subiaco Hospitals, and Hollywood Hospital are large privately owned and operated hospitals.
A number of other public and private hospitals operate in Perth.
Perth is served by Perth Airport in the city's east for regional, domestic and international flights and Jandakot Airport in the city's southern suburbs for general aviation and charter flights.
Perth has a road network with three freeways and nine metropolitan highways. The Northbridge tunnel, part of the Graham Farmer Freeway, is the only significant road tunnel in Perth.
Perth metropolitan public transport, including trains, buses and ferries, is provided by Transperth, with links to rural areas provided by Transwa. There are 70 railway stations and 15 bus stations in the metropolitan area.
Perth provides zero-fare bus and train trips around the city centre (the "Free Transit Zone"), including four high-frequency CAT bus routes.
The "Indian Pacific" passenger rail service connects Perth with Adelaide and Sydney once per week in each direction. The "Prospector" passenger rail service connects Perth with Kalgoorlie via several Wheatbelt towns, while the "Australind" connects to Bunbury, and the "AvonLink" connects to Northam.
Rail freight terminates at the Kewdale Rail Terminal, south-east of the city centre.
Perth's main container and passenger port is at Fremantle, south-west of the city centre at the mouth of the Swan River. The Fremantle Outer Harbour at Cockburn Sound is one of Australia's major bulk cargo ports.
Perth's electricity is predominantly generated, supplied, and retailed by three Western Australian Government corporations. Verve Energy operates coal and gas power generation stations, as well as wind farms and other power sources. The physical network is maintained by Western Power, while Synergy, the state's largest energy retailer, sells electricity to residential and business customers.
Alinta Energy, previously a government-owned company, had held a monopoly in the domestic gas market since the 1990s. However, in 2013 Kleenheat Gas began operating in the market, allowing consumers to choose their gas retailer.
The Water Corporation is the dominant supplier of water, as well as wastewater and drainage services, in Perth and throughout Western Australia. It is also owned by the state government.
Perth's water supply has traditionally relied on both groundwater and rain-fed dams. Reduced rainfall in the region over recent decades had greatly lowered inflow to reservoirs and affected groundwater levels. Coupled with the city's relatively high growth rate, this led to concerns that Perth could run out of water in the near future. The Western Australian Government responded by building desalination plants, and introducing mandatory household sprinkler restrictions. The Kwinana Desalination Plant was opened in 2006, and the Southern Seawater Desalination Plant at Binningup (on the coast between Mandurah and Bunbury) began operating in 2011. A trial winter (1 June – 31 August) sprinkler ban was introduced in 2009 by the State Government, a move which the Government later announced would be made permanent.
Human pathogen
A human pathogen is a pathogen (microbe or microorganism such as a virus, bacterium, prion, or fungus) that causes disease in humans.
The human physiological defense against common pathogens (such as "Pneumocystis") is mainly the responsibility of the immune system, with help from some of the body's normal flora. However, if the immune system or "good" microbiota are damaged in any way (such as by chemotherapy, human immunodeficiency virus (HIV), or antibiotics being taken to kill other pathogens), pathogenic bacteria that were being held at bay can proliferate and cause harm to the host. Such cases are called opportunistic infections.
Some pathogens (such as the bacterium "Yersinia pestis", which is thought to have caused the Black Death, the "Variola" virus, and the malaria protozoa) have been responsible for massive numbers of casualties and have had numerous effects on afflicted groups. Of particular note in modern times is HIV, which has infected tens of millions of humans globally, along with the influenza virus. Today, while many medical advances have been made to safeguard against infection by pathogens, through the use of vaccination, antibiotics, and fungicide, pathogens continue to threaten human life. Social advances such as food safety, hygiene, and water treatment have reduced the threat from some pathogens.
Pathogenic viruses are mainly those of the families of: "Adenoviridae, Picornaviridae, Herpesviridae, Hepadnaviridae, Coronaviridae, Flaviviridae, Retroviridae, Orthomyxoviridae, Paramyxoviridae, Papovaviridae, Polyomavirus, Poxviridae, Rhabdoviridae", and "Togaviridae". Some notable pathogenic viruses cause smallpox, influenza, mumps, measles, chickenpox, ebola, and rubella. Viruses typically range between 20 and 300 nanometers in length.
Although the vast majority of bacteria are harmless or beneficial to one's body, a few pathogenic bacteria can cause infectious diseases. The most common bacterial disease is tuberculosis, caused by the bacterium "Mycobacterium tuberculosis", which kills about 2 million people a year, mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as "Streptococcus" and "Pseudomonas", and foodborne illnesses, which can be caused by bacteria such as "Shigella", "Campylobacter", and "Salmonella". Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and Hansen's disease. They typically range between 1 and 5 micrometers in length.
Fungi comprise a eukaryotic kingdom of microbes that are usually saprophytes, but can cause diseases in humans. Life-threatening fungal infections in humans most often occur in immunocompromised patients or vulnerable people with a weakened immune system, although fungi are common problems in the immunocompetent population as the causative agents of skin, nail, or yeast infections. Most antibiotics that function on bacterial pathogens cannot be used to treat fungal infections because fungi and their hosts both have eukaryotic cells. Most clinical fungicides belong to the azole group. The typical fungal spore size is 1-40 micrometers in length.
Some eukaryotic organisms, such as protists and helminths, cause disease. One of the best known diseases caused by protists in the genus "Plasmodium" is malaria. These can range from 3-200 micrometers in length.
Prions are infectious pathogens that do not contain nucleic acids. Prions are abnormal proteins whose presence causes some diseases such as scrapie, bovine spongiform encephalopathy (mad cow disease), and Creutzfeldt–Jakob disease. The discovery of prions as a new class of pathogen earned Stanley B. Prusiner the Nobel Prize in Physiology or Medicine in 1997.
Animal pathogens are disease-causing agents of wild and domestic animal species, at times including humans.
Virulence (the tendency of a pathogen to cause damage to a host's fitness) evolves when that pathogen can spread from a diseased host, despite that host being very debilitated. An example is the malaria parasite, which can spread from a person near death, by hitching a ride to a healthy person on a mosquito that has bitten the diseased person. This is called horizontal transmission in contrast to vertical transmission, which tends to evolve symbiosis (after a period of high morbidity and mortality in the population) by linking the pathogen's evolutionary success to the evolutionary success of the host organism.
Evolutionary medicine has found that under horizontal transmission, the host population might never develop tolerance to the pathogen.
Transmission of pathogens occurs through many different routes, including airborne, direct or indirect contact, sexual contact, through blood, breast milk, or other body fluids, and through the fecal-oral route. One of the primary pathways by which food or water become contaminated is from the release of untreated sewage into a drinking water supply or onto cropland, with the result that people who eat or drink contaminated sources become infected. In developing countries, most sewage is discharged into the environment or on cropland; even in developed countries, periodic system failures result in sanitary sewer overflows.
People Against Gangsterism and Drugs
People Against Gangsterism and Drugs (PAGAD) is a group formed in 1996 in the Cape Flats area of Cape Town, South Africa. The organisation came to prominence for acts of vigilante violence against gangsters, including arson and murder.
PAGAD was originally initiated by a handful of PAC and community members from Cape Town townships who decided to organise public demonstrations to pressure the government to fight the illegal drug trade and gangsterism more effectively. However, PAGAD increasingly took matters into its own hands, believing the police were not taking enough action against gangs. Initially the community and police were hesitant to act against PAGAD activities, recognising the need for community action against crime in the gang-ridden communities of the Cape Flats.
Notorious gangsters were initially asked by PAGAD members to stop their criminal activities or be subject to "popular justice". A common PAGAD modus operandi was to set fire to drug dealers' houses and kill gangsters. PAGAD's campaign came to prominence in 1996 when the leader of the Hard Livings gang, Rashaad Staggie, was beaten and burnt to death by a mob during a march to his home in Salt River. South Africa's police quickly came to regard PAGAD as part of the problem rather than a partner in the fight against crime, and they were eventually designated a terrorist organization by the South African government.
Changes within the organisation following the incidents of 1996 increased the influence of highly politicised and organisationally experienced people associated with radical Islamic groups such as Qibla. New leadership emerged and tighter organisational structures developed, transforming PAGAD from a relatively non-religious popular mass movement into a smaller, better organised, but religiously radical and isolated group.
The threat of growing vigilantism in 2000 led the Western Cape provincial government to declare a "war on gangs" that became a key priority of the ANC provincial government at the time.
Although PAGAD's leadership denied involvement, PAGAD's G-Force, operating in small cells, was believed responsible for killing a large number of gang leaders, and also for a bout of urban terrorism—particularly bombings—in Cape Town. The bombings started in 1998, and included nine bombings in 2000. In addition to targeting gang leaders, bombing targets included South African authorities, Muslims, synagogues, gay nightclubs, tourist attractions, and Western-associated restaurants. The most prominent attack during this time was the bombing on 25 August 1998 of the Cape Town Planet Hollywood.
In September 2000, magistrate Pieter Theron, who was presiding in a case involving PAGAD members, was murdered in a drive-by shooting.
PAGAD's leaders have become known for making anti-semitic statements. A 1997 incendiary bomb attack on a Jewish bookshop owner was found by police to have been committed with the same material PAGAD has used in other attacks. In 1998, Ebrahim Moosa, a University of Cape Town academic who had been critical of PAGAD, decided to take a post in the United States after his home was bombed.
Violent acts such as bombings and vigilantism in Cape Town subsided in 2002, and the police have not attributed any such acts to PAGAD since the November 2002 bombing of the Bishop Lavis offices of the Serious Crimes Unit in the Western Cape. In 2002, PAGAD leader Abdus Salaam Ebrahim was convicted of public violence and imprisoned for seven years. Although a number of other PAGAD members were arrested and convicted of related crimes, none were convicted of the Cape Town bombings.
Today, PAGAD maintains a small and less visible presence in the Cape Town Cape Muslim community.
In the run-up to the 2014 South African general elections the organisation grew, hosting motorcades and marches in Mitchell's Plain in February–March 2014. One of PAGAD's largest marches that year was joined by the EFF, a far-left political party, which expressed its support for the organisation.
PDP-8
The PDP-8 is a 12-bit minicomputer that was produced by Digital Equipment Corporation (DEC). It was the first commercially successful minicomputer, with over 50,000 units being sold over the model's lifetime. Its basic design follows the pioneering LINC but has a smaller instruction set, which is an expanded version of the PDP-5 instruction set. Similar machines from DEC are the PDP-12 which is a modernized version of the PDP-8 and LINC concepts, and the PDP-14 industrial controller system.
The earliest PDP-8 model, informally known as a "Straight-8", was introduced on 22 March 1965 priced at $18,500. It used diode–transistor logic packaged on flip chip cards in a machine about the size of a small household refrigerator. It was the first computer to be sold for under $20,000, making it the best-selling computer in history at that time. The Straight-8 was supplanted in 1966 by the PDP-8/S, which was available in desktop and rack-mount models. Using a one-bit serial arithmetic logic unit (ALU) allowed the PDP-8/S to be smaller and less expensive, although slower than the original PDP-8. A basic 8/S sold for under $10,000, the first machine to reach that milestone.
Later systems (the PDP-8/I and /L, the PDP-8/E, /F, and /M, and the PDP-8/A) returned to a faster, fully parallel implementation but use much less costly transistor–transistor logic (TTL) MSI logic. Most surviving PDP-8s are from this era. The PDP-8/E is common, and well-regarded because many types of I/O devices were available for it. The last commercial PDP-8 models introduced in 1979 are called "CMOS-8s", based on CMOS microprocessors. They were not priced competitively, and the offering failed. Intersil sold the integrated circuits commercially through 1982 as the Intersil 6100 family. By virtue of their CMOS technology they had low power requirements and were used in some embedded military systems.
The chief engineer who designed the initial version of the PDP-8 was Edson de Castro, who later founded Data General.
The PDP-8 combines low cost, simplicity, expandability, and careful engineering for value. The greatest historical significance was that the PDP-8's low cost and high volume made a computer available to many new customers for many new uses. Its continuing significance is as a historical example of value-engineered computer design.
The low complexity brought other costs. It made programming cumbersome, as is seen in the examples in this article and in the discussion of "pages" and "fields". Much of one's code performed the required mechanics, as opposed to setting out the algorithm. For example, subtracting a number involves computing its two's complement and then adding it; writing a conditional jump involves writing a conditional skip around the jump, with the skip testing the opposite of the desired condition. Some ambitious programming projects failed to fit in memory or developed design defects that could not be solved. For example, as noted below, inadvertent recursion of a subroutine produces defects that are difficult to trace to the subroutine in question.
As design advances reduced the costs of logic and memory, the programmer's time became relatively more important. Subsequent computer designs emphasized ease of programming, typically using larger and more intuitive instruction sets.
Eventually, most machine code was generated by compilers and report generators. The reduced instruction set computer returned full-circle to the PDP-8's emphasis on a simple instruction set and achieving multiple actions in a single instruction cycle, in order to maximize execution speed, although the newer computers have much longer instruction words.
The PDP-8 used ideas from several 12-bit predecessors, most notably the LINC designed by W.A. Clark and C.E. Molnar, who were inspired by Seymour Cray's CDC 160 minicomputer.
The PDP-8 uses 12 bits for its word size and arithmetic (on unsigned integers from 0 to 4095 or signed integers from -2048 to +2047). However, software can do multiple-precision arithmetic. An interpreter was available for floating point operations, for example, that used a 36-bit floating point representation with a two-word (24-bit) significand and one-word exponent. Subject to speed and memory limitations, the PDP-8 can perform calculations similar to more expensive contemporary electronic computers, such as the IBM 1130 and various models of the IBM System/360, while being easier to interface with external devices.
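The 12-bit wrap-around and two's-complement behaviour can be sketched in a few lines of Python (an illustrative model; the helper names `to_signed` and `tad` are the author's here, not DEC terminology beyond the TAD mnemonic):

```python
MASK = 0o7777  # 12 bits

def to_signed(word):
    """Interpret a 12-bit word as a signed two's-complement integer."""
    return word - 0o10000 if word & 0o4000 else word

def tad(ac, operand):
    """Two's-complement add (modelled on the PDP-8's TAD); returns (link_carry, new_ac)."""
    total = ac + operand
    return (total >> 12) & 1, total & MASK

assert to_signed(0o7777) == -1        # all-ones is -1 in two's complement
assert to_signed(0o3777) == 2047      # largest positive value
assert tad(0o7777, 1) == (1, 0)       # 4095 + 1 wraps to 0 and carries into the link
```

Note how 0o7777 behaves as −1: adding 1 wraps to 0 and sets the link, which is exactly the carry behaviour that multiple-precision arithmetic routines rely on.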
The memory address space is also 12 bits, so the PDP-8's basic configuration has a main memory of 4,096 (212) twelve-bit words. An optional memory-expansion unit can switch banks of memories using an IOT instruction. The memory is magnetic-core memory with a cycle time of 1.5 microseconds (0.667 MHz), so that a typical two-cycle (Fetch, Execute) memory-reference instruction runs at a speed of 0.333 MIPS. The 1974 Pocket Reference Card for the PDP-8/E gives a basic instruction time of 1.2 microseconds, or 2.6 microseconds for instructions that reference memory.
The PDP-8 was designed in part to handle contemporary telecommunications and text. Six-bit character codes were in widespread use at the time, and the PDP-8's twelve-bit words can efficiently store two such characters. In addition, a six-bit teleprinter code called the teletypesetting or TTS code was in widespread use by the news wire services, and an early application for the PDP-8 was typesetting using this code. Later, the 7-bit ASCII character code and its 8-bit extensions were developed in part in response to the limitations of five- and six-bit character codes.
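Two six-bit codes fit a 12-bit word exactly, which a short sketch makes concrete (illustrative bit-packing only, not DEC's actual SIXBIT or TTS code tables):

```python
def pack(c1, c2):
    """Pack two 6-bit character codes (0..63) into one 12-bit word."""
    assert 0 <= c1 < 64 and 0 <= c2 < 64
    return (c1 << 6) | c2

def unpack(word):
    """Recover the two 6-bit codes from a 12-bit word."""
    return (word >> 6) & 0o77, word & 0o77

word = pack(0o01, 0o02)
assert word == 0o0102            # each octal digit pair is one character code
assert unpack(word) == (0o01, 0o02)
```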
PDP-8 instructions have a 3-bit opcode, so there are only eight instructions. The assembler provides more instruction mnemonics to a programmer by translating I/O and operate-mode instructions to combinations of the op-codes and instruction fields. It also has only three programmer-visible registers: A 12-bit accumulator (AC), a program counter (PC), and a carry flag called the "link register" (L).
For input and output, the PDP-8 has a single interrupt shared by all devices, an I/O bus accessed by I/O instructions and a direct memory access (DMA) channel. The programmed I/O bus typically runs low to medium-speed peripherals, such as printers, teletypes, paper tape punches and readers, while DMA is used for cathode ray tube screens with a light pen, analog-to-digital converters, digital-to-analog converters, tape drives, and disk drives.
To save money, the design used inexpensive main memory for many purposes that are served by more expensive flip-flop registers in other computers, such as auxiliary counters and subroutine linkage.
Basic models use software to do multiplication and division. For faster math, the Extended Arithmetic Element (EAE) provides multiply and divide instructions with an additional register, the Multiplier/Quotient (MQ) register. The EAE was an option on the original PDP-8, the 8/I, and the 8/E, but it is an integral part of the Intersil 6100 microprocessor.
The PDP-8 is optimized for simplicity of design. Compared to more complex machines, unnecessary features were removed and logic is shared when possible. Instructions use autoincrement, autoclear, and indirect access to increase the software's speed, reduce memory use, and substitute inexpensive memory for expensive registers.
The electronics of a basic PDP-8 CPU has only four 12-bit registers: the accumulator, program counter, memory-buffer register, and memory-address register. To save money, these served multiple purposes at different points in the operating cycle. For example, the memory buffer register provides arithmetic operands, is part of the instruction register, and stores data to rewrite the core memory. (This restores the core data destroyed by the read.)
Because of their simplicity, early PDP-8 models were less expensive than most other commercially available computers. However, they used costly production methods often used for prototypes. They used thousands of very small, standardized logic-modules, with gold connectors, integrated by a costly, complex wire-wrapped backplane in a large cabinet.
In the later 8/S model, two different logic voltages increased the fan-out of the inexpensive diode–transistor logic. The 8/S also reduced the number of logic gates by using a serial, single-bit-wide data path to do arithmetic. The CPU of the PDP-8/S has only about 519 logic gates. In comparison, small microcontrollers (as of 2008) usually have 15,000 or more. The reductions in the electronics permitted a much smaller case, about the size of a bread-box.
The even later PDP-8/E is a larger, more capable computer, but further reengineered for better value. It employs faster transistor–transistor logic, in integrated circuits. The core memory was redesigned. It allows expansion with less expense because it uses the OMNIBUS in place of the wire-wrapped backplane on earlier models. (A personal account of the development of the PDP-8/E can be read on the Engineering and Technology History Wiki.)
The total sales figure for the PDP-8 family has been estimated at over 300,000 machines. The following models were manufactured:
The PDP-8 is readily emulated, as its instruction set is much simpler than modern architectures. Enthusiasts have created entire PDP-8s using single FPGA devices.
Several software simulations of a PDP-8 are available on the Internet, as well as open-source hardware re-implementations. The best of these correctly execute DEC's operating systems and diagnostic software. The software simulations often simulate late-model PDP-8s with all possible peripherals. Even these use only a tiny fraction of the capacity of a modern personal computer.
The I/O systems underwent huge changes during the PDP-8 era. Early PDP-8 models use a front panel interface, a paper-tape reader and a teletype printer with an optional paper-tape punch. Over time, I/O systems such as magnetic tape, RS-232 and current loop dumb terminals, punched card readers, and fixed-head disks were added. Toward the end of the PDP-8 era, floppy disks and moving-head cartridge disk drives were popular I/O devices. Modern enthusiasts have created standard PC style IDE hard disk adapters for real and simulated PDP-8 computers.
Several types of I/O are supported:
A simplified, inexpensive form of DMA called "three-cycle data break" is supported; this requires the assistance of the processor. The "data break" method moves some of the common logic needed to implement DMA I/O from each I/O device into one common copy of the logic within the processor. "Data break" places the processor in charge of maintaining the DMA address and word count registers. In three successive memory cycles, the processor updates the word count, updates the transfer address, and stores or retrieves the actual I/O data word.
One-cycle data break effectively triples the DMA transfer rate because only the target data needs to be transferred to and from the core memory. However, the I/O devices need more electronic logic to manage their own word count and transfer address registers. By the time the PDP-8/E was introduced, electronic logic had become less expensive and "one-cycle data break" became more popular.
Early PDP-8 systems did not have an operating system, just a front panel with run and halt switches. Software development systems for the PDP-8 series began with the most basic front-panel entry of raw binary machine code (booting entry).
In the middle era, various paper tape "operating systems" were developed. Many utility programs became available on paper tape. PAL-8 assembly language source code was often stored on paper tape, read into memory, and saved to paper tape. PAL assembled from paper tape into memory. Paper tape versions of a number of programming languages were available, including DEC's FOCAL interpreter and a 4K FORTRAN compiler and runtime.
Toward the end of the PDP-8 era, operating systems such as OS/8 and COS-310 allowed a traditional line mode editor and command-line compiler development system using languages such as PAL-III assembly language, FORTRAN, BASIC, and DIBOL.
Fairly modern and advanced real-time operating system (RTOS) and preemptive multitasking multi-user systems were available: a real-time system (RTS-8) was available as were multiuser commercial systems (COS-300 and COS-310) and a dedicated single-user word-processing system (WPS-8).
A time-sharing system, TSS-8, was also available. TSS-8 allows multiple users to log into the system via 110-baud terminals, and edit, compile and debug programs. Languages include a special version of BASIC, a FORTRAN subset similar to FORTRAN-1 (no user-written subroutines or functions), an ALGOL subset, FOCAL, and an assembler called PAL-D.
A fair amount of user-donated software for the PDP-8 was available from DECUS, the Digital Equipment Corporation User Society, and often came with full source listings and documentation.
The three high-order bits of the 12-bit instruction word (labelled bits 0 through 2) are the operation code. For the six operations that refer to memory, bits 5 through 11 provide a 7-bit address. Bit 4, if set, says to complete the address using the 5 high-order bits of the program counter (PC) register, meaning that the addressed location was within the same 128 words as the instruction. If bit 4 is clear, zeroes are used, so the addressed location is within the first 128 words of memory. Bit 3 specifies indirection; if set, the address obtained as described so far points to a 12-bit value in memory that gives the actual effective address for the instruction; this allows operands to be anywhere in memory at the expense of an additional word. The JMP instruction does not operate on a memory word, except if indirection is specified, but has the same bit fields.
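The addressing scheme just described can be condensed into a small, simplified decoder (an assumed helper, following the bit-field description above; the auto-increment behaviour of locations 0010–0017 is deliberately omitted):

```python
def effective_address(instruction, pc, memory):
    """Compute the effective address of a PDP-8 memory-reference instruction."""
    offset = instruction & 0o177        # bits 5-11: offset within a 128-word page
    current_page = instruction & 0o200  # bit 4: current page rather than page zero
    indirect = instruction & 0o400      # bit 3: indirection flag
    page = (pc & 0o7600) if current_page else 0  # 5 high-order bits of the PC
    addr = page | offset
    if indirect:
        addr = memory[addr]             # the pointer fetch costs an extra word and cycle
    return addr

memory = [0] * 4096
memory[0o0020] = 0o4321                 # a word pointer stored in page zero
# TAD 0o0020, a direct page-zero reference:
assert effective_address(0o1020, pc=0o2000, memory=memory) == 0o0020
# The same reference with bit 3 set follows the pointer:
assert effective_address(0o1420, pc=0o2000, memory=memory) == 0o4321
# With bit 4 set, the offset addresses the instruction's own page:
assert effective_address(0o1220, pc=0o2000, memory=memory) == 0o2020
```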
This use of the instruction word divides the 4,096-word memory into 128-word pages; bit 4 of the instruction selects either the current page or page 0 (addresses 0000–0177 in octal). Memory in page 0 is at a premium, since variables placed here can be addressed directly from any page. (Moreover, address 0000 is where any interrupt service routine must start, and addresses 0010–0017 have the special property of auto-incrementing preceding any indirect reference through them.)
The standard assembler places constant values for arithmetic in the current page. Likewise, cross-page jumps and subroutine calls use an indirect address in the current page.
It was important to write routines to fit within 128-word pages, or to arrange routines to minimize page transitions, as references and jumps outside the current page require an extra word. Consequently, much time was spent cleverly conserving one or several words. Programmers deliberately placed code at the end of a page to achieve a free transition to the next page as PC was incremented.
The PDP-8 processor defined few of the IOT instructions, but simply provided a framework. Most IOT instructions were defined by the individual I/O devices.
Bits 3 through 8 of an IOT instruction select an I/O device. Some of these device addresses are standardized by convention:
Instructions for device 0 affect the processor as a whole. For example, ION (6001) enables interrupt processing, and IOFF (6002) disables it.
Bits 9 through 11 of an IOT instruction select the function(s) the device performs. Simple devices (such as the paper tape reader and punch and the console keyboard and printer) use the bits in standard ways:
These operations take place in a well-defined order that gives useful results if more than one bit is set.
More complicated devices, such as disk drives, use these 3 bits in device-specific fashions. Typically, a device decodes the 3 bits to give 8 possible function codes.
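The IOT field layout described above (opcode 6, device in bits 3–8, function in bits 9–11) amounts to a simple bit-field decode, sketched here (illustrative helper; KSF is shown under the conventional keyboard device address 03):

```python
def decode_iot(instr):
    """Split a 12-bit IOT instruction into its (device, function) fields."""
    assert (instr >> 9) & 0o7 == 6      # opcode 6 marks an IOT instruction
    device = (instr >> 3) & 0o77        # bits 3-8: device select
    function = instr & 0o7              # bits 9-11: device-specific function
    return device, function

assert decode_iot(0o6001) == (0o00, 1)  # ION: device 0, function 1
assert decode_iot(0o6031) == (0o03, 1)  # KSF: keyboard skip-on-flag
```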
Many operations are achieved using OPR, including most of the conditionals. OPR does not address a memory location; conditional execution is achieved by conditionally skipping one instruction, which is typically a JMP.
The OPR instruction was said to be "microcoded." This did not mean what the word means today (that a lower-level program fetched and interpreted the OPR instruction), but meant that each bit of the instruction word specifies a certain action, and the programmer could achieve several actions in a single instruction cycle by setting multiple bits. In use, a programmer can write several instruction mnemonics alongside one another, and the assembler combines them with OR to devise the actual instruction word. Many I/O devices support "microcoded" IOT instructions.
Microcoded actions take place in a well-defined sequence designed to maximize the utility of many combinations.
The OPR instructions come in Groups. Bits 3, 8 and 11 identify the Group of an OPR instruction, so it is impossible to combine the microcoded actions from different groups.
In most cases, the operations are sequenced so that they can be combined in the most useful ways. For example, combining CLA (CLear Accumulator), CLL (CLear Link), and IAC (Increment ACcumulator) first clears the AC and Link, then increments the accumulator, leaving it set to 1. Adding RAL to the mix (so CLA CLL IAC RAL) causes the accumulator to be cleared, incremented, then rotated left, leaving it set to 2. In this way, small integer constants were placed in the accumulator with a single instruction.
The combination CMA IAC, which the assembler lets the programmer abbreviate as CIA, produces the arithmetic inverse of AC: the twos-complement negation. Since there is no subtraction instruction, only the twos-complement add (TAD), computing the difference of two operands requires first negating the subtrahend.
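These constant-building and negation idioms can be simulated with a short sketch (simplified, assumed semantics covering only a subset of Group 1; the bit values follow the Group 1 encoding, but the function itself is illustrative):

```python
# A subset of the Group 1 OPR bits (CML, RAR, BSW and the rotate-twice
# variants are omitted for brevity)
CLA, CLL, CMA, RAL, IAC = 0o200, 0o100, 0o040, 0o004, 0o001

def group1(bits, ac=0, link=0):
    """Apply Group 1 microcoded actions in their defined order."""
    if bits & CLA: ac = 0                   # step 1: clear accumulator
    if bits & CLL: link = 0                 # step 1: clear link
    if bits & CMA: ac ^= 0o7777             # step 2: complement accumulator
    if bits & IAC: ac = (ac + 1) & 0o7777   # step 3: increment accumulator
    if bits & RAL:                          # step 4: rotate AC and link left one bit
        ac, link = ((ac << 1) | link) & 0o7777, (ac >> 11) & 1
    return ac, link

assert group1(CLA | CLL | IAC) == (1, 0)        # load the constant 1
assert group1(CLA | CLL | IAC | RAL) == (2, 0)  # load the constant 2
assert group1(CMA | IAC, ac=5) == (0o7773, 0)   # CIA: negate AC, giving -5
```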
A Group 1 OPR instruction that has none of the microprogrammed bits set performs no action. The programmer can write NOP (No Operation) to assemble such an instruction.
When bit 8 is clear, a skip is performed if any of the specified conditions are true. For example, "SMA SZA", opcode 7540, skips if AC ≤ 0.
A Group 2 OPR instruction that has none of the microprogrammed bits set is another No-Op instruction.
When bit 8 is set, the sense of the Group 2 OR skip condition is inverted, per De Morgan's laws: the skip is "not" performed if any of the Group 2 OR conditions are true, meaning that "all" of the specified (inverted-sense) skip conditions must hold. For example, "SPA SNA", opcode 7550, skips if AC > 0. If none of bits 5–7 are set, then the skip is unconditional.
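The OR-group/AND-group duality can be made explicit in a small sketch (an assumed simplification covering only SMA, SZA, SNL and the invert bit, whose values are consistent with the opcodes 7540 and 7550 quoted above):

```python
SMA, SZA, SNL, INVERT = 0o100, 0o040, 0o020, 0o010

def skips(bits, ac, link):
    """Return True if a Group 2 OPR with these bits would skip."""
    cond = bool((bits & SMA and ac & 0o4000) or   # AC is negative
                (bits & SZA and ac == 0) or       # AC is zero
                (bits & SNL and link == 1))       # link is set
    return not cond if bits & INVERT else cond    # bit 8 inverts the OR

# SMA SZA (7540): skip if AC <= 0
assert skips(SMA | SZA, ac=0o7777, link=0) is True   # AC = -1
assert skips(SMA | SZA, ac=0o0001, link=0) is False
# SPA SNA (7550): the same bits plus the invert bit; skip if AC > 0
assert skips(SMA | SZA | INVERT, ac=0o0001, link=0) is True
assert skips(SMA | SZA | INVERT, ac=0o0000, link=0) is False
```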
Unused bit combinations of OPR are defined as a third Group of microprogrammed actions mostly affecting the MQ (Multiplier/Quotient) register.
Typically CLA and MQA were combined to transfer MQ into AC. Another useful combination is MQA and MQL, to exchange the two registers.
Three bits specified a multiply/divide instruction to perform:
A 12-bit word can have 4,096 different values, and this is the maximum number of words the original PDP-8 can address indirectly through a word pointer. As programs became more complex and the price of memory fell, it became desirable to expand this limit.
To maintain compatibility with pre-existing programs, new hardware outside the original design added high-order bits to the effective addresses generated by the program. The Memory Extension Controller expands the addressable memory by a factor of 8, to a total of 32,768 words. This expansion was thought sufficient because, with core memory then costing about 50 cents a word, a full 32K of memory would equal the cost of the CPU.
Each 4K of memory is called a field. The Memory Extension Controller contains two three-bit registers: the DF (Data Field) and the IF (Instruction Field). These registers specify a field for each memory reference of the CPU, allowing a total of 15 bits of address. The IF register specifies the field for instruction fetches and direct memory references; the DF register specifies the field for indirect data accesses. A program running in one field can reference data in the same field by direct addressing, and reference data in another field by indirect addressing.
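The field mechanism amounts to prefixing a 3-bit field number onto a 12-bit address. A minimal sketch (the function name and decomposition are illustrative, not part of the PDP-8 instruction set):

```python
# Sketch of how the IF/DF registers extend a 12-bit address to 15 bits:
# the 3-bit field register supplies the high-order bits.
def effective_address(field, addr12):
    assert 0 <= field <= 7 and 0 <= addr12 <= 0o7777
    return (field << 12) | addr12

# An instruction fetch in field 2 at location 0200:
print(oct(effective_address(2, 0o0200)))   # 0o20200
# The full 8-field space spans 8 * 4096 = 32,768 words:
print(effective_address(7, 0o7777) + 1)    # 32768
```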
A set of I/O instructions in the range 6200 through 6277 is handled by the Memory Extension Controller and gives access to the DF and IF registers. The 62X1 instruction (CDF, Change Data Field) sets the data field to X. Similarly, 62X2 (CIF) sets the instruction field, and 62X3 sets both. Pre-existing programs would never execute CIF or CDF; the DF and IF registers would both point to the same field, a single field to which these programs were limited. The effect of the CIF instruction is deferred to coincide with the next JMP or JMS instruction, so that executing CIF does not itself cause a jump.
It was more complicated for multiple-field programs to deal with field boundaries and the DF and IF registers than it would have been if they could simply generate 15-bit addresses, but the design provided backward compatibility and is consistent with the 12-bit architecture used throughout the PDP-8. Compare the later Intel 8086, whose 16-bit memory addresses are expanded to 20 bits by combining them with the contents of a specified or implied segment register.
The extended memory scheme let existing programs handle increased memory with minimal changes. For example, 4K FOCAL normally had about 3K of code with only 1K left over for user program and data. With a few patches, FOCAL could use a second 4K field for user program and data. Moreover, additional 4K fields could be allocated to separate users, turning 4K FOCAL into a multi-user timesharing system.
On the PDP-8/E and later models, the Memory Extension Controller was enhanced to enable machine virtualization. A program written to use a PDP-8's entire resources can coexist with other such programs on the same PDP-8 under the control of a virtual machine manager. The manager can make all I/O instructions (including those that operated on the Memory Extension Controller) cause a trap (an interrupt handled by the manager). In this way, the manager can map memory references, map data or instruction fields, and redirect I/O to different devices. Each original program has complete access to a "virtual machine" provided by the manager.
New I/O instructions to the Memory Extension Controller retrieve the current value of the data and instruction fields, letting software save and restore most of the machine state across a trap. However, a program cannot sense whether the CPU is in the process of deferring the effect of a CIF instruction (whether it has executed a CIF and not yet executed the matching jump instruction). The manager therefore has to include a complete PDP-8 emulator (not difficult for an 8-instruction machine). Whenever a CIF instruction traps to the manager, the manager must emulate the instructions up to the next jump. Fortunately, as a jump is usually the next instruction after CIF, this emulation does not slow programs down much, but it is a large workaround for a seemingly small design deficiency.
By the time of the PDP-8/A, memory prices had fallen enough that memory exceeding 32K was desirable. The 8/A added a new set of instructions for handling more than eight fields of memory. The field number could now be placed in the AC, rather than hard-coded into the instruction. However, by this time, the PDP-8 was in decline, so very little standard software was modified to use these new features.
The following examples show code in PDP-8 assembly language as one might write for the PAL-III assembler.
The following piece of code shows what is needed just to compare two numbers:
As shown, much of the text of a typical PDP-8 program focuses not on the author's intended algorithm but on low-level mechanics. An additional readability problem is that in conditional jumps such as the one shown above, the conditional instruction (which skips around the JMP) highlights the opposite of the condition of interest.
This complete PDP-8 assembly language program outputs "Hello, world!" to the teleprinter.
The PDP-8 processor does not implement a stack upon which to store registers or other context when a subroutine is called or an interrupt occurs. (A stack can be implemented in software, as demonstrated in the next section.) Instead, the JMS instruction simply stores the updated PC (pointing past JMS, to the return address) at the effective address and jumps to the effective address plus one. The subroutine returns to its caller using an indirect JMP instruction that addresses the subroutine's first word.
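The call/return mechanics can be sketched in a few lines. This is a toy model under stated assumptions: memory is a plain list of 12-bit words, the function names are invented, and the subroutine entry address is hypothetical.

```python
# Toy model of JMS call/return: the return address is deposited in the word
# at the subroutine's entry, and the subroutine returns with an indirect JMP
# through that word.
MASK12 = 0o7777

def jms(mem, pc, target):
    """Execute JMS target: store the return address at target, resume at target+1."""
    mem[target] = (pc + 1) & MASK12   # word after the JMS itself
    return (target + 1) & MASK12      # new PC

def jmp_indirect(mem, through):
    """Return sequence (JMP I through): new PC is the stored return address."""
    return mem[through]

mem = [0] * 4096
SUBR = 0o200                  # hypothetical subroutine entry word
pc = jms(mem, 0o100, SUBR)    # call from location 0100
print(oct(pc))                # 0o201: execution continues past the link word
print(oct(mem[SUBR]))         # 0o101: stored return address
print(oct(jmp_indirect(mem, SUBR)))  # 0o101: control returns to the caller
```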
For example, here is "Hello, World!" re-written to use a subroutine. When the JMS instruction jumps to the subroutine, it modifies the 0 coded at location OUT1:
The fact that the JMS instruction uses the word just before the code of the subroutine to deposit the return address prevents reentrancy and recursion without additional work by the programmer. It also makes it difficult to use ROM with the PDP-8 because read-write return-address storage is commingled with read-only code storage in the address space. Programs intended to be placed into ROMs approach this problem in several ways:
The use of the JMS instruction makes debugging difficult. If a programmer makes the mistake of having a subroutine call itself, directly or by an intermediate subroutine, then the return address for the outer call is destroyed by the return address of the subsequent call, leading to an infinite loop. If one module is coded with an incorrect or obsolete address for a subroutine, it would not just fail to execute the entire code sequence of the subroutine, it might modify a word of the subroutine's code, depositing a return address that the processor might interpret as an instruction during a subsequent correct call to the subroutine. Both types of error might become evident during the execution of code that was written correctly.
Though the PDP-8 does not have a hardware stack, stacks can be implemented in software.
Here are example PUSH and POP subroutines, simplified to omit issues such as testing for stack overflow and underflow:
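The same software-stack idea can be modeled in Python, with memory as a list of words and a page-zero word serving as the stack pointer. The addresses and names here are hypothetical, and, like the simplified subroutines the text describes, this sketch omits overflow and underflow checks.

```python
# A software stack in the style a PDP-8 program would use: a stack-pointer
# word in memory plus PUSH/POP routines.  The stack grows downward.
MASK12 = 0o7777
mem = [0] * 4096
SP = 0o0020          # a page-zero word holding the stack pointer (hypothetical)
mem[SP] = 0o0777     # stack grows downward from 0777

def push(value):
    mem[mem[SP]] = value & MASK12
    mem[SP] = (mem[SP] - 1) & MASK12

def pop():
    mem[SP] = (mem[SP] + 1) & MASK12
    return mem[mem[SP]]

push(0o0101)
push(0o0202)
print(oct(pop()))   # 0o202 (last in, first out)
print(oct(pop()))   # 0o101
```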
And here is "Hello World" with this "stack" implemented, and "OUT" subroutine:
A linked list is another data structure that can be implemented with PDP-8 subroutines.
There is a single interrupt line on the PDP-8 I/O bus. The processor handles any interrupt by disabling further interrupts and executing a JMS to location 0000. As it is difficult to write reentrant subroutines, it is difficult to nest interrupts, and this is usually not done; each interrupt runs to completion and re-enables interrupts just before executing the indirect JMP instruction that returns from the interrupt.
Because there is only a single interrupt line on the I/O bus, the occurrence of an interrupt does not inform the processor of the source of the interrupt. Instead, the interrupt service routine has to serially poll each active I/O device to see if it is the source. The code that does this is called a "skip chain" because it consists of a series of PDP-8 "test and skip if flag set" I/O instructions. (It was not unheard-of for a skip chain to reach its end without finding any device in need of service.) The relative interrupt priority of the I/O devices is determined by their position in the skip chain: If several devices interrupt, the device tested earlier in the skip chain is serviced first.
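The priority behavior of a skip chain falls out of its polling order, as this sketch shows. The device names and function are hypothetical; a real chain is a sequence of "test flag and skip" IOT instructions, not a loop over a table.

```python
# Model of a skip chain: the service routine polls device flags in a fixed
# order, so interrupt priority follows chain position.
def skip_chain(flags):
    """Return the first device (in chain order) whose flag is raised, or None."""
    for device in ('printer', 'keyboard', 'tape'):   # order = priority
        if flags.get(device):
            return device
    return None        # chain ran to the end with no device needing service

print(skip_chain({'keyboard': True, 'tape': True}))  # 'keyboard' is serviced first
print(skip_chain({}))                                # None
```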
An engineering textbook popular in the 1980s, "The Art of Digital Design" by David Winkel and Franklin Prosser, contains an example problem spanning several chapters in which the authors demonstrate the process of designing a computer that is compatible with the PDP-8/I. The function of every component is explained. Although it is not a production design, as it uses more modern SSI and MSI components, the exercise provides a detailed description of the computer's operation. | https://en.wikipedia.org/wiki?curid=24364 |
Porsche
Dr.-Ing. h.c. F. Porsche AG, usually shortened to Porsche AG (see below), is a German automobile manufacturer specializing in high-performance sports cars, SUVs and sedans. Porsche AG is headquartered in Stuttgart and is owned by Volkswagen AG, a controlling stake of which is held by Porsche Automobil Holding SE. Porsche's current lineup includes the 718 Boxster/Cayman, 911, Panamera, Macan, Cayenne and Taycan.
Ferdinand Porsche founded the company called "Dr. Ing. h. c. F. Porsche GmbH" in 1931, with main offices at Kronenstraße 24 in the centre of Stuttgart. Initially, the company offered motor vehicle development work and consulting, but did not build any cars under its own name. One of the first assignments the new company received was from the German government to design a car for the people, that is a "Volkswagen". This resulted in the Volkswagen Beetle, one of the most successful car designs of all time. The Porsche 64 was developed in 1939 using many components from the Beetle.
During World War II, Volkswagen production turned to the military versions of the Volkswagen Beetle, the Kübelwagen (52,000 produced) and the Schwimmwagen (15,584 produced). Porsche produced several designs for heavy tanks during the war, losing out to Henschel & Son in both contracts that ultimately led to the Tiger I and the Tiger II. However, not all this work was wasted, as the chassis Porsche designed for the Tiger I was used as the base for the Elefant tank destroyer. Porsche also developed the Maus super-heavy tank in the closing stages of the war, producing two prototypes. Ferdinand Porsche's biographer, Fabian Müller, wrote that Porsche had thousands of people forcibly brought to work at their factories during the war. The workers wore the letter "P" on their clothing at all times. It stood not for "Porsche," but for "Poland."
At the end of World War II in 1945, the Volkswagen factory at KdF-Stadt fell to the British. Ferdinand lost his position as Chairman of the Board of Management of Volkswagen, and Ivan Hirst, a British Army Major, was put in charge of the factory. (In Wolfsburg, the Volkswagen company magazine dubbed him "The British Major who saved Volkswagen".) On 15 December of that year, Ferdinand was arrested for war crimes, but not tried. During his 20-month imprisonment, Ferdinand Porsche's son, Ferry Porsche, decided to build his own car, because he could not find an existing one that he wanted to buy. He also had to steer the company through some of its most difficult days until his father's release in August 1947. The first models of what was to become the 356 were built in a small sawmill in Gmünd, Austria. The prototype car was shown to German auto dealers, and when pre-orders reached a set threshold, production (with aluminum body) was begun by Porsche Konstruktionen GesmbH founded by Ferry and Louise. Many regard the 356 as the first Porsche simply because it was the first model "sold" by the fledgling company. After the production of 356 was taken over by the father's Dr. Ing. h.c. F. Porsche GmbH in Stuttgart in 1950, Porsche commissioned a Zuffenhausen-based company, "Reutter Karosserie", which had previously collaborated with the firm on Volkswagen Beetle prototypes, to produce the 356's steel body. In 1952, Porsche constructed an assembly plant (Werk 2) across the street from "Reutter Karosserie"; the main road in front of Werk 1, the oldest Porsche building, is now known as Porschestrasse. The 356 was road certified in 1948.
Porsche's company logo stems from the coat of arms of the Free People's State of Württemberg of Weimar Germany of 1918-1933, which had Stuttgart as its capital. (The "Bundesland" of Württemberg-Hohenzollern used the same arms from 1945-1952, while Stuttgart during these years operated as the capital of adjacent Württemberg-Baden.) The arms of Stuttgart appear in the middle of the logo as an inescutcheon, since the company had its headquarters in Stuttgart. The heraldic symbols, combined with the texts "Porsche" and "Stuttgart", do not form a conventional coat of arms, since heraldic achievements never spell out the name of the armiger nor the armiger's home-town in the shield.
Württemberg-Baden and Württemberg-Hohenzollern both became part of the present land of Baden-Württemberg in 1952 after the political consolidation of West Germany in 1949, but the old design of the arms of Württemberg lives on in the Porsche logo. On 30 January 1951, not long before the formation of Baden-Württemberg, Ferdinand Porsche died from complications following a stroke.
In post-war Germany, parts were generally in short supply, so the 356 automobile used components from the Volkswagen Beetle, including the engine case from its internal combustion engine, transmission, and several parts used in the suspension. The 356, however, went through several evolutionary stages, A, B, and C, while in production, and most Volkswagen-sourced parts were replaced by Porsche-made parts. Beginning in 1954, the 356's engines used engine cases designed specifically for the 356. The sleek bodywork was designed by Erwin Komenda, who had also designed the body of the Beetle. Porsche's signature designs have, from the beginning, featured air-cooled rear-engine configurations (like the Beetle), rare for other car manufacturers, but producing automobiles that are very well balanced.
In 1964, after a fair amount of success in motor-racing with various models including the 550 Spyder, and with the 356 needing a major re-design, the company launched the Porsche 911: another air-cooled, rear-engined sports car, this time with a six-cylinder "boxer" engine. The team to lay out the body shell design was led by Ferry Porsche's eldest son, Ferdinand Alexander Porsche (F. A.). The design phase for the 911 caused internal problems with Erwin Komenda, who led the body design department until then. F. A. Porsche complained Komenda made unauthorized changes to the design. Company leader Ferry Porsche took his son's drawings to neighbouring chassis manufacturer Reuter. Reuter's workshop was later acquired by Porsche (so-called Werk 2). Afterward Reuter became a seat manufacturer, today known as Keiper-Recaro.
The design office gave sequential numbers to every project (See Porsche type numbers), but the designated 901 nomenclature contravened Peugeot's trademarks on all 'x0x' names, so it was adjusted to 911. Racing models adhered to the "correct" numbering sequence: 904, 906, 908. The 911 has become Porsche's most well-known model – successful on the race-track, in rallies, and in terms of road car sales. It remains in production; however, after several generations of revision, current-model 911s share only the basic mechanical configuration of a rear-engined, six-cylinder coupé, and basic styling cues with the original car. A cost-reduced model with the same body, but with a 356-derived four-cylinder engine, was sold as the 912.
In 1972, the company's legal form was changed from "Kommanditgesellschaft" (KG), or limited partnership, to Aktiengesellschaft (AG), or public limited company, because Ferry Porsche came to believe the scale of the company outgrew a "family operation", after learning about Soichiro Honda's "no family members in the company" policy at Honda. This led to the establishment of an Executive Board with members from outside the Porsche family, and a Supervisory Board consisting largely of family members. With this change, most family members in the operation of the company, including F. A. Porsche and Ferdinand Piëch, departed from the company.
F. A. Porsche founded his own design company, Porsche Design, which is renowned for exclusive sunglasses, watches, furniture, and many other luxury articles. Louise's son and Ferry's nephew Ferdinand Piëch, who was responsible for mechanical development of Porsche's production and racing cars (including the very successful 911, 908 and 917 models), formed his own engineering bureau, and developed a five-cylinder inline diesel engine for Mercedes-Benz. A short time later he moved to Audi (formerly a division, then a subsidiary, of Volkswagen), and pursued his career through the entire company, ultimately becoming the Chairman of Volkswagen Group.
The first Chief Executive Officer (CEO) of Porsche AG was Dr. Ernst Fuhrmann, who had been working in the company's engine development division. Fuhrmann was responsible for the so-called Fuhrmann engine, used in the 356 Carrera models as well as the 550 Spyder, having four overhead camshafts instead of a central camshaft with pushrods, as in the Volkswagen-derived serial engines. He planned to discontinue the 911 during the 1970s and replace it with the front-engined, V8-powered 928 grand tourer. As we know today, the 911 outlived the 928 by far. Fuhrmann was replaced in the early 1980s by Peter W. Schutz, an American manager and self-proclaimed 911 aficionado. He was then replaced in 1988 by the former manager of German computer company Nixdorf Computer AG, Arno Bohn, who made some costly miscalculations that led to his dismissal soon after, along with that of the development director, Dr. Ulrich Bez, who was formerly responsible for BMW's Z1 model, and was CEO of Aston Martin from 2000 to 2013.
In 1990, Porsche drew up a memorandum of understanding with Toyota to learn and benefit from Japanese lean manufacturing methods. In 2004 it was reported that Toyota was assisting Porsche with hybrid technology.
Following the dismissal of Bohn, Heinz Branitzki, a longtime Porsche employee, was appointed as interim CEO. Branitzki served in that position until Wendelin Wiedeking became CEO in 1993. Wiedeking took over the chairmanship of the board at a time when Porsche appeared vulnerable to a takeover by a larger company. During his long tenure, Wiedeking transformed Porsche into a very efficient and profitable company.
Ferdinand Porsche's nephew, Ferdinand Piëch, was chairman and CEO of the Volkswagen Group from 1993 to 2002, and has been chairman of the Volkswagen AG Supervisory Board since then. With 12.8 percent of the Porsche SE voting shares, he also remains the second-largest individual shareholder of Porsche SE after his cousin, F. A. Porsche, who held 13.6 percent.
Porsche's 2002 introduction of the Cayenne also marked the unveiling of a new production facility in Leipzig, Saxony, which once accounted for nearly half of Porsche's annual output. In 2004, production of the Carrera GT commenced in Leipzig, and at EUR 450,000 ($440,000 in the United States) it was the most expensive production model Porsche ever built.
In mid-2006, after years of the Boxster (and later the Cayenne) as the best selling Porsche in North America, the 911 regained its position as Porsche's best-seller in the region. The Cayenne and 911 have cycled as the top-selling model since. In Germany, the 911 outsells the Boxster/Cayman and Cayenne.
In May 2011, Porsche Cars North America announced plans to spend $80–$100 million, offset by about $15 million in economic incentives, to move their North American headquarters from Sandy Springs, a suburb of Atlanta, to Aerotropolis, Atlanta, a new mixed-use development on the site of the old Ford Hapeville plant adjacent to Atlanta's airport. Designed by architectural firm HOK, the headquarters will include a new office building and test track. The facility will be known by its new address, One Porsche Drive.
In October 2017, Porsche Cars North America announced the launch of Porsche Passport, a new sports car and SUV subscription program. This offering allows consumers to access Porsche vehicles by subscribing to the service, rather than owning or leasing a vehicle. The Porsche Passport service is initially available in Atlanta.
During the COVID-19 pandemic, in March 2020, Porsche suspended its manufacturing in Europe for two weeks, stating: "By taking this step, the sports car manufacturer is responding to the significant acceleration in the rate of infection caused by the coronavirus and the resultant measures implemented by the relevant authorities."
The company has always had a close relationship with, initially, the Volkswagen (VW) marque, and later, the Volkswagen Group (which also owns Audi AG), because the first Volkswagen Beetle was designed by Ferdinand Porsche.
The two companies collaborated in 1969 to make the VW-Porsche 914 and 914-6, whereby the 914-6 had a Porsche engine, and the 914 had a Volkswagen engine. Further collaboration in 1976 resulted in the Porsche 912E (US only) and the Porsche 924, which used many Audi components, and was built at Audi's Neckarsulm factory, which had been NSU's. Porsche 944s were also built there, although they used far fewer Volkswagen components. The Cayenne, introduced in 2002, shares its chassis with the Volkswagen Touareg and the Audi Q7, which is built at the Volkswagen Group factory in Bratislava, Slovakia.
Porsche SE was created in June 2007 by renaming the old Dr. Ing. h.c. F. Porsche AG, and became a holding company for the families' stake in Porsche Zwischenholding GmbH (50.1%) (which in turn held 100% of the old Porsche AG) and Volkswagen AG (50.7%). At the same time, the new Dr. Ing. h.c. F. Porsche AG (Porsche AG) was created for the car manufacturing business.
In August 2009, Porsche SE and Volkswagen AG reached an agreement that the car manufacturing operations of the two companies would merge in 2011, to form an "Integrated Automotive Group". The management of Volkswagen AG agreed to 50.76% of Volkswagen AG being owned by Porsche SE in return for Volkswagen AG management taking Porsche SE management positions (in order for Volkswagen management to remain in control), and for Volkswagen AG acquiring ownership of Porsche AG.
As of the end of 2015, the 52.2% control interest in VW AG is the predominant investment by Porsche SE, and Volkswagen AG in turn controls brands and companies such as Volkswagen, Audi, SEAT, Škoda, Bentley, Bugatti, Lamborghini, Porsche AG, Ducati, VW Commercial Vehicles, Scania, MAN, as well as Volkswagen Financial Services.
Dr. Ing. h.c. F. Porsche AG (which stands for "Doktor Ingenieur honoris causa Ferdinand Porsche Aktiengesellschaft"), as a 100% subsidiary of VW AG, is responsible for the actual production and manufacture of the Porsche automobile line. The company currently produces Porsche 911, Boxster and Cayman sports cars, the Cayenne and Macan sport utility vehicles and the four-door Panamera.
Porsche AG has a 29% share in German engineering and design consultancy Bertrandt AG and 81.8% of Mieschke Hofmann und Partner. In 2018, Porsche acquired a 10% minority stake in the Croatian electric sports car manufacturer Rimac Automobili to form a development partnership.
Wholly owned subsidiaries of Porsche AG include Porsche Consulting GmbH.
The headquarters and main factory are located in Zuffenhausen, a district in Stuttgart, but the Cayenne and Panamera models are manufactured in Leipzig, Germany, and parts for the SUV are also assembled in the Volkswagen Touareg factory in Bratislava, Slovakia. Boxster and Cayman production was outsourced to Valmet Automotive in Finland from 1997 to 2011, and in 2012 production moved to Germany. Since 2011, the area of the Zuffenhausen plant has more than doubled, from 284,000 to 614,000 square metres, as a result of purchasing the former Layher, Deltona and Daimler sites, among others.
In 2015, Porsche reported selling a total of 218,983 cars, 28,953 (13.22%) as domestic German sales, and 190,030 (86.78%) internationally.
The company has been highly successful in recent times, and indeed claims to have the highest profit per unit sold of any car company in the world. Table of profits (in millions of euros) and number of cars produced. Figures from 2008/9 onwards were not reported as part of Porsche SE.
On May 11, 2017, Porsche built the one-millionth 911. An Irish green Carrera S was built for the celebration, and it will be taken on a global tour before becoming a permanent exhibit at the Porsche Museum in Stuttgart.
Of the 246,375 cars produced in the 2017 financial year, 32,197 were 911 models, 25,114 were Boxster and Cayman cars, 63,913 were Cayennes, 27,942 were Panameras and 97,202 were Macans.
Of the 268,691 cars produced in 2018, 36,236 were 911 models, 23,658 were 718 Boxster and Cayman cars, 79,111 were Cayennes, 35,493 were Panameras, 93,953 were Macans and 240 Taycan pre-series vehicles.
Porsche set a record for a U.S. sales month in November 2016, with over 5,500 sales, well on pace for its best year ever.
The current Porsche model range includes sports cars from the Boxster roadster to their most famous product, the 911. The Cayman is a coupé otherwise similar to the Boxster. The Cayenne is Porsche's mid-size luxury sport utility vehicle (SUV). A high performance luxury saloon/sedan, the Panamera, was launched in 2009.
In 2010 Porsche launched the Cayenne S Hybrid and announced the Panamera S Hybrid, and launched the Porsche 918 sports car in 2014, which also features a hybrid system. Also a plug-in hybrid model called the Panamera S E-Hybrid was released in October 2013 in the United States and during the fourth quarter of 2013 in several European countries.
Porsche developed a prototype electric Porsche Boxster called the Boxster E in 2011 and a hybrid version of the 911 called the GT3 R Hybrid, developed with Williams Grand Prix Engineering in 2010.
In July 2014, Porsche announced the launch by the end of 2014 of the Porsche Cayenne S E-Hybrid, a plug-in hybrid, which would displace the Cayenne S Hybrid from the lineup. The S E-Hybrid would be the first plug-in hybrid in the premium SUV segment and would allow Porsche to become the first automaker with three production plug-in hybrid models.
In July 2017, Porsche installed its first 350 kW, 800V charging station, which the upcoming Porsche Mission E will use. As of 2017, the Porsche charging station is the fastest electric vehicle charging station in the world, being able to charge a Porsche Mission E up to 80% within 15 minutes. Porsche is also currently working with other manufacturers to make Porsche charging stations compatible with other electric vehicles.
In August 2018, Porsche announced that the formerly named Mission E electric car would be named "Taycan", meaning 'leaping horse'. The electric car was expected to be revealed in 2019 upon its completion.
See Porsche PFM 3200.
Porsche has a record 19 outright wins at the 24 Hours of Le Mans. Porsche is currently the world's largest race car manufacturer. In 2006, Porsche built 195 race cars for various international motor sports events. In 2007, Porsche was expected to construct no fewer than 275 dedicated race cars (7 RS Spyder LMP2 prototypes, 37 GT2-spec 911 GT3-RSRs, and 231 911 GT3 Cup vehicles).
In keeping with the family name of founder Ferdinand Porsche, the company's name is pronounced with two syllables in German, homophonous with the feminine name "Portia". However, in English it is often pronounced as a single syllable, without the final vowel. In German orthography, a word-final "e" is not silent but is pronounced as an unstressed schwa.
In a survey conducted by the Luxury Institute in New York, Porsche was awarded the title of "the most prestigious automobile brand". Five hundred households with a gross annual income of at least $200,000 and a net worth of at least $720,000 participated.
Porsche won the J.D. Power and Associates Initial Quality Study (IQS) in 2006, 2009, 2010, and 2014.
Porsche's 2003 SUV, the Cayenne, received generally favorable commentary.
In 2015, US News ranked the Macan as the best luxury compact SUV in its class.
A Canadian study in 2011 revealed that 97.4 percent of Porsches from the last 25 years are still on the road.
In 2014, the Cayman and Boxster made the "Consumer Reports" list for most reliable vehicles on the road.
Porsche's 911 has been officially named by the Technischer Überwachungsverein (Technical Inspection Association) as Germany's most reliable car. | https://en.wikipedia.org/wiki?curid=24365 |
Porsche 924
The Porsche 924 is a sports car produced by Porsche AG of Germany from 1976 to 1988. A two-door, 2+2 coupé, the 924 was intended to replace the Porsche 914 as the company's entry-level model.
Although the water-cooled, front-engined 928 gran turismo was designed first, the 924 was the first road-going Porsche to have a front-engine, rear-wheel-drive configuration. It was also the first Porsche to be offered with a fully automatic transmission.
The 924 made its public debut in November 1975. It was criticised by enthusiasts for its mediocre performance, but was a sales success, with just over 150,000 produced during a 1976–1988 production run, and an important profit generator for the company. The closely related 944, introduced in the U.S. market in 1983, was meant to replace the 924, but 924 production continued through 1985, followed by a 944-engined 924S through 1988.
The 924 was originally a joint project of Volkswagen and Porsche created by the Vertriebsgesellschaft (VG), the joint sales and marketing company funded by Porsche and VW to market and sell sports cars (Ludvigsen: "Porsche, Excellence was Expected"). For Volkswagen, it was intended to be that company's flagship coupé sports car and was dubbed "Project 425" during its development. For Porsche, it was to be its entry-level sports car replacing the 914. At the time, Volkswagen lacked a significant internal research and design division for developing sports cars; further, Porsche had been doing the bulk of the company's development work anyway, per a deal that went back to the 1940s. In keeping with this history, Porsche was contracted to develop a new sporting vehicle with the caveat that this vehicle must work with an existing VW/Audi inline-four engine. Porsche chose a rear-wheel drive layout and a rear-mounted transaxle for the design to help provide 48/52 front/rear weight distribution; this slight rear weight bias aided both traction and brake balance.
The 1973 oil crisis, a series of automobile-related regulatory changes enacted during the 1970s, and a change of directors at Volkswagen made the case for a Volkswagen sports car less compelling, and the 425 project was put on hold. After serious deliberation at VW, the project was scrapped entirely after a decision was made to move forward with the cheaper, more practical, Golf-based Scirocco model instead. Porsche, which needed a model to replace the 914, made a deal with Volkswagen leadership to buy the design back. The 914 was discontinued before the 924 entered production, which resulted in the reintroduction of the Porsche 912 to the North American market as the 912E for one year to fill the gap.
The deal specified that the car would be built at the ex-NSU factory in Neckarsulm, north of Porsche's headquarters in Stuttgart, with Volkswagen becoming the subcontractor: Volkswagen employees would do the actual production-line work, supervised by Porsche's own production specialists, while Porsche would own the design. The car made its debut at a November 1975 press launch at the harbour of La Grande Motte, Camargue, in the south of France, rather than at a motor show. The relative cheapness of building the car made it both profitable and fairly easy for Porsche to finance. While criticised for its performance, it nevertheless became one of Porsche's best-selling models.
The original design used an Audi-sourced four-speed manual transmission from a front-wheel-drive car, now placed and used as a rear transaxle. It was mated to VW's EA831 2.0 L I4 engine, variants of which were used in the Audi 100 and the Volkswagen LT van (a common belief is that the engine originated in the LT van, but it first appeared in the Audi car, and in 924 form it has a Porsche-designed cylinder head). The Audi engine, equipped with a Weber/Holley carburetor, was also used in the 1977–1979 AMC Gremlin, Concord, and Spirit, as well as the AMC postal jeeps. The 924 engine used Bosch K-Jetronic fuel injection, producing in North American trim. This was brought up to in mid-1977 with the introduction of a catalytic converter, which reduced the need for power-robbing smog equipment. The four-speed manual was the only transmission available for the initial 1976 model; it was later replaced by a five-speed dog-leg unit. An Audi three-speed automatic was offered starting with the 1977.5 model. In 1980, the five-speed transmission was changed to a conventional H-pattern, with reverse now on the right beneath fifth gear.
In 1980, the model received some minor changes, including a three-way catalyst and slightly higher compression, which brought power up to . Nonetheless, the strong Deutsche Mark and US inflation severely hampered sales, as a well-equipped 924 could now easily cost twice as much as the considerably more powerful Nissan 280ZX.
European models, which did not require any emissions equipment, made . They also differed visually from the US-spec model in lacking the US cars' low-speed impact bumpers and the round reflectors and side-marker lamps at each end of the body.
The 924 was sold in Japan at Mizwa Motors dealerships, which specialized in North American and European vehicles, and was offered only in left-hand drive for its entire generation. Sales were helped by the fact that it complied with Japanese Government dimension regulations with regard to engine displacement and exterior dimensions.
A five-speed transmission, available in normally aspirated cars (type 016) starting in 1979 and standard on all turbos (type G31), was a Porsche unit with a dog-leg shift pattern, first gear sitting below reverse on the left side. It was robust but expensive, as it used some 915 internal parts, and was replaced for 1980 with a conventional H-pattern Audi five-speed on all non-turbo cars. This lighter-duty design was originally not used on the more powerful 924 Turbo. The brakes were solid discs at the front and drums at the rear, an arrangement criticized in "Car and Driver" magazine as a step backward from the 914's standard four-wheel disc brakes. However, four-wheel disc brakes, five-stud hubs and alloys from the 924 Turbo became available on the base 924 as an "S" package starting with the 1980 model year, while standard brakes could be optioned on the turbo as a cost-saving measure.
The overall styling was created by Dutch designer Harm Lagaay, a member of the Porsche styling team, with the folding headlights, sloping bonnet line and grille-less nose giving the car its popular wedge shape. The car went on sale in the US in July 1976 as a 1977 model with a base price of $9,395. Porsche made small improvements to the 924 each model year between 1977 and 1985, but nothing major was changed on non-turbo cars. Turbocharged variants received many non-VW-sourced parts throughout the drivetrain, and when optioned with the M471 disc-brake package and forged 16" wheels, the car cost twice as much as a standard model. Its appearance has been credited as the inspiration for the second-generation Mazda RX-7.
J. Pasha, writing in "Excellence" magazine at the time, described the 924 as "the best handling Porsche in stock form".
While the car was praised for its styling, handling, fuel economy, and reliability, it was harshly written up in the automotive press for its very poor performance, especially in US-spec cars. With only 95–110 hp, rapid acceleration was simply not an option, but the Porsche name carried higher expectations. When the 924 Turbo models came out, "Car and Driver" magazine proclaimed the car "Fast...at Last!" The later 924S had performance on par with the Turbo, but with much improved reliability and at lower cost. The '81 and '82 Turbos and their associated special variants are garnering interest in collector circles; while many still exist, excellent examples are now quite scarce.
The 924 was discontinued in 1988, with Porsche concentrating on producing the faster 944 as its entry-level model.
* includes 3000 special edition "Martini" cars
† includes 1002 special edition "Le Mans" cars
‡ includes 1015 special edition "50 Jahre Porsche/Weissach" cars.
* sum total of cars brought into US and Japan
† Refers to the 1979 Porsche 924 Turbo. Few were made, and given the age of the vehicle they have become very rare; in 2009, fewer than 10 right-hand-drive 1979 Porsche 924 Turbo S1s were reported worldwide.
^ cars brought only into Italy
There was also a sport package for the 924S, available for the ROW and US markets, for which production data is stated below.
Porsche executives soon recognized the need for a higher-performance version of the 924 that could bridge the gap between the basic 924s and the 911s. Having already found the benefits of turbochargers on several race cars and the 1975 911 Turbo, Porsche chose to use this technology for the 924, eventually introducing the 924 Turbo as a 1978 model.
Porsche started with the same Audi-sourced VW EA831 2.0 L I4, designed an all new cylinder head (which was hand assembled at Stuttgart), dropped the compression to 7.5:1 and engineered a KKK K-26 turbocharger for it. With of boost, output increased to at 5,500 rpm and of torque at 3,500 rpm. The 924 Turbo's engine assembly weighed about more, so front spring rates and anti-roll bars were revised. Weight distribution was now 49/51 compared to the original 924 figure of 48/52 front to rear.
In order to make the car more functional, as well as to distinguish it from the naturally aspirated version, Porsche added a NACA duct in the hood and air intakes in the badge panel in the nose, 15-inch spoke-style alloy wheels, four-wheel disc brakes with five-stud hubs and a five-speed transmission. Forged 16-inch flat wheels of the style used on the 928 were optional, but their fitment specification was that of the 911, with which the 924 shared wheel offsets. Internally, Porsche called it the "931" (left-hand drive) and "932" (right-hand drive), much like the 911 Carrera Turbo, which had been "Type 930". These designations are commonly used by 924 aficionados.
The turbocharged VW EA831 engine allowed the 924's performance to come surprisingly close to that of the 911 SC (), thanks in part to a lighter curb weight, but it also brought reliability problems, partly because the general public did not know how to operate or care for what is, by today's standards, a primitive turbo setup.
A turbocharger cooled only by engine oil led to short component life and turbo-related seal and seat problems. To fix the problems, Porsche released a revised 924 Turbo Series 2 (although badging still read "924 turbo") in 1979. By using a smaller turbocharger running at increased boost, slightly higher compression of 8:1 and an improved fuel injection system with DITC ignition triggered by the flywheel, reliability improved and power rose to .
In North America, the 924 Turbo arrived in late 1979 for the 1980 model year. It was saddled with extra weight, due to the federally mandated large bumpers and other safety equipment, and less power due to stringent emissions controls. Power was , nearly twenty percent down on the European model. For the 1981 model year, power increased slightly to and the transmission was switched to one with a regular H-pattern layout.
In 1979, Porsche unveiled a concept version of the 924 at the Frankfurt Auto Show wearing Carrera badges. One year later, in 1980, Porsche released the 924 Carrera GT, making clear their intention to enter the 924 in competition. By adding an intercooler and increasing compression to 8.5:1, as well as various other small changes, Porsche was able to develop the 924 Turbo into the race car they had wanted, dubbing it the "924 Carrera GT". 406 examples (including prototypes) of the Carrera GT were built to qualify it for Group 4 racing requirements. Of the 400 roadgoing examples, 75 were made in right-hand drive for the UK market. In 1981 Porsche released the limited-production 924 Carrera GTS; 59 GTS models were built, all in left-hand drive, with 15 of the 59 being race-prepped Clubsport versions.
Visually, the Carrera GT differed from the standard 924 Turbo in that it had polyurethane plastic front and rear flared guards, a polyurethane plastic front spoiler, a top-mounted air scoop for the intercooler, a much larger rubber rear spoiler and a flush-mounted front windscreen. It featured Pirelli P6 tires as standard, with Pirelli P7 tires available as an option along with a limited-slip differential. It lost the 924 Turbo's NACA duct in the hood but retained the air intakes in the badge panel. This more aggressive styling later served as inspiration for the 944. The later Carrera GTS differed stylistically from the GT in having fixed headlamps under Perspex covers (instead of the GT's pop-up units). GTS models were also lighter than their GT counterparts at , and Clubsport versions were lighter still at .
In order to comply with the homologation regulations, the 924 Carrera GT and later 924 Carrera GTS were offered as road cars, producing 210 and 245 hp (157 and 183 kW) respectively. Clubsport versions of the GTS were also available with , and included a factory-fitted Matter roll cage and race seats. 924 Carrera GT variations were known by model numbers 937 (left-hand drive) and 938 (right-hand drive).
The ultimate development of the 924 in race trim was the 924 Carrera GTR race car, which produced from a highly modified version of the 2.0 L I4 used in all 924s, and weighed in at . This allowed a 0–60 mph (97 km/h) time of 4.7 seconds and a top speed of . In 1980, Porsche entered three 924 GTRs at the 24 Hours of Le Mans, where they finished 6th, 12th and 13th overall. Porsche also built a 924 GTR rally car and two further GTRs (the Miller and BF Goodrich cars). 17 (some sources say 19) Carrera GTRs were built in total.
Lastly, in 1981, Porsche entered one of two specially built 924 Carrera GTPs (the "944GTP Le Mans"), in which Porsche Motorsports introduced a new prototype, highly modified 2.5-litre I4 engine. This engine sported four valves per cylinder, dual overhead camshafts, twin balance shafts and a single K28 turbocharger to produce . The car managed a seventh-place overall finish and spent less time in the pits than any other car. This 2.5-litre engine configuration is the predecessor of the 944 powerplant and the later 1987–88 944S 16V M44/40 unit.
Production of the 924 Turbo ceased in 1982, except for the Italian market, where it lasted until 1984. This was due to Italian tax restrictions on engines larger than two litres, which put the forthcoming 2.5-litre 944 into a much higher tax category.
In 1984, VW decided to stop manufacturing the engine blocks used in the 2.0 L 924, leaving Porsche with a predicament. The 924 was considerably cheaper than its 944 stablemate, and dropping the model left Porsche without an affordable entry-level option. The decision was made to equip the narrower bodied 924 with a slightly detuned version of the 944's 163 bhp 2.5 litre straight four, upgrading the suspension and adding 5 lug wheels and 944 style brakes, but retaining the 924's early interior. The result was 1986's 148 bhp 924S. Porsche also decided to re-introduce the 924 to the American market with an initial price tag of under $20,000.
In 1988, the 924S' final year of production, power increased to , matching that of the previous year's Le Mans spec cars and the base model 944 (itself detuned by for 1988). This was achieved with different pistons which raised the S' compression ratio from 9.7:1 to 10.2:1, the knock-on effect being an increase in the required fuel octane rating, from 91 RON to 95 RON. This made the 924S slightly faster than the base 944, thanks to its lighter weight and more aerodynamic body. The 1988 model also gained three-point safety belts in the rear seats.
With unfavourable exchange rates in the late 1980s, Porsche decided to focus its efforts on its more upmarket models, dropping the 924S for 1989 and the base 944 later that same year.
The 1988 924S SE (US) and "Le Mans" (ROW) were Club Sport editions aimed at autocross (known as autotests in the UK) and club racers.
The final 924S RHD 'run-out' versions in 1988 for the UK (just 37 white and 37 black vehicles) carried "Le Mans" logos with stripes on their flanks. Officially known at Porsche as the "Sportliches Sondermodell" (loosely, "sporting special model"), their M-755 options package was more complete than the M-756 Special Edition package for the US.
Only 980 Club Sport option cars were built in total:
500 units M-756 for the US, black only;
250 units M-755 for Germany (200 black, 50 white);
230 units M-755 for the rest of the world (113 black, 117 white); totalling 480 M-755 units.
ROW "Le Mans" Edition M-755:
Only on the final 74 GB-supplied RHD cars were the exterior side stripes broken by scripted 'Le Mans' logos on the lower part of the door, while the rims of the holes in each wheel were finished in either Ochre (white cars) or Turquoise (black cars). Inside, all the cars featured cloth-upholstered "Turbo" sports seats, with the cloth door panels also colour-coded. They had the steering wheel, and all 74 British M-755 cars came with a engine plus an electric tilt/removable sunroof fitted as standard. They were lowered 10 mm (0.39 in) at the front and 15 mm (0.59 in) at the rear, and fitted with stiffer springs and gas-filled shock absorbers all round. They also had 'Sport' anti-roll bars with diameters of 21.5 mm (0.85 in) at the front and 20 mm (0.79 in), rather than 14 mm (0.55 in), at the rear. Wheels were 'telephone dial' cast alloy 6J x 15s at the front and 7J x 15s at the rear.
ROW M-755 Paint finishes and interiors were also only offered in two colour choices – Alpine White with Ochre/Grey detailing and upholstery – or Black with Turquoise detailing and grey/turquoise upholstery.
On ROW cars there was no Le Mans logo or striping, and the 'phone dial' wheels, colour-matched in white or black, had outer rims in ochre or turquoise respectively. ROW upholstery was grey/ochre striped flannel cloth with ochre piping for Alpine White cars, or grey/turquoise flannel with turquoise piping for Black cars.
US market SE:
A black-only paint scheme with optional SE Edition decal. Equipped with manual steering, manual windows and door locks, sunroof delete, radio delete, AC delete, cruise delete, passenger-side door mirror delete, wider 15x7 phone-dial alloys at the rear while retaining 15x6 in front, and the M030 package, which included stiffer springs and Koni shocks. These cars also had the same 'Sport' anti-roll bars (21.5 mm front, 20 mm rear) and stiffer springs as the ROW and UK cars. The cars had a unique lightweight gray knit cloth upholstery (which deteriorated very quickly) with maroon pinstriping, and maroon carpeting. The sunroof, A/C, cruise control, power steering, passenger door mirror, and radio could be added back as options.
The 924 has its own racing series in the UK run by the BRSCC and Porsche Racing Drivers Association. The Porsche 924 Championship was started in 1992 by Jeff May who was championship coordinator until his death on 10 November 2003. Jeff May was also one of the founding members of Porsche Club Great Britain.
In the United States, the 924S is also eligible to race in the 944-Spec racing class.
Pain
Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain's widely used definition describes pain as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage". In medical diagnosis, pain is regarded as a symptom of an underlying condition.
Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease.
Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, hypnotic suggestion, cognitive behavioral therapy, excitement, or distraction can affect pain's intensity or unpleasantness. In some debates regarding physician-assisted suicide or euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives.
Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called "chronic" or persistent, and pain that resolves quickly is called "acute". Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval between onset and resolution; the two most commonly used markers are 3 months and 6 months since onset, though some theorists and researchers place the transition from acute to chronic pain at 12 months. Others apply "acute" to pain that lasts less than 30 days, "chronic" to pain of more than six months' duration, and "subacute" to pain lasting from one to six months. A popular alternative definition of chronic pain, involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as cancer pain or as benign.
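One of the duration conventions above (acute under 30 days, subacute one to six months, chronic beyond six months) can be sketched as a small function. The cut-offs come from the text; the function name and the 182-day approximation of six months are illustrative choices, and other schemes would use 3-, 6-, or 12-month boundaries instead:

```python
def classify_pain_duration(days: float) -> str:
    """Classify pain by duration using one common convention:
    acute < 30 days, subacute 1-6 months, chronic > 6 months.
    (Other conventions place the acute/chronic transition at
    3, 6, or even 12 months.)"""
    if days < 30:
        return "acute"
    elif days <= 182:  # roughly six months
        return "subacute"
    else:
        return "chronic"
```

Under this convention, pain at 90 days is "subacute", while the same pain a year after onset is "chronic".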
Allodynia is pain experienced in response to a normally painless stimulus. It has no biological function and is classified by stimuli into dynamic mechanical, punctate and static. In osteoarthritis, NGF has been identified as being involved in allodynia. The extent and intensity of sensation can be assessed through locating trigger points and the region of sensation, as well as utilising phantom maps.
Phantom pain is pain felt in a part of the body that has been amputated, or from which the brain no longer receives signals. It is a type of neuropathic pain.
The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 67% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts of pain per day, or it may reoccur less often. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation.
Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients.
Mirror box therapy produces the illusion of movement and touch in a phantom limb which in turn may cause a reduction in pain.
Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.
Breakthrough pain is transitory pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients who often have background pain that is generally well-controlled by medications, but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl.
The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury.
Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus.
Insensitivity to pain may also result from abnormalities in the nervous system. This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly-healing foot ulcers as a result of decreased sensation.
A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly-repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which includes familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the "SCN9A" gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli.
Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory, mental flexibility, problem solving, and information processing speed. Acute and chronic pain are also associated with increased depression, anxiety, fear, and anger.
Although pain is considered aversive and unpleasant and is therefore usually avoided, a meta-analysis which summarized and evaluated numerous studies from various psychological disciplines found a reduction in negative affect following acute pain. Across studies, participants subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding also reflected in physiological parameters. A potential mechanism to explain this effect is provided by the opponent-process theory.
Before the relatively recent discovery of neurons and their role in pain, various body functions were proposed to account for it. There were several competing early theories of pain among the ancient Greeks: Hippocrates believed that it was due to an imbalance in vital fluids. In the 11th century, Avicenna theorized that there were a number of feeling senses including touch, pain and titillation.
In 1644, René Descartes theorized that pain was a disturbance that passed down along nerve fibers until the disturbance reached the brain. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact.
Wilhelm Erb's (1874) "intensive" theory, that a pain signal can be generated by intense enough stimulation of "any" sensory receptor, has been soundly disproved. Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined.
The pain signal travels from the periphery to the spinal cord along an A-delta or C fiber. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These "first order" neurons enter the spinal cord via Lissauer's tract.
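The velocity figures above make the two-phase character of pain easy to quantify. As a rough back-of-the-envelope sketch (the 1 m path length and the representative speeds of 15 m/s for A-delta and 1 m/s for C fibers are illustrative values picked from within the ranges given):

```python
def arrival_time(distance_m: float, velocity_m_per_s: float) -> float:
    """Time for a signal to travel a nerve fiber of the given length."""
    return distance_m / velocity_m_per_s

distance = 1.0  # a stimulus at the foot, roughly 1 m from the spinal cord

t_a_delta = arrival_time(distance, 15.0)  # A-delta: 5-30 m/s, take 15
t_c = arrival_time(distance, 1.0)         # C fiber: 0.5-2 m/s, take 1

# The sharp, myelinated A-delta "first pain" arrives in ~0.07 s,
# roughly 0.9 s ahead of the dull, unmyelinated C-fiber "second pain".
```

This gap between the fast and slow signals is why a stubbed toe is felt first as a sharp jab and only afterwards as a duller burn.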
These A-delta and C fibers connect with "second order" nerve fibers in the central gelatinous substance of the spinal cord (laminae II and III of the dorsal horns). The second order fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract.
Second order, spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals to the thalamus have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the large A-beta fibers that carry touch, pressure and vibration signals. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain). Pain that is distinctly located also activates primary and secondary somatosensory cortex.
In 1955, DC Sinclair and G Weddell developed peripheral pattern theory, based on a 1934 suggestion by John Paul Nafe. They proposed that all skin fiber endings (with the exception of those innervating hair cells) are identical, and that pain is produced by intense stimulation of these fibers. Another 20th-century theory was gate control theory, introduced by Ronald Melzack and Patrick Wall in the 1965 "Science" article "Pain Mechanisms: A New Theory". The authors proposed that both thin (pain) and large diameter (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that the more large fiber activity relative to thin fiber activity at the inhibitory cell, the less pain is felt.
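The core claim of gate control theory, that large-fiber activity at the inhibitory cell suppresses onward transmission of thin-fiber pain signals, can be caricatured in a toy linear model. The subtraction form and the gain value here are illustrative assumptions, not Melzack and Wall's actual formulation:

```python
def gate_output(thin_fiber: float, large_fiber: float, gain: float = 0.8) -> float:
    """Toy gate: transmitted pain falls as large-fiber (touch, pressure,
    vibration) activity rises relative to thin-fiber (pain) activity.
    The linear form and the 0.8 gain are arbitrary illustrative choices."""
    return max(0.0, thin_fiber - gain * large_fiber)

# Rubbing a bumped elbow adds large-fiber input, closing the gate somewhat:
pain_alone = gate_output(thin_fiber=1.0, large_fiber=0.0)   # 1.0
pain_rubbed = gate_output(thin_fiber=1.0, large_fiber=0.5)  # 0.6
```

The everyday observation that rubbing an injury dulls the pain is the kind of phenomenon the theory was built to explain, and the toy model reproduces that direction of effect.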
In 1968 Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions:
They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities "may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both dimensions of pain, while suggestion and placebos may modulate the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed." (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)
Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals, and avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy.
In "", biologist Richard Dawkins addresses the question of why pain should have the quality of being painful. He describes the alternative as a mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins argues that drives must compete with one other within living beings. The most "fit" creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors. This resemblance will not be perfect, however, because natural selection can be a poor designer. This may have maladaptive results such as supernormal stimuli.
Pain, however, does not only wave a "red flag" within living beings but may also act as a warning sign and a call for help to other living beings. Especially in humans who readily helped each other in case of sickness or injury throughout their evolutionary history, pain might be shaped by natural selection to be a credible and convincing signal of need for relief, help, and care.
Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause) may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious.
In pain science, thresholds are measured by gradually increasing the intensity of a stimulus in a procedure called "quantitative sensory testing" which involves such stimuli as electric current, thermal (heat or cold), mechanical (pressure, touch, vibration), ischemic, or chemical stimuli applied to the subject to evoke a response. The "pain perception threshold" is the point at which the subject begins to feel pain, and the "pain threshold intensity" is the stimulus intensity at which the stimulus begins to hurt. The "pain tolerance threshold" is reached when the subject acts to stop the pain.
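The ascending procedure described above can be sketched in code. This is a hypothetical illustration, not a published QST protocol; the stimulus units, step size, and the `feels_pain`/`tolerates` callbacks are all assumptions standing in for a real subject's responses.

```python
def ascending_limits(feels_pain, tolerates, start=0.0, step=0.5, max_intensity=50.0):
    """Gradually increase stimulus intensity and record two thresholds.

    feels_pain(i) -> bool: does the subject report pain at intensity i?
    tolerates(i) -> bool: does the subject allow the stimulus to continue?
    Returns (pain_perception_threshold, pain_tolerance_threshold);
    either may be None if the maximum intensity is reached first.
    """
    perception = tolerance = None
    intensity = start
    while intensity <= max_intensity:
        if perception is None and feels_pain(intensity):
            perception = intensity   # first point reported as painful
        if perception is not None and not tolerates(intensity):
            tolerance = intensity    # subject acts to stop the stimulus
            break
        intensity += step
    return perception, tolerance

# A simulated subject who first feels pain at 10 units and stops it at 30:
print(ascending_limits(lambda i: i >= 10, lambda i: i < 30))  # (10.0, 30.0)
```

In a real laboratory the callbacks would be replaced by a button press from the subject; the structure of the loop, however, mirrors the definitions given above.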
A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate pain severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating which words best describe their pain.
The visual analogue scale is a common, reproducible tool in the assessment of pain and pain relief. The scale is a continuous line anchored by verbal descriptors, one at each extreme, where a higher score indicates greater pain intensity. It is usually 10 cm in length, with no intermediate descriptors so as to avoid clustering of scores around a preferred numeric value. When applied as a pain descriptor, these anchors are often "no pain" and "worst imaginable pain". Cut-offs for pain classification have been recommended as no pain (0–4 mm), mild pain (5–44 mm), moderate pain (45–74 mm) and severe pain (75–100 mm).
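As a sketch, the recommended cut-offs can be expressed as a simple classifier. The function name is invented for illustration; the boundaries are the ones quoted above, which are one published recommendation rather than a universal standard.

```python
def classify_vas(score_mm: float) -> str:
    """Classify a mark on a 100 mm visual analogue scale line."""
    if not 0 <= score_mm <= 100:
        raise ValueError("a VAS mark must lie on the 100 mm line")
    if score_mm < 5:
        return "no pain"        # 0-4 mm
    if score_mm < 45:
        return "mild pain"      # 5-44 mm
    if score_mm < 75:
        return "moderate pain"  # 45-74 mm
    return "severe pain"        # 75-100 mm

print(classify_vas(3), "|", classify_vas(60))  # no pain | moderate pain
```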
The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description.
Non-verbal people cannot use words to tell others that they are experiencing pain. However, they may be able to communicate through other means, such as blinking, pointing, or nodding.
With a non-communicative person, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding (trying to protect part of the body from being bumped or touched) indicate pain, as well as an increase or decrease in vocalizations, changes in routine behavior patterns and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary. Changes in behavior may be noticed by caregivers who are familiar with the person's normal behavior.
Infants do feel pain, but lack the language needed to report it, and so communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant which may not be obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than those carried to full term.
Another approach, when pain is suspected, is to give the person treatment for pain, and then watch to see whether the suspected indicators of pain subside.
The way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the same way that a younger person might. Their ability to recognize pain may be blunted by illness or the use of medication. Depression may also keep an older adult from reporting they are in pain. A decline in self-care may also indicate that the older adult is experiencing pain. They may be reluctant to report pain because they do not want to be perceived as weak, may feel it is impolite or shameful to complain, or may feel the pain is a form of deserved punishment.
Cultural barriers may also affect the likelihood of reporting pain. Sufferers may feel that certain treatments go against their religious beliefs. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction, and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while other cultures feel they should report pain immediately to receive immediate relief.
Gender can also be a factor in reporting pain. Gender differences can be the result of social and cultural expectations, with women expected to be more emotional and show pain, and men more stoic. As a result, female pain is often stigmatized, leading to less urgent treatment based on social expectations of women's ability to accurately report it, extended emergency room wait times for women, and frequent dismissal of their reports.
Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection.
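An electronic record of such a history might capture these elements as structured fields. The class below is an illustrative sketch, not a clinical standard; all field names are assumptions chosen to mirror the characteristics listed above.

```python
from dataclasses import dataclass, field

@dataclass
class PainHistory:
    """Elements of a pain history that aid diagnosis (illustrative only)."""
    onset: str                      # e.g. "sudden, two hours ago"
    location: str                   # e.g. "substernal"
    intensity: int                  # 0-10 self-report scale
    pattern: str                    # "continuous" or "intermittent"
    quality: str                    # e.g. "heavy", "tearing", "burning"
    exacerbating: list = field(default_factory=list)
    relieving: list = field(default_factory=list)

# Chest pain described as extreme heaviness, as in the example above:
h = PainHistory("sudden, two hours ago", "substernal", 9,
                "continuous", "heavy", exacerbating=["exertion"])
print(h.quality)  # heavy
```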
Functional magnetic resonance imaging brain scanning has been used to measure pain, and correlates well with self-reported pain.
Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be classed according to the site of origin and divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. "Deep somatic" pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. "Superficial somatic" pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.
Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain.
Nociplastic pain is pain characterized by altered nociception, without clear evidence of actual or threatened tissue damage, or of disease or damage in the somatosensory system. Fibromyalgia is a representative example.
Psychogenic pain, also called "psychalgia" or "somatoform pain", is pain caused, increased, or prolonged by mental, emotional, or behavioral factors. Headache, back pain, and stomach pain are sometimes diagnosed as psychogenic. Sufferers are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less actual or hurtful than pain from any other source.
People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points the other direction, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.
Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, and accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end of life care. This neglect extends to all ages, from newborns to medically frail elderly. African and Hispanic Americans are more likely than others to suffer unnecessarily while in the care of a physician; and women's pain is more likely to be undertreated than men's.
The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a medical specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch warned that tens of millions of people worldwide were still being denied access to inexpensive medications for severe pain.
Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Ketamine can be used instead of opioids for short-term pain. Management of chronic pain, however, is more difficult, and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners.
List of pacifist organisations
A pacifist organization promotes the pacifist principle of renouncing war and violence for political ends. Such organizations are distinguished from those concerned only with removing nuclear weapons from war, though the latter may call for suspension of hostilities as well. Still others deal chiefly with other concerns but have a strong pacifist element.
Pacifist organizations:
"Nuclear pacifist" organizations :
Organizations that cite pacifism as an aim:
Porsche 944
The Porsche 944 is a sports car manufactured by German automobile manufacturer Porsche from 1982 to 1991. A front-engine, rear-wheel drive mid-level model based on the 924 platform, the 944 was available in coupé or cabriolet body styles, with either naturally aspirated or turbocharged engines.
The 944 was to continue production into the 1990s, but the major revisions planned for a 944 "S3" model eventually morphed into the 968, which became its replacement. Over 163,000 cars were produced in total, making the 944 the most successful sports car in Porsche's history until the introductions of the Boxster and 997 Carrera.
The 924 had originally been a project of VW-Porsche, a joint Porsche/Volkswagen company incorporated to develop and produce the 914, which was sold in Europe badged as both a Porsche and a Volkswagen. In 1972, a replacement for the Volkswagen version of the 914, code-named EA-425, began development. The model was to be sold as an Audi as part of the VW-Audi-Porsche marketing arrangement, with Porsche manufacturing its own version of the car. At one point, Volkswagen head Rudolf Leiding declared that the EA-425 would be a Volkswagen exclusively, thus denying Porsche its version of the 914's replacement. Although testing had begun in the spring of 1974, Volkswagen cancelled the EA-425 program, owing to significant financial losses from declining sales, rising development costs for new vehicles, and the departure of Leiding. The recently introduced Volkswagen Scirocco was expected to fill the sports coupé market segment, and the unfinished project was handed over to Audi to serve as the replacement for the Audi 100.
The cancellation of the EA-425 program led Porsche to market an entry-level car to replace the 912E, a US-only stop-gap model for 1976, and its version of the 914, which was discontinued in 1975. Porsche purchased the design and the finished development mule, with a Bosch K-Jetronic mechanical fuel injection system, from Volkswagen. The vehicle, dubbed the 924, received positive reviews, but was criticised by Porsche enthusiasts for its Audi-sourced 2.0 L engine. In 1979, Porsche introduced a turbocharged version of the 924 to increase performance, but this model carried a high price. Rather than dropping the model from its line-up, Porsche decided to develop the 944, as it had done with generations of the 911; although model numbers would change, the 924 would provide the basis for this new mid-level model.
The prototype of this mid-level model debuted at Le Mans in 1981, an unusual strategy for Porsche at the time. Called the 924 GTP Le Mans, the car was based on the 924 Carrera GT Le Mans that had competed in the event prior to the GTP's introduction. The most noticeable change in the new race car was the departure from the Audi-sourced 2.0 L inline-four engine in favour of the 2.5 L engine developed by Porsche. The new engine was mounted at an angle of 45 degrees to the right and utilised dual overhead camshafts along with counter-rotating balance shafts, an unusual and unique feature for its time that provided better weight distribution and ensured smooth power delivery by eliminating inherent vibrations, helping the engine last longer. A single KKK turbocharger producing enabled the engine to generate a maximum power output of at 6,800 rpm. The engine also utilised Bosch's prototype Motronic engine management system to control ignition timing, fuel injection and boost pressure. The new race car proved to be much more fuel efficient than its predecessor, stopping only 21 times in 24 hours for fuel. The 924 GTP managed seventh position overall in 1981, behind the race-winning 936, before being retired and stored in the Porsche museum. In 1982, Porsche debuted the production road-legal version of the race car, called the 944. The car utilised many technologies of its race-bred sibling, including the balance shafts and the engine management system, but power was toned down for safety purposes.
The new all-alloy inline-four engine, with a bore of and stroke of , was, in essence, half of the 928's 5.0 L V8 engine, although very few parts were actually interchangeable. Atypically for a luxury sports car, the four-cylinder engine was chosen for fuel efficiency and size, because it had to be fitted from below on the Neckarsulm production line. To overcome roughness caused by the unbalanced secondary forces that are typical of inline four-cylinder engines, Porsche included two counter-rotating balance shafts running at twice the engine speed. Invented in 1904 by British engineer Frederick Lanchester, and further developed and patented in 1975 by Mitsubishi Motors, balance shafts carry eccentric weights which produce inertial forces that cancel the unbalanced secondary forces, making a four-cylinder engine feel as smooth as a six-cylinder engine. Porsche spent some time trying to develop its own system, but when it became clear that it could not improve on the system developed by Mitsubishi, the company chose to pay the licensing fees rather than come up with a variation just different enough to circumvent the patent. The licensing fees were about US$7–8 per car, which translated to about US$100 for the consumer. The engine was factory-rated at in its U.S. configuration. Revised bodywork with wider wheel arches, similar to that of the 924 Carrera GT, a fresh interior and upgrades to the braking and suspension systems rounded out the major changes.
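The cancellation effect of the counter-rotating shafts can be sketched numerically. All masses and dimensions below are illustrative assumptions (not the 944's actual figures); the point is that the net second-order force of an inline-four oscillates at twice crank speed, which is why the balance shafts are geared to run at 2x.

```python
import math

# Illustrative engine parameters (assumed, not the 944's actual values)
M_RECIP = 0.6     # reciprocating mass per cylinder, kg
R_CRANK = 0.039   # crank radius (half the stroke), m
L_ROD = 0.136     # connecting-rod length, m
OMEGA = 6000 * 2 * math.pi / 60   # crank speed at 6,000 rpm, rad/s

def secondary_force(theta):
    """Net vertical second-order shaking force of an inline-four.

    Primary forces cancel between the paired cylinders, but the
    second-order terms of all four pistons add, oscillating at 2*omega.
    """
    return 4 * M_RECIP * R_CRANK * OMEGA**2 * (R_CRANK / L_ROD) * math.cos(2 * theta)

def balance_shaft_force(theta):
    """Two counter-rotating shafts at twice crank speed, with eccentric
    weights sized to generate the equal and opposite vertical force."""
    return -secondary_force(theta)

# At any crank angle, the residual vibration is zero:
theta = math.radians(30)
print(round(secondary_force(theta) + balance_shaft_force(theta), 6))  # 0.0
```

Even with these modest assumed values, the uncancelled force peaks in the thousands of newtons at 6,000 rpm, which is why the roughness is so noticeable in an unbalanced four.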
Porsche introduced the 944 for the 1982 model year. It was slightly faster (despite having a poorer drag coefficient), was better equipped and more refined than the 924; it had better handling and stopping power, and was more comfortable to drive. The factory-claimed a 0–97 km/h (60 mph) acceleration time of less than 9 seconds (8.3 seconds according to "Porsche the Ultimate Guide" By Scott Faragher). The car had a nearly even front to rear weight distribution (50.7% front/49.3% rear) courtesy of the rear transaxle balancing out the engine in the front. North American-market cars had bigger bumpers and the front bumper had a larger rubber portion, replacing the auxiliary lights as required by the North American laws.
In mid-1985, the 944 underwent its first significant changes, these included: new dashboard and door panels, embedded radio antenna, upgraded alternator (from 90 amp to 115 amp), increased oil sump capacity, new front and rear cast alloy control arms and semi-trailing arms, larger fuel tank, optional heated and powered seats, Porsche HiFi sound system, and revisions in the mounting of the transaxle to reduce noise and vibration. The front windshield was now a flush-mounted unit. The "cookie cutter" style wheels used in the early 944s were upgraded to new "phone dial" style wheels (Fuchs wheels remained an option).
For the 1987 model year, the 944 Motronic DME was updated, and newly incorporated elements included anti-lock braking system and airbags. Because of the ABS, the wheel offset was changed to and Fuchs wheels were no longer available as an option.
In early 1989 before the release of the 944S2, Porsche upgraded the 944's engine from the 2.5 L four cylinder engine to a 2.7 L engine having a bore of and stroke of , with a rated power output of (versus for the 1988 2.5 L version) and a significant increase in torque. In addition to the increase in displacement, the new engine featured a siamesed-cylinder block design and a different cylinder head which incorporated larger valves.
In 1983, American tuning company Callaway Cars began offering a turbocharged package for the US-spec 944 in collaboration with Porsche. The standard 2.5 L inline-four engine was not well suited to forced induction because of its high 9.5:1 compression ratio, which made the engine prone to failure under boost, and its complex Bosch Motronic engine management system. Callaway engineers overcame this by increasing the volume of the engine's combustion chambers, milling away metal from both piston heads and chamber walls, and by tweaking the Motronic system to ensure optimum fuel injection for the turbocharged engine, along with installing their own Microfueler unit. This approach was highly effective, but required disassembly of the entire engine, leading to the high cost of the package. The resulting compression ratio was 8.0:1, lower than the standard engine's but conducive to linear power delivery. To guard against serious engine breakdowns, Callaway installed an internal wastegate and recommended the use of 91-octane fuel for increased engine reliability. In addition, an IHI RHB6 turbocharger was installed on the right-hand side of the engine along with a new free-flow exhaust system incorporating a larger exhaust pipe for optimum performance. The small turbocharger eliminated turbo lag, ensuring linear boost delivery. The turbocharger produced 10 psi of boost; a boost-adjuster knob on the dashboard was optional. With these modifications, the engine generated a power output of at 6,000 rpm and at 4,000 rpm, as opposed to the standard car's at 5,500 rpm. Performance increased over the standard car as well, with a acceleration time of 5.9 seconds and a top speed of . Callaway claimed that acceleration times would be even lower with the rev limiter removed. Only 20 cars were produced, making it one of the rarest Porsche 944 variants.
For the 1986 model year, Porsche introduced the 944 Turbo, known internally as the 951. The Turbo had a turbocharged and intercooled version of the standard 944's engine that generated ( in the US) at 6,000 rpm. In 1987, "Car and Driver" tested the 944 Turbo and achieved a time of 5.9 seconds. The Turbo was the first Porsche production car to utilise a ceramic port liner to retain exhaust gas temperature, along with new forged pistons, and was also the first vehicle to produce an identical power output with or without a catalytic converter. The Turbo also featured several other changes, such as improved aerodynamics, notably an integrated front bumper, which featured the widest turn signals (indicators) fitted to any production car; a strengthened gearbox with a different final drive ratio; standard external oil coolers for both the engine and transmission; standard 16-inch wheels (optional forged Fuchs wheels); and a slightly stiffer suspension (progressive springs) to handle the extra weight. The Turbo's front and rear brakes were borrowed from the 911, with Brembo four-piston fixed calipers and 12-inch discs. ABS also came standard on US models. Engine component revisions, more than thirty in all, were made to the 951 to compensate for increased internal loads and heat.
Changes occurred for the 1987 model year. On the interior front, the North American variant of the 1987 944 Turbo became the first production car in the world to be equipped with driver and passenger side airbags as standard equipment. A low-oil-level light was added to the dash, as well as a speedometer as opposed to the speedometer on the 1986 model year cars. Also included were the deletion of the transmission oil cooler and a change in suspension control arms to reduce the car's scrub radius. The engine remained the same M44/51 inline-four as in the 1986 model.
In 1988, Porsche introduced the 944 Turbo S with a more powerful engine (designation number M44/52) rated at a maximum power output of at 6,000 rpm and of torque at 4,000 rpm (the engine in the standard 944 Turbo generated and ). This higher output was achieved by using a larger KKK K26-8 turbocharger housing and revised engine mapping, which allowed maximum boost to be maintained until 5,800 rpm; in the standard 944 Turbo, boost would decrease from at 3,000 rpm to at 5,800 rpm. In June 1988, "Car and Driver" tested the 944 Turbo S (with the advantage of a shorter final drive gear) and achieved a acceleration time of 5.5 seconds and a quarter-mile time of 13.9 seconds at . Top speed was factory-rated at .
The 944 Turbo S' suspension had the "M030" option consisting of Koni adjustable shocks at the front and rear, with ride height adjusting threaded collars on the front struts, progressive rate springs, larger hollow rear anti-roll/torsion bars, harder durometer suspension bushings, larger hollow anti-roll/torsion bars at the front, and chassis stiffening brackets in the front frame rails. The air conditioning dryer lines were routed so as to clear the front frame brace on the driver's side. The 944 Turbo S wheels, known as the Club Sport design, were 16-inch Fuchs forged and flat-dished, similar to the Design 90 wheel. Wheel widths were at the front, and at the rear with a offset; sizes of the Z-rated tyres were 225/50 in the front and 245/45 in the rear. The front and rear fender edges were rolled to accommodate the larger wheels. The manual transmission (case code designation: AOR) featured a higher friction clutch disc setup, an external cooler, and a limited-slip differential with a 40% lockup setting. The Turbo S' front brakes were borrowed from the 928 S4, with larger Brembo GT 4-piston fixed calipers and 12-inch discs; rear Brembo brakes remained the same as a standard Turbo. ABS also came standard.
The 944 Turbo S' interior featured power seats for both driver and passenger. The majority of factory-built Turbo S models were the Silver Rose edition, pairing a distinctive exterior colour with "burgundy plaid" upholstery, but other interior/exterior colours were available. A 10-speaker sound system with equalizer and amplifier was a common option on the Turbo S and S/SE prototypes. Only the earlier 1986 prototypes featured a "special wishes custom interior" options package.
In 1989 and later production years, the "S" designation was dropped, and all turbocharged iterations of the 944 featured the Turbo S enhancements as standard; however, the "M030" suspension and the Club Sport wheels were not part of that standard equipment. The 944 Turbo S was the fastest production four-cylinder car of its time.
For the 1987 model year, the 944 S (the "S" standing for "Super") was introduced. The 944 S featured a high-performance, naturally aspirated, dual-overhead-cam 16-valve version of the 2.5 L engine (M44/40) with a self-adjusting timing belt tensioner. This marked the first use of four-valve-per-cylinder heads and DOHC in the 944, derived from the 928 S4 and featuring a redesigned camshaft drive, magnesium intake tract/passages, a magnesium valve cover, a larger-capacity oil sump, and a revised exhaust system. The alternator capacity was 115 amps. The wheel bearings were also strengthened and the brake servo action was made more powerful. Floating 944 calipers were standard, but the rear wheel brake circuit pressure regulator from the 944 Turbo was used. Small "16 Ventiler" script badges were added on the sides in front of the body protection mouldings. Performance figures included 0– being achieved in 6.5 seconds (best) and a top speed of due to a curb weight of . It also featured an improved Bosch Digital Motronic 2 computer/DME with dual knock sensors, for improved fuel performance with the higher 10.9:1 compression-ratio cylinder head. Like the 944 Turbo, the 944 S received progressive springs for improved handling, larger front and rear anti-roll bars, and revised transmission and gearing to better suit the 2.5 L DOHC engine's higher 6,800 rpm rev limit. Dual safety airbags, limited-slip differential, and an ABS braking system were optional on the 944 S.
A Club Sport touring package (M637) was available, as were the lightweight 16-inch CS/Sport Fuchs 16x7 and 16x9 forged alloy wheels. This version was raced in Canada, Europe and in the IMSA Firehawk Cup Series held in the U.S. Production ran only during 1987 and 1988, and the model was superseded in 1989 by the "S2" version. The 1987 944 S' power-to-weight ratio was such that it could accelerate from 0 to 100 km/h in 6.5 seconds, matching the acceleration of its newer, larger-displacement 3.0 L 944 S2 sibling.
In 1989 the 944 S2 was introduced, powered by a normally aspirated, dual-overhead-cam 16-valve 3.0 L version of the 944 S' engine. With a bore of and a stroke of , it was the largest production 4-cylinder engine of its time. The 944 S2 also received a revised transmission and gearing to better suit the 3.0 L M44/41 powerplant. The 944 S2 had the same rounded nose and a rear valance found on the Turbo model. Quoted performance figures included a 0–97 km/h acceleration time of 6.0 seconds (0–100 km/h being achieved in 6.8 seconds) and a top speed of for the cars with a manual transmission. A Club Sport touring package (M637) was also available. Dual air bags (left hand drive models), limited-slip differential and ABS were optional. Design 90 16-inch cast alloy wheels were standard equipment.
In 1989, Porsche introduced the 944 S2 Cabriolet, the first 944 to feature a convertible body style. The cabriolet body was manufactured by ASC (American Sunroof Company) in Weinsberg, Germany. The first year of production included 16 944 S2 Cabriolets manufactured for the U.S. market. For the 1990 model year, Porsche produced 3,938 cars for all markets, including right-hand-drive units for the United Kingdom, Australia and South Africa.
In February 1991, Porsche unveiled the 944 Turbo Cabriolet, which combined the Turbo S' engine with the cabriolet body, also built by ASC. Porsche initially announced that 600 cars would be made; ultimately 625 were built, 100 of which were right-hand drive for the United Kingdom, Japanese, Australian, and South African markets. None were imported to the Americas.
In early 1990, Porsche engineers began working on what they had intended to be the third evolution of the 944, the S3. As they progressed with the development process, they realised that so many parts were being changed that they had produced an almost entirely new vehicle. Porsche consequently shifted development from the 944 S/S2 to the car that would replace the 944 entirely, the 968. The 944's final year of production was 1991 with over 4,000 cars built and sold. In 1992, the 968 debuted and was sold alongside the 928 until 1995, when both water-cooled front engine models were discontinued without a direct successor.
In February 1992, Porsche UK received verbal agreement from Stuttgart for the production of a prototype "Sports Equipment" 944 S2 model, with subsequent approval to construct 15 vehicles for the UK market from the last 944 S2 coupés produced. A unique 30 mm-lower, fully adjustable Koni suspension with springs from the Turbo was used in combination with an upgraded 31 mm front stabiliser bar and an adjustable rear bar. Engine output was increased to with a re-map to improve torque above 4,250 rpm, and a unique sports exhaust system was fitted. Cosmetically, the "SE" was fitted with a Porsche colour-matched "Porsche Sport" steering wheel, a bi-plane rear spoiler, SE side decals and rear badging. The modifications resulted in improved acceleration in the higher rev range, flatter cornering, more precise steering and improved responsiveness, giving confidence-inspiring handling and an overall sharper response. The 944 S2 SE prototypes are regarded as the inspiration for, and in part the development of, the later 968 Club Sport.
A grand total of 163,192 cars in the 944 family were produced between 1982 and 1991, making it the most successful sports car in Porsche's history until the introduction of the Boxster/Cayman and 997 Carrera.
A total of 113,070 944s were made between 1982 and 1989, with 56,921 being exported to the United States. A joint venture between Porsche and Callaway resulted in 20 specially built turbo 944s for the US market in 1983.
A total of 25,245 944 Turbos were made, with 13,982 being exported to the United States.
† Includes 251 Turbo Cabriolets. A different source, Jerry Sloniger's article in the October 1991 issue of "Excellence", indicates that the factory built 525, of which 255 were exported to markets outside Germany.
< >"CUP" designates a cup car which is a special edition race car.
A total of 12,936 944 S models were produced from 1987 to 1988, with 8,815 being exported to the United States. In 1985, a prototype 944 S Cabriolet "Studie" built by Braun was powered by the 2.5 L 16-valve engine developing 185 hp, a forerunner of the later production 944 S and S2 Cabriolet models.
Around 14,071 944 S2s were made between 1989 and 1991, with 3,650 being exported to the United States.
In 1989, only 16 concept prototype 944 S2 Cabriolets were manufactured, making it one of the rarest Porsche cars ever produced.
Porsche began a race series for the top-of-the-line 944 Turbo in the mid-1980s. There were five championship series, one each in France, Germany, South Africa, Canada, and the United States, each with a different number of cars competing. The Turbo Cup cars developed for the series had substantial upgrades over their road-going counterparts: a larger KKK K26-8 turbocharger; a magnesium intake manifold and oil pan; a reinforced transmission, clutch, differential, and axles; removal of the A/C, power seats, leather upholstery, sun visors, power windows, power steering, rear wiper, headlight washers, fender liners, storage pockets, and rear trunk release; upgraded struts, shocks, springs, and suspension mounts; an adjustable ABS system; bigger brakes with racing pads; magnesium wheels; a transmission oil cooler; and a lightweight battery. This yielded weight savings of approximately and improved performance: the Turbo Cup cars had a 0– acceleration time of 5.3 seconds and a top speed of nearly . In total, 192 Turbo Cup cars were made.
The 944 was on "Car and Driver's" Ten Best list from 1983 through 1985, and the "Turbo" made the list for 1986.
In 1984, "Car and Driver" named the 944 the Best Handling Production Car in America. | https://en.wikipedia.org/wiki?curid=24375 |
Porsche 968
The Porsche 968 is a sports car manufactured by German automobile manufacturer Porsche AG from 1991 to 1995. It was the final evolution of a series of water-cooled, front-engine, rear-wheel-drive models begun almost 20 years earlier with the 924, taking over the entry-level position in the company lineup from the 944, with which it shared about 20% of its parts. The 968 was Porsche's last new front-engined vehicle before the introduction of the Cayenne SUV in 2003.
Porsche's 944 model debuted for the 1982 model year as an evolution of the 924 and was updated as the "944S" in 1987 and the "944S2" in 1989. Porsche was in a significant financial crisis at the time, with declining customer interest in its sports cars, especially in the US. The virtually unchanged design of the 944, which was derived from the 924, was showing its age, and sales of the model declined. Porsche therefore needed a new entry-level model. Shortly after the start of production of the S2 variant, Porsche engineers began working on another set of significant upgrades, as executives were planning a final "S3" variant of the 944 with a design language in line with the rest of the lineup, in order to save development costs. During the development phase, 80% of the 944's mechanical components were either significantly modified or completely replaced, leaving so little of the outgoing S2 model that Porsche management chose to introduce the variant as a new model, calling it the 968. In addition to the numerous mechanical upgrades, the new model also received significantly evolved styling both inside and out, with a more modern, streamlined look and more standard luxury amenities than the 944. To save production costs, production was moved from the Audi plant in Neckarsulm (where the 924 and 944 had been manufactured under contract to Porsche) to Porsche's own factory in Zuffenhausen.
The 968 was mainly a restyled evolution of the 944, with visible design links to its predecessor. Design work was done by Harm Lagaay, who had also designed the 924 and the 944. The front of the car largely resembled the top-of-the-line 928, sharing its exposed pop-up headlamps and integrated front bumper. This frontal design would eventually appear on the 911 (993) two years later.
The rear of the 968 was also redesigned, featuring fully coloured, rounded taillamps. Special bulbs were used in the taillamps, which illuminated a small area in amber when the turn signals were activated, or in white when the car was reversing. PORSCHE badging was fitted between the taillights, just below the model type number. The rear apron was integrated into the smoothed rear bumper.
While the exterior of the car was rounded and smoothed, the interior was largely unchanged and mostly shared with the preceding 944 S2 with the exception of switches and control knobs. The 968 also featured numerous small equipment and detail upgrades from the 944, including a Fuba roof-mounted antenna, updated single lens tail lamps, "Cup" style 16-inch alloy wheels, a wider selection of interior and exterior colours, a slightly updated "B" pillar and rear quarter window to accommodate adhesive installation to replace the older rubber gasket installation.
Like its predecessor, the 968 was offered in coupé and convertible body styles. The 968 was powered by an updated version of the 944's inline-four engine, now displacing 3.0 L with a 104 mm bore and an 88 mm stroke and rated at at 6,200 rpm and of torque at 4,100 rpm. Modifications to the engine included a higher 11.0:1 compression ratio; lighter crankshaft, crankcase, and pistons; and revised intake valves and intake manifold. Changes to the 968's powertrain also included the addition of Porsche's then-new VarioCam variable valve timing system, newly optimized induction and exhaust systems, a dual-mass flywheel, and updated engine management electronics. The 968's engine was the fourth-largest four-cylinder engine ever offered in a production car at that time. A new 6-speed manual transmission replaced the 944's old 5-speed, and Porsche's dual-mode 4-speed Tiptronic automatic became an available option. Both the VarioCam timing system and Tiptronic transmission were very recent developments for Porsche: the Tiptronic transmission had debuted only three years prior, on the 1989 Type 964 911, while the VarioCam timing system was first introduced on the 968 and would later become a feature of the Type 993 air-cooled six-cylinder engine.
Much of the 968's chassis was carried over from the 944 S2, which itself shared many components with the 944 Turbo (internally numbered "951") due to a lack of development funds at the time. Borrowed components included the Brembo-sourced four-piston brake calipers on all four wheels with ventilated brake rotors, ABS, aluminium semi-trailing arms, and aluminium front A-arms used in a MacPherson strut arrangement. The steel unibody structure was also very similar to that of the previous models. Porsche maintained that 80% of the car was new.
The 968 can attain a top speed of when equipped with the manual transmission and has a 0– acceleration time of 6.5 seconds.
For the 1993 model year, the 968 received minor changes which included a pollen filter to increase the cleanliness of the air being channeled through the air conditioner and the introduction of special packages. The seat package included heated driver and front passenger seats, the sound package included an additional amplifier in the coupe and two additional speakers installed at the rear in the convertible while the suspension package included larger 17 inch wheels and an improved braking system with cross-drilled brake discs.
From 1993 through 1995, Porsche offered a lighter-weight "Club Sport" version of the 968 designed for enthusiasts seeking increased track performance. Much of the 968's luxury-oriented equipment was removed or taken off the options list; less sound-deadening material was used, and power windows were replaced with crank-driven units, though upgraded stereo systems, A/C, and a sunroof remained optional as on the standard coupé and convertible models. In addition, Porsche installed manually adjustable lightweight Recaro racing seats rather than the standard power-operated leather buckets (also manufactured by Recaro), a revised suspension system optimised and lowered by 20 mm for possible track use, and 17-inch wheels (slightly wider, to accommodate wider tyres) rather than the 16-inch wheels found on the coupé, with 225 front and 255 rear tyres rather than 205 and 225 respectively. The four-spoke airbag steering wheel was replaced with a thicker-rimmed three-spoke sports steering wheel with no airbag, heated washer jets were replaced with non-heated ones, vanity covers in the engine bay were deleted, as was the rear wiper. The Club Sport has no rear seats, unlike the 2+2 coupé.
Club Sport models were only available in Grand Prix White, black, Speed yellow, Guards red, Riviera blue or Maritime blue exterior colours. Seat backs were colour-coded to the body. Club Sport decals were standard in either black, red or white but there was a 'delete' option.
All Club Sports had black interiors with the 944 S2 door cards. Due to the reduction in the number of electrical items, the wiring loom was reduced in complexity, which saved weight, and the battery was replaced with a smaller one, again reducing weight. With the no-frills approach reducing weight, and the suspension optimised, Porsche could focus media attention on the Club Sport variant's fast-road and track abilities. This helped to slightly bolster the flagging sales figures in the mid-1990s. The Club Sport variant received a 'Performance Car of the Year' award in 1993 from Performance Car magazine in the UK. Club Sport models were only officially available in the UK, Europe, Japan, and Australia, although "grey market" cars found their way elsewhere. The declared weight of the 968 CS is , ~ lighter than the regular 968. Acceleration from a standstill to takes 5.6 seconds and top speed is .
A UK-only version called the "968 Sport" was offered in 1994 and 1995. It was essentially a Club Sport model (produced on the same production line with similar chassis numbers) with power windows, an electric boot release, central locking, and cloth "comfort seats" (different from both the standard and the Club Sport seats). With the added electrics, the larger wiring loom was used. The Sport variant also regained the two rear seats, again in the cloth material specific to the Sport. At £29,975, the 968 Sport was priced £5,500 lower than the standard 968, but had most of the latter's desirable "luxuries" and consequently outsold it by a large margin (306 968 Sport models compared to 40 standard 968 coupés).
In 1993, Porsche Motorsports at Weissach briefly produced a turbocharged 968 Turbo S, a fairly odd naming choice for Porsche, which usually reserves the added "S" moniker for models tuned for more power than a "lesser" counterpart, such as with the 911 Turbo. The 968 Turbo S shared the same body and interior as the Club Sport and can be visually identified by the NACA bonnet hood scoops, adjustable rear wing, three-piece Speedline wheels, and deeper front spoiler. The car had its suspension lowered by and was lighter than the standard 968. The 968 Turbo S was powered by a 3.0 L engine with an 8-valve SOHC cylinder head (from the 944 Turbo S) and a 944 S2-style engine block. Tests conducted in 1993 returned a 0 to time of 4.7 seconds and a top speed of . The engine generated at 5,600 rpm with a maximum torque of at 3,000 rpm. Only 14 were produced in total, and only for sale in mainland Europe.
Between 1992 and 1994, Porsche Motorsports Research and Development built and provided a full "race" version (a stripped-out 968 Turbo S) for Porsche's customer race teams. The 968 Turbo RS was available in two variations: a version using the K27 turbocharger from the Turbo S, built to the German ADAC GT specification (with ballast added to bring the car up to the 1,350 kg minimum weight limit), and an international-spec version which used a KKK L41 turbocharger with the engine rated at and a reduced weight of 1,212 kg (2,672 lb). The interior of the Turbo RS features a single racing bucket seat with a six-point harness, along with the welded-in roll cage required for eligibility. Other modifications included a modified 6-speed manual transmission with altered gear ratios, a racing clutch, and racing suspension. Only 4 were ever produced, as privateer racing teams showed greater interest in the 911 Carrera RS 3.8 race car offered at the same time. These are the rarest 968s ever produced.
In the ADAC GT Cup, the Joest team achieved fourth place in the Avus race in 1993 with the Turbo RS driven by Manuel Reuter. In the BPR series, the car was driven to sixth place in the 4-hour race at Dijon in 1994, its best result in the series. The Seikel Motorsport team entered a 968 Turbo RS in the 1994 24 Hours of Le Mans, driven by John Nielsen, Thomas Bscher, and Lindsay Owen-Jones. After 84 laps, the team had to end the race prematurely following an accident.
The 968 was Porsche's last front-engine vehicle of any type before the introduction of the Cayenne SUV in 2003. Its discontinuation in 1995 due to poor sales coincided with that of the 928, Porsche's only other front-engine car at the time. The 968 was also the last Porsche sold with a four-cylinder engine prior to the introduction of the 718 Boxster in 2016.
While lacking the wider-ranging appeal of the 911, the 968 developed its own following, likely due to its unique combination of speed, practicality, and low production numbers.
Porsche 912
The Porsche 912 is a sports car by Porsche AG of Stuttgart, Germany, produced for the 1965 through 1969 model years. The 912 is an entry-level variant of the 911 and, like the 911, was offered in coupé and Targa body styles. The 912 is a nimble-handling compact 2+2 fitted with the air-cooled 1.6-liter flat-four from the last of the 356s, though slightly detuned to 102 SAE horsepower at 5,800 rpm. The 912 is capable of up to fuel economy. This combination is possible because of the high-efficiency boxer engine, low drag, and low weight. Priced at $4,700, the 912 initially outsold the 911, boosting the manufacturer's total production until the success of the 911 was assured. More than 32,000 912s were built from April 1965 to July 1969.
The 4-cylinder 914 superseded the 912 as Porsche's entry-level model for the 1970 through 1975 model years. In 1976, the 912 enjoyed a one-year revival with the U.S.-only 912E, powered by the 914-derived 2.0-liter VW "Type 4" engine with Bosch L-Jetronic fuel injection delivering 90 SAE horsepower at 4,900 rpm. Just 2,092 912E coupés were built from May 1975 to July 1976.
In the early 1960s, Porsche was planning to discontinue the Type 356, which would leave the newly introduced Type 911 as its only product. Concerned that the considerable price increase of a 911 with its flat-opposed six-cylinder powerplant over the 356 would cost the company sales and narrow brand appeal, in 1963 Porsche executives decided to introduce a new four-cylinder entry-level model. Like the 911 (original internal factory designation "901"), the four-cylinder 912 was originally known at Zuffenhausen by a number with a zero in the middle, but the "902" designation was never used publicly. (After 1968, "912" was reused as the project number for the flat-opposed 12-cylinder engine developed for the Porsche 917 racing car.)
In 1963, Porsche assigned Dan Schwartz, later Chief Departmental Manager for Development, Mechanics, a project to oversee design and construction of a new horizontally-opposed four-cylinder engine for the 902, utilizing components from the new 901 six-cylinder engine, that would produce higher performance than their 356SC engine, and be less costly and complex than their Carrera 2 engine. Another option explored by Claus von Rücker was to increase displacement of the 356 Type 616 engine to 1.8-liters, add Kugelfischer fuel injection, and modify both valve and cooling systems. Considering performance, cost, and scheduling, Porsche discontinued both of these design projects, and instead developed a third option, to tailor the 1.6-liter Type 616 engine to the 902.
Before 911 production commenced in 1964, the Porsche Vehicle Research Department had set aside chassis numbers 13328, 13329, 13330, 13352, and 13386 through 13397 for research testing of the 902; research vehicle serial number 13394 is the oldest 902 known to exist today. In production form, the Type 912 combined a 911 chassis/bodyshell with the 1.6 L, four-cylinder, push-rod Type 616/36 engine, based upon the Type 616/16 engine used in the Type 356SC of 1964–1965. With a lower compression ratio and new Solex carburetors, the Type 616/36 engine produced five fewer horsepower than the 616/16, but delivered about the same maximum torque at 3,500 rpm versus 4,200 rpm for the 616/16. Compared to the 911, the resulting production Type 912 demonstrated superior weight distribution, handling, and range. To bring 912 pricing close to the 356's, Porsche also deleted some features standard on the 911. As production of the 356 concluded in 1965, Porsche officially began production of the 912 coupé on April 5, 1965. Styling, performance, quality construction, reliability, and price made the 912 a very attractive buy to both new and old customers, and it substantially outsold the 911 during the first few years of production. Porsche produced nearly 30,000 912 coupé units and about 2,500 912 Targa body style units (Porsche's patented variation of a cabriolet) during a five-year manufacturing run.
Production of the Targa, complete with removable roof and heavy transparent plastic rear windows openable with a zipper (later called 'Version I' by Porsche and the 'soft-window Targa' by enthusiasts), commenced in December 1966 as a 1967 model.
In January 1968, Porsche also made available a Targa 'Version II' option ('hard window Targa') with fixed glass rear window, transforming the Targa into a coupé with removable roof.
The 912 was also made in a special version for the German autobahn police (Polizei); the 100,000th Porsche car was a 912 Targa for the police of Baden-Württemberg, the home state of Porsche. In its April 1967 edition, the Porsche factory's Christophorus Magazine noted: "On 21 December 1966, Porsche celebrated a particularly proud anniversary. The 100,000th Porsche, a 912 Targa outfitted for the police, was delivered." Porsche executives decided that continuing 912 production after the 1969 model year would not be viable, due to both internal and external factors. First, production facilities used for the 912 were reallocated to the new 914-6, a six-cylinder high-performance version of the Porsche 914, a Porsche–Volkswagen joint-effort vehicle. Second, the 911 platform had returned to Porsche's traditional three performance-level ladder, comprising the most powerful 911S, the fuel-injected 911E, and the base-model 911T, with pricing largely in line with market expectations. Third, more stringent United States engine emission control regulations also had a bearing on the decision; Ferry Porsche stated, "It would have taken some trouble to prepare the 912 for the new exhaust rules, and with the arrival of the 914 we would have had three different engines to keep current. That was too many."
After a six-year absence, the 912 was re-introduced to North America for the 1976 model year as the 912E (internal factory designation 923) to fill the entry-level position left vacant by the discontinuation of the 914, while the new 924 – another Porsche-Volkswagen joint effort vehicle and the 914's official replacement – was being finalized and put into production. During the production run of May 1975 to July 1976, Porsche manufactured 2,092 of the 912E (E=Einspritzung), targeted only to the US market. By comparison, 10,677 (4,784 US) 911's were built for the 1976 model year. At $10,845 MSRP, the 912E was $3,000 less than the 911S.
The VW "Type 4" engine was originally made for the VW 411/412 (1.7 liters). The 912E uses a Porsche-designed revision of the engine (2.0 liters) with a longer 71mm stroke crankshaft, new rod bearings and new pistons to increase the cylinder bore to 94mm. The 912E's Bosch L-Jetronic / Air Flow Controlled system was later adapted for the 911. The cost for a good rebuild of the 911 flat six is $10,000 while the cost of the 912E flat four rebuild is less than half that. The 912E is an excellent long distance touring car with its 20+ gallon fuel tank, 30 mpg and 600-mile range.
The 912E has the same chassis as the 911 and therefore handles much like the 911, but with less power and less weight behind the rear axle, the 912E is more forgiving and less prone to sudden oversteer. The E was the only 912 offered with a corrosion-resistant galvanized chassis, and it is the most comfortable version of the 912. The interior is the same as the 911's, though some pieces were extra-cost options, including two of the five gauges. 14-inch Fuchs alloy wheels were a popular option, and "cookie-cutter" alloy wheels were also available; 912Es with the standard 15-inch steel wheels are rarely seen. Other options included an electric sunroof, the 923/02 anti-slip differential, an electric antenna (located on the passenger-side front fender), power door mirrors, power windows, headlight washers, and H1 headlamps. Air conditioning was a popular dealer-installed option. As a stopgap, the 912E was the single instance of planned obsolescence in Porsche history. Only 2,092 were built, but this, plus its single-year status and the desirable qualities inherited from contemporary 911s, has since made the 912E one of the more collectible four-cylinder Porsches.
Based on research by 912 Registry member Aric Gless, over half of the 2,092 cars are still in use. The Prototyp Museum collection in Hamburg, Germany, includes a 912E pre-series vehicle constructed using 911 chassis no. 911 520 1617 and a four-cylinder VW-Porsche 90 hp 2.0 L Type 4 engine similar to that of the late-model 2.0 L 914/4.
"Road & Track" said, “The 912E will obviously find favor with those who prefer a slightly more practical and tractable Porsche. It’s a car with almost all the sporting virtues of the more expensive 911S, yet its simpler pushrod 4-cyl. engine should make for better fuel economy and less expensive maintenance than the 911’s six” "The fittings are simpler in this model although in terms of materials, trim and finishing the 912E is of high Porsche quality. "The 912E is comfortable where the Carrera is harsh, rational where the Carrera is excessive.” R&T’s 11.3-second 0-60 mph time and 115-mph top speed looked good against the observed 23.0-mpg economy."
Sold to the public for street use, the Porsche 912 has also proven successful as a race car, from its production years to current vintage events. In 1967, the 912 contributed to Porsche factory rally history when independent Polish driver Sobiesław Zasada drove a factory-loaned 912, bearing Polish plate 6177 KR, to capture the European Rally Championship for Group 1 series touring cars. In the 1967 Rally of Poland, the second-oldest rally in the world and one of the oldest motorsport events in the world, Zasada drove his 912, race No. 47, to finish first overall out of a starting field of 50 entries.
As a vintage rally car, on January 29, 2012, Hayden Burvill, Alastair Caldwell, and their #35 1968 Porsche 912 finished first in class and seventh overall in the 2012 London to Cape Town World Cup Rally, a 14-country, three-continent, 14,000-kilometre, 26-driving-day event.
Pope Victor I
Pope Victor I (died 199) was the bishop of Rome in the late second century (189–199 A.D.). He was of Berber origin. The dates of his tenure are uncertain, but one source states he became pope in 189 and gives the year of his death as 199. He was the first bishop of Rome born in the Roman Province of Africa—probably in Leptis Magna (or Tripolitania). He was later considered a saint. His feast day was celebrated on 28 July as "St Victor I, Pope and Martyr".
The primary sources vary over the dates assigned to Victor's episcopate, but indicate it included the last decade of the second century. Eusebius puts his accession in the tenth year of Commodus (i.e. A.D. 189), which is accepted by Lipsius as the correct date. Jerome's version of the Chronicle puts his accession in the reign of Pertinax, or the first year of Septimius Severus (i.e. 193), while the Armenian version puts it in the seventh year of Commodus (186). The "Liber Pontificalis" dates his accession to the consulate of Commodus and Glabrio (i.e. 186), while the "Liberian Catalogue", a surviving copy of the source the "Liber Pontificalis" drew upon for its chronology, is damaged at this point. Concerning the duration of his episcopate, Eusebius does not state it directly in his "History", but the Armenian version of his Chronicle gives it as 12 years. The Liberian Catalogue gives his episcopate a length of nine years, two months, and ten days; the "Liber Pontificalis" states it was ten years and the same number of months and days; and the Felician Catalogue something over ten. Finally, Eusebius in his "History" (5.28) states that Zephyrinus succeeded him "about the ninth year of Severus" (201), while the "Liber Pontificalis" dates the succession to the consulate of Laternus and Rufinus (197). Lipsius, considering Victor in connection with his successors, concludes that he held office between nine and ten years, and therefore gives his dates as 189–198 or 199.
According to an anonymous writer quoted by Eusebius, Victor excommunicated Theodotus of Byzantium for teaching that Christ was a mere man. However, he is best known for his role in the Quartodeciman controversy. Prior to his elevation, a difference in dating the celebration of the Christian Passover/Easter between Rome and the bishops of Asia Minor had been tolerated by both the Roman and Eastern churches. The churches in Asia Minor celebrated it on the 14th of the Jewish month of Nisan, the day before Jewish Passover, regardless of what day of the week it fell on, as the Crucifixion had occurred on the Friday before Passover, justifying this as the custom they had learned from the apostles; for this the Latins called them "Quartodecimans". Synods were held on the subject in various parts—in Judea under Theophilus of Caesarea and Narcissus of Jerusalem, in Pontus under Palmas, in Gaul under Irenaeus, in Corinth under its bishop, Bachillus, at Osrhoene in Mesopotamia, and elsewhere—all of which disapproved of this practice and consequently issued synodical letters declaring that "on the Lord's Day only the mystery of the resurrection of the Lord from the dead was accomplished, and that on that day only we keep the close of the paschal fast" (Eusebius H. E. v. 23). Despite this disapproval, the general feeling was that this divergent tradition was not sufficient grounds for excommunication. Victor alone was intolerant of this difference, and severed ties with these ancient churches, whose bishops included such luminaries as Polycrates of Ephesus; in response he was rebuked by Irenaeus and others, according to Eusebius.
Pope Victor II
Pope Victor II (c. 1018 – 28 July 1057), born Gebhard of Dollnstein-Hirschberg, was the bishop of Rome and ruler of the Papal States from 13 April 1055 until his death in 1057. Victor II was one of a series of German-born popes who led Gregorian Reform.
Gebhard was a native of the Kingdom of Germany in the Holy Roman Empire. He was a son of the Swabian Count Hartwig of Calw and a kinsman of Emperor Henry III. At the suggestion of the emperor's uncle, Gebhard, bishop of Ratisbon, the 24-year-old Gebhard was appointed bishop of Eichstätt. In this position, he supported the emperor's interests and eventually became one of his closest advisors.
After the death of Pope Leo IX, a Roman delegation headed by Hildebrand, later Pope Gregory VII, travelled to Mainz and asked the emperor to nominate Gebhard as successor. At a court Diet held at Ratisbon in March, 1055, Gebhard accepted the papacy, provided that the emperor restore to the Apostolic See all the possessions that had been taken from it. When the emperor agreed, Gebhard, taking the name Victor II, moved to Rome and was enthroned in St. Peter's Basilica on 13 April 1055.
Victor excommunicated both Count Ramon Berenguer I of Barcelona and Countess Almodis of Limoges for adultery at the behest of Ermesinde of Carcassonne in 1055.
In June 1055, Victor met the emperor at Florence and held a council, which reinforced Pope Leo IX's condemnation of clerical marriage, simony, and the loss of the church's properties. In the following year, he was summoned to the emperor's side, and was with Henry III when he died at Bodfeld in the Harz on 5 October 1056. As guardian of Henry III's infant son Henry IV and adviser of Empress Agnes, Henry IV's mother, Victor wielded enormous power, which he used to maintain peace throughout the empire and to strengthen the papacy against the aggressions of the barons. During the rivalry between Archbishop Anno II of Cologne and other senior clergymen on one side and the empress on the other, Victor backed Agnes and her supporters. Many of her close followers were promoted, men like Bishop Henry II of Augsburg, who would later become Emperor Henry's nominal regent, and several German princes were given high court and church offices.
Victor died shortly after his return to Italy, at Arezzo, on 28 July 1057. His death marked an end to the close relationship shared between the Salian dynasty and the papacy. Victor's retinue wished to bring his remains to the cathedral at Eichstätt for burial. Before they reached the city, however, the remains were seized by some citizens of Ravenna and buried there in the Church of Santa Maria Rotonda, the burial place of Theodoric the Great.
Pope Victor III
Pope Victor III ( 1026 – 16 September 1087), born Dauferio, was the bishop of Rome and ruler of the Papal States from 24 May 1086 to his death. He was the successor of Pope Gregory VII, yet his pontificate is far less prominent in history than his time as Desiderius, the great abbot of Montecassino.
His failing health made him reluctant to accept his pontifical election, and his health was so poor that he fell ill during his coronation. The only literary work of his that remains is his "Dialogues" on the miracles performed by Saint Benedict of Nursia and other saints at Montecassino.
Pope Leo XIII beatified him on 23 July 1887.
Dauferio was born in 1026. He was the only child of Prince Landulf V of Benevento, one of the last Lombard rulers in Italy. After his father died in battle with the invading Normans in 1047, Dauferio fled from an arranged marriage and, though brought back by force, eventually fled again. He went to Cava de' Tirreni, where he obtained permission to enter the monastery of S. Sophia at Benevento, where he changed his name from Dauferius to Desiderius. It was a decision that his mother vehemently opposed, as he was the sole heir.
The life at S. Sophia was not strict enough for the young monk, who betook himself first to the island monastery of Tremite San Nicolo in the Adriatic and in 1053 to the hermits at Majella in the Abruzzi. About this time he was brought to the notice of St. Leo IX, and it is probable that the pope employed him at Benevento to negotiate peace with the Normans after the fatal battle of Civitate.
Somewhat later Desiderius attached himself to the court of Pope Victor II at Florence. There he met two monks of the renowned Benedictine monastery of Monte Cassino, with whom he returned in 1055. He joined the community and was shortly afterwards appointed superior of the dependent house at Capua. In 1057 Pope Stephen IX, who had retained the abbacy of Monte Cassino, came to visit and at Christmas, believing himself to be dying, ordered the monks to elect a new abbot. Their choice fell on Desiderius. The pope recovered, and, desiring to retain the abbacy during his lifetime, appointed the abbot-designate his legate for Constantinople. It was at Bari, when about to sail for the East, that the news of the pope's death reached Desiderius. Having obtained a safe-conduct from Robert Guiscard, the Norman Count (later Duke) of Apulia, he returned to his monastery and was duly installed by Cardinal Humbert on Easter Day 1058.
Pope Nicholas II elevated him to the cardinalate as Cardinal-Deacon of Santi Sergio e Bacco on 6 March 1058. He opted to become Cardinal-Priest of Santa Cecilia in 1059.
Desiderius rebuilt the church and conventual buildings, perfected the products of the "scriptorium" and re-established monastic discipline, so that there were 200 monks in the monastery in his day. On 1 October 1071, the new Basilica of Monte Cassino was consecrated by Pope Alexander II. Desiderius' reputation brought gifts and exemptions to the abbey. The money was spent on church ornaments, including a great golden altar front from Constantinople adorned with gems and enamels and "nearly all the church ornaments of Victor II, which had been pawned here and there throughout the city". Peter the Deacon gives a list of some seventy books Desiderius had copied at Monte Cassino, including works of Saint Augustine, Saint Ambrose, Saint Bede, Saint Basil, Saint Jerome, Saint Gregory of Nazianzus and Cassian, the histories of Josephus, Paul Warnfrid, Jordanes and Saint Gregory of Tours, the "Institutes" and "Novels" of Justinian, the works of Terence, Virgil and Seneca, Cicero's "De natura deorum", and Ovid's "Fasti".
Desiderius had been appointed papal vicar for Campania, Apulia, Calabria and the Principality of Beneventum with special powers for the reform of monasteries. So great was his reputation with the Holy See that he "...was allowed by the Roman Pontiff to appoint Bishops and Abbots from among his Benedictine brethren in whatever churches or monasteries he desired, of those that had lost their patron".
Within two years of the consecration of the Cassinese Basilica, Alexander II died and was succeeded by Hildebrand as Pope Gregory VII. Desiderius was able to call forth the help of the Normans of southern Italy repeatedly in favour of the Holy See. Already in 1059 he had persuaded Robert Guiscard and Richard of Capua to become vassals of St. Peter for their newly conquered territories: now Gregory VII immediately after his election sent for him to give an account of the state of Norman Italy and entrusted him with the negotiation of an interview with Robert Guiscard on 2 August 1073, at Benevento. In 1074 and 1075 he acted as intermediary, probably as Gregory's agent, between the Norman princes themselves, and even when the latter were at open war with the pope, they still maintained the best relations with Monte Cassino. At the end of 1080 Desiderius obtained Norman troops for Gregory. In 1082 he visited the Italian king and future Holy Roman Emperor Henry IV at Albano, while the troops of the Imperialist antipope were harassing the pope from Tivoli. In 1083 the peace-loving abbot joined Hugh of Cluny in an attempt to reconcile pope and emperor, and his proceedings seem to have aroused some suspicion in Gregory's entourage. In 1084, when Rome was in Henry's hands and the pope besieged in Castel Sant'Angelo, Desiderius announced the approach of Guiscard's army to both emperor and pope.
Though certainly a strong partisan of the Hildebrandine reforms, Desiderius belonged to the moderate party and could not always see eye-to-eye with Pope Gregory VII in his most intransigent proceedings. Yet when the latter lay dying at Salerno on 25 May 1085, the Abbot of Monte Cassino was one of those whom he recommended to the cardinals of southern Italy as fittest to succeed him. The Roman people had expelled Clement III from the city, and hither Desiderius hastened to consult with the cardinals on the approaching election. Finding, however, that they were bent on forcing the papal dignity upon him, he fled to Monte Cassino, where he busied himself in exhorting the Normans and Lombards to rally to the support of the Holy See. When autumn came, Desiderius accompanied the Norman army on its march to Rome. However, when he became aware of the plot between the cardinals and the Norman princes to force the papal tiara on him, he would not enter Rome unless they swore to abandon their design. They refused to do that, and the election was postponed. At about Easter the bishops and cardinals assembled at Rome summoned Desiderius and the cardinals who were with him at Monte Cassino to come to Rome to treat concerning the election.
On 23 May a great meeting was held in the deaconry of St. Lucy, and Desiderius was again importuned to accept the papacy but persisted in his refusal, threatening to return to his monastery in case of violence. On the next day, the feast of Pentecost, the same scene was repeated very early in the morning. The Roman consul Cencius now suggested the election of Odo, Cardinal-Bishop of Ostia (afterwards pope Urban II), but this was rejected by some of the cardinals on the grounds that the translation of a bishop was contrary to ecclesiastical law.
Cardinal Desiderio, O.S.B., abbot of Montecassino, was elected successor to Gregory VII on May 24, 1086 in the deaconry of S. Lucia in Septisolis and took the name Victor III. Four days later, pope and cardinals had to flee from Rome before the imperial prefect of the Eternal City, and at Terracina, in spite of all protests, Victor laid aside the papal insignia and once more retired to Monte Cassino, where he remained nearly a whole year. In the middle of Lent 1087, the pope-elect assisted at a council of cardinals and bishops held at Capua as "Papal vicar of those parts" (letter of Hugh of Lyons) together with the Norman princes, Cencius the Consul and the Roman nobles. Here, Victor finally yielded and "by the assumption of the cross and purple confirmed the past election". How much his obstinacy had irritated some of the prelates is evidenced in the letter of Hugh of Lyons preserved by Hugh of Flavigny.
Under pressure from Prince Jordan I of Capua, to whom he had also rendered important service, he was elected on 24 May 1086, taking the throne name of Victor III, but his consecration did not take place until 9 May 1087 owing to the presence of the Antipope Clement III in Rome. After celebrating Easter of 1087 in his monastery, Victor proceeded to Rome, and when the Normans had driven the soldiers of the Antipope Clement III (Guibert of Ravenna) out of St. Peter's, he was consecrated and enthroned on 9 May 1087. He only remained eight days in Rome and then returned to Monte Cassino, though with the help of Matilda and Jordan, he took back the Vatican Hill. Before May was out he was once more in Rome in answer to a summons from the countess Matilda of Tuscany, whose troops held the Leonine City and Trastevere, but when at the end of June the antipope once more gained possession of St. Peter's, Victor again withdrew at once to his Monte Cassino abbey. In August a council or synod of some importance was held at Benevento, which renewed the excommunication of the antipope Clement III and the condemnation of lay investiture, proclaimed a kind of crusade against the Saracens in northern Africa and anathematised Hugh of Lyons and Richard, Abbot of Marseilles.
When the council had lasted three days, Victor became seriously ill and retired to Monte Cassino to die. He had himself carried into the chapter-house, issued various decrees for the benefit of the abbey, appointed with the consent of the monks the prior, Cardinal Oderisius, to succeed him in the abbacy, just as he himself had been appointed by Stephen IX, and proposed Odo of Ostia to the assembled cardinals and bishops as the next pope. He died on 16 September 1087 and was buried in the tomb he had prepared for himself in the abbey's chapter-house. Odo was duly elected his successor as Pope Urban II.
Pope Victor's only surviving literary work, the "Dialogues", concerns the miracles wrought by St. Benedict and other saints at Monte Cassino. There is also a letter to the bishops of Sardinia, an island brought under Pisan and Genoese control around 1050, to which he had sent monks while still abbot of Monte Cassino.
In his "De Viris Illustribus Casinensibus", Peter the Deacon ascribes to him the composition of a "Cantus ad B. Maurum" and letters to King Philip I of France and to Hugh of Cluny, which no longer exist.
The cult of Blessed Victor III seems to have begun not later than the pontificate of Pope Anastasius IV, about six decades after his death (Acta Sanctorum, Loc. cit.). In 1515, Victor III's body was relocated to the main abbey church in Monte Cassino, with many pilgrims visiting his tomb. In 1727 the abbot of Monte Cassino obtained from Pope Benedict XIII permission to keep his feast (Tosti, I, 393). Pope Leo XIII beatified Victor III, and his body was once again moved, to the Chapel of St. Victor, in 1887 when he was beatified.
During World War II, his body was removed and placed in Rome for safekeeping. The main abbey at Monte Cassino was destroyed in February 1944 by US bombing. Victor's body was moved back to the rebuilt abbey in 1963.
Political science
Political science, occasionally called politology, is a social science which deals with systems of governance, and the analysis of political activities, political thought, associated constitutions and political behavior.
Political science comprises numerous subfields, including comparative politics, political economy, international relations, political theory, public administration, public policy, and political methodology. Furthermore, political science is related to, and draws upon, the fields of economics, law, sociology, history, philosophy, geography, psychology/psychiatry, anthropology and neurosciences.
Comparative politics is the comparative study and teaching of different types of constitutions, political actors, legislatures and associated fields, all of them from an intrastate perspective. International relations deals with the interaction between nation-states as well as intergovernmental and transnational organizations. Political theory is more concerned with contributions of various classical and contemporary thinkers and philosophers.
Political science is methodologically diverse and appropriates many methods originating in psychology, social research and cognitive neuroscience. Approaches include positivism, interpretivism, rational choice theory, behavioralism, structuralism, post-structuralism, realism, institutionalism, and pluralism. Political science, as one of the social sciences, uses methods and techniques that relate to the kinds of inquiries sought: primary sources such as historical documents and official records, secondary sources such as scholarly journal articles, survey research, statistical analysis, case studies, experimental research, and model building.
Political science is a social study concerning the allocation and transfer of power in decision making, the roles and systems of governance including governments and international organizations, political behavior and public policies. They measure the success of governance and specific policies by examining many factors, including stability, justice, material wealth, peace and public health. Some political scientists seek to advance positive (attempt to describe how things are, as opposed to how they should be) theses by analysing politics. Others advance normative theses, by making specific policy recommendations.
Political scientists provide the frameworks from which journalists, special interest groups, politicians, and the electorate analyse issues. According to Chaturvedy,
In the United States, political scientists known as "Americanists" look at a variety of data including constitutional development, elections, public opinion, and public policy such as Social Security reform, foreign policy, US Congressional committees, and the US Supreme Court — to name only a few issues.
Because political science is essentially a study of human behavior, in all aspects of politics, observations in controlled environments are often challenging to reproduce or duplicate, though experimental methods are increasingly common (see experimental political science). Citing this difficulty, former American Political Science Association President Lawrence Lowell once said "We are limited by the impossibility of experiment. Politics is an observational, not an experimental science." Because of this, political scientists have historically observed political elites, institutions, and individual or group behavior in order to identify patterns, draw generalizations, and build theories of politics.
Like all social sciences, political science faces the difficulty of observing human actors that can only be partially observed and who have the capacity for making conscious choices unlike other subjects such as non-human organisms in biology or inanimate objects as in physics. Despite the complexities, contemporary political science has progressed by adopting a variety of methods and theoretical approaches to understanding politics and methodological pluralism is a defining feature of contemporary political science.
The advent of political science as a university discipline was marked by the creation of university departments and chairs with the title of political science arising in the late 19th century. In fact, the designation "political scientist" is typically for those with a doctorate in the field, but can also apply to those with a master's in the subject. Integrating political studies of the past into a unified discipline is ongoing, and the history of political science has provided a rich field for the growth of both normative and positive political science, with each part of the discipline sharing some historical predecessors. The American Political Science Association and the "American Political Science Review" were founded in 1903 and 1906, respectively, in an effort to distinguish the study of politics from economics and other social phenomena.
In the 1950s and the 1960s, a behavioral revolution stressing the systematic and rigorously scientific study of individual and group behavior swept the discipline. A focus on studying political behavior, rather than institutions or interpretation of legal texts, characterized early behavioral political science, including work by Robert Dahl, Philip Converse, and in the collaboration between sociologist Paul Lazarsfeld and public opinion scholar Bernard Berelson.
The late 1960s and early 1970s witnessed a take off in the use of deductive, game theoretic formal modelling techniques aimed at generating a more analytical corpus of knowledge in the discipline. This period saw a surge of research that borrowed theory and methods from economics to study political institutions, such as the United States Congress, as well as political behavior, such as voting. William H. Riker and his colleagues and students at the University of Rochester were the main proponents of this shift.
Despite considerable research progress in the discipline based on all the kinds of scholarship discussed above, it has been observed that progress toward systematic theory has been modest and uneven.
The theory of political transitions, and methods for analysing and anticipating crises, form an important part of political science. Several general indicators of crises and methods for anticipating critical transitions have been proposed. Among them, a statistical indicator of crisis, the simultaneous increase of variance and correlations in large groups, was proposed for crisis anticipation and may be successfully used in various areas. Its applicability for early diagnosis of political crises was demonstrated by the analysis of the prolonged stress period preceding the 2014 Ukrainian economic and political crisis: during the pre-crisis years there was a simultaneous increase in the total correlation between the 19 major public fears in Ukrainian society (by about 64%) and in their statistical dispersion (by 29%). A feature shared by certain major revolutions is that they were not predicted. A theory of the apparent inevitability of crises and revolutions has also been developed.
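The variance-and-correlation indicator described above can be sketched in a few lines. This is not the cited study's code; it is a minimal illustration under assumed conventions, with a hypothetical `crisis_indicator` helper and synthetic data standing in for real survey series of public fears.

```python
import numpy as np

def crisis_indicator(series, window):
    """Sliding-window early-warning statistics: the mean variance of
    each indicator and the mean absolute pairwise correlation between
    indicators.  A simultaneous rise in both has been proposed as a
    precursor of critical transitions.

    series : (T, N) array of N indicators observed over T time steps.
    Returns two arrays of length T - window + 1.
    """
    T, N = series.shape
    variances, correlations = [], []
    for t in range(T - window + 1):
        chunk = series[t:t + window]                 # (window, N)
        variances.append(chunk.var(axis=0).mean())   # mean dispersion
        corr = np.corrcoef(chunk, rowvar=False)      # (N, N) matrix
        off_diag = np.abs(corr[~np.eye(N, dtype=bool)])
        correlations.append(off_diag.mean())         # mean |correlation|
    return np.array(variances), np.array(correlations)

# Synthetic demo: independent noise first, then a common stress factor
# that raises both dispersion and cross-correlation at once.
rng = np.random.default_rng(0)
calm = rng.normal(0, 1.0, size=(100, 5))
stressed = rng.normal(0, 1.0, size=(100, 5)) + rng.normal(0, 2.0, size=(100, 1))
data = np.vstack([calm, stressed])

v, c = crisis_indicator(data, window=50)
print(f"variance: {v[0]:.2f} -> {v[-1]:.2f}, correlation: {c[0]:.2f} -> {c[-1]:.2f}")
```

In the stressed regime both statistics rise together, which is exactly the joint signal the method looks for; either one rising alone is treated as weaker evidence.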
In the Soviet Union, political studies were carried out under the guise of other disciplines such as theory of state and law, area studies, international relations, studies of the labor movement, "critique of bourgeois theories", etc. Soviet scholars had been represented at the International Political Science Association (IPSA) since 1955 (from 1960 by the Soviet Association of Political and State Studies).
In 1979, the 11th World Congress of IPSA took place in Moscow. Until the late years of the Soviet Union, political science as a field was subjected to tight control by the Communist Party of the Soviet Union and was regarded with distrust. Anti-communists accused political scientists of being "false" scientists and of having served the old regime.
After the fall of the Soviet Union, two of the major institutions dealing with political science, the Institute of Contemporary Social Theories and the Institute of International Affairs, were disbanded, and most of their members were left without jobs. These institutes were victims of the first wave of anticommunist opinion and ideological attacks. Today, the Russian Political Science Association unites professional political scientists from all around Russia.
In 2000, the Perestroika Movement in political science was introduced as a reaction against what supporters of the movement called the mathematicization of political science. Those who identified with the movement argued for a plurality of methodologies and approaches in political science and for more relevance of the discipline to those outside of it.
Some evolutionary psychology theories argue that humans have evolved a highly developed set of psychological mechanisms for dealing with politics. However, these mechanisms evolved for dealing with the small group politics that characterized the ancestral environment and not the much larger political structures in today's world. This is argued to explain many important features and systematic cognitive biases of current politics.
Political science, possibly like the social sciences as a whole, "as a discipline lives on the fault line between the 'two cultures' in the academy, the sciences and the humanities." Thus, in some American colleges where there is no separate School or College of Arts and Sciences per se, political science may be a separate department housed as part of a division or school of Humanities or Liberal Arts. Whereas classical political philosophy is primarily defined by a concern for Hellenic and Enlightenment thought, political scientists are also marked by a great concern for "modernity" and the contemporary nation state, along with the study of classical thought, and as such share a greater deal of terminology with sociologists (e.g. structure and agency).
Most United States colleges and universities offer B.A. programs in political science. M.A. or M.A.T. and Ph.D. or Ed.D. programs are common at larger universities. The term "political science" is more popular in North America than elsewhere; other institutions, especially those outside the United States, see political science as part of a broader discipline of "political studies," "politics," or "government." While "political science" implies use of the scientific method, "political studies" implies a broader approach, although the naming of degree courses does not necessarily reflect their content. Separate degree granting programs in international relations and public policy are not uncommon at both the undergraduate and graduate levels. Master's level programs in political science are common when political scientists engage in public administration.
The national honor society for college and university students of government and politics in the United States is Pi Sigma Alpha.
Most political scientists work broadly in one or more of the following five areas:
Some political science departments also classify methodology as well as scholarship on the domestic politics of a particular country as distinct fields. In the United States, American politics is often treated as a separate subfield.
In contrast to this traditional classification, some academic departments organize scholarship into thematic categories, including political philosophy, political behavior (including public opinion, collective action, and identity), and political institutions (including legislatures and international organizations). Political science conferences and journals often emphasize scholarship in more specific categories. The American Political Science Association, for example, has 42 organized sections that address various methods and topics of political inquiry.
Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency. In both the public and private sectors, stakeholders often want to know whether the programs they are funding, implementing, voting for, receiving or objecting to are producing the intended effect. While program evaluation first focuses around this definition, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, if there are unintended outcomes, and whether the program goals are appropriate and useful.
Policy analysis is a technique used in public administration to enable civil servants, activists, and others to examine and evaluate the available options to implement the goals of laws and elected officials.
As a social science, contemporary political science started to take shape in the latter half of the 19th century. At that time it began to separate itself from political philosophy, which traces its roots back to the works of Aristotle and Plato, written nearly 2,500 years ago. The term "political science" was not always distinguished from political philosophy, and the modern discipline has a clear set of antecedents including also moral philosophy, political economy, political theology, history, and other fields concerned with normative determinations of what ought to be and with deducing the characteristics and functions of the ideal state.
Public relations
Public relations (PR) is the practice of deliberately managing the release and spread of information between an individual or an organization (such as a business, government agency, or a nonprofit organization) and the public. Public relations (PR) and publicity differ in that PR is controlled internally, whereas publicity is contributed by external parties and is not controlled. Public relations may include an organization or individual gaining exposure to their audiences using topics of public interest and news items that do not require direct payment. This differentiates it from advertising as a form of marketing communications. Public relations aims to create or obtain coverage for clients for free, also known as 'earned media', rather than paying for marketing or advertising. But in the 2010s, advertising is also a part of broader PR activities.
An example of good public relations would be generating an article featuring a client, rather than paying for the client to be advertised next to the article. The aim of public relations is to inform the public, prospective customers, investors, partners, employees, and other stakeholders, and ultimately persuade them to maintain a positive or favorable view about the organization, its leadership, products, or political decisions. Public relations professionals typically work for PR and marketing firms, businesses and companies, government, and public officials as public information officers and nongovernmental organizations, and nonprofit organizations. Jobs central to public relations include account coordinator, account executive, account supervisor, and media relations manager.
Public relations specialists establish and maintain relationships with an organization's target audience, the media, relevant trade media, and other opinion leaders. Common responsibilities include designing communications campaigns, writing press releases and other content for news, working with the press, arranging interviews for company spokespeople, writing speeches for company leaders, acting as an organisation's spokesperson, preparing clients for press conferences, media interviews and speeches, writing website and social media content, managing company reputation (crisis management), managing internal communications, and marketing activities like brand awareness and event management. Success in the field of public relations requires a deep understanding of the interests and concerns of each of the company's many stakeholders. The public relations professional must know how to effectively address those concerns using the most powerful tool of the public relations trade, which is publicity.
Ivy Lee, the man who turned around the Rockefeller name and image, and his friend, Edward Louis Bernays, established the first definition of public relations in the early 1900s as follows: "a management function, which tabulates public attitudes, defines the policies, procedures and interests of an organization... followed by executing a program of action to earn public understanding and acceptance." However, when Lee was later asked about his role in a hearing with the United Transit Commission, he said "I have never been able to find a satisfactory phrase to describe what I do." In 1948, historian Eric Goldman noted that the definition of public relations in Webster's would be "disputed by both practitioners and critics in the field."
According to Bernays, the public relations counsel is the agent working with both modern media of communications and group formations of society in order to provide ideas to the public's consciousness. Furthermore, he is also concerned with ideologies and courses of actions as well as material goods and services and public utilities and industrial associations and large trade groups for which it secures popular support.
In August 1978, the World Assembly of Public Relations Associations defined the field as "the art and social science of analyzing trends, predicting their consequences, counseling organizational leaders and implementing planned programs of action, which will serve both the organization and the public interest."
Public Relations Society of America, a professional trade association, defined public relations in 1982 as: "Public relations helps an organization and its publics adapt mutually to each other."
In 2011 and 2012, the PRSA solicited crowd-supplied definitions for the term and allowed the public to vote on one of three finalists. The winning definition stated that:
"Public relations is a strategic communication process that builds mutually beneficial relationships between organizations and their publics."
Public relations can also be defined as the practice of managing communication between an organization and its publics.
Public relations is to speak out its advocacy in public, and it builds up a talking platform to achieve its goals and protect the interests of people.
Public relations is not a phenomenon of the 20th century, but rather has historical roots. Most textbooks consider the establishment of the Publicity Bureau in 1900 to be the founding of the public relations profession. However, academics have found early forms of public influence and communications management in ancient civilizations, during the settling of the New World and during the movement to abolish slavery in England. Basil Clark is considered the founder of public relations in the United Kingdom for his establishment of Editorial Services in 1924.
Propaganda was used by the United States, the United Kingdom, Germany, and others to rally for domestic support and demonize enemies during the World Wars, which led to more sophisticated commercial publicity efforts as public relations talent entered the private sector. Most historians believe public relations became established first in the US by Ivy Lee or Edward Bernays, then spread internationally. Many American companies with PR departments spread the practice to Europe when they created European subsidiaries as a result of the Marshall plan.
The second half of the 1900s is considered the professional development building era of public relations. Trade associations, PR news magazines, international PR agencies, and academic principles for the profession were established. In the early 2000s, press release services began offering social media press releases. The Cluetrain Manifesto, which predicted the effect of social media in 1999, was controversial in its time, but by 2006, the effect of social media and new internet technologies became broadly accepted.
"Cosmopolitan" reported that the average annual salary for a "public relations director" was £77,619 in 2017. One notable former PR practitioner was former Prime Minister David Cameron.
Public relations practitioners typically have a bachelor's degree in journalism, communications, public relations, marketing, or English. Many senior practitioners have advanced degrees; a 2015 survey found that forty-percent of chief communications officers at Fortune 500 companies had master's degrees.
In 2013, a survey of the 21,000 members of the Public Relations Society of America found that 18-percent held the Accreditation in Public Relations.
In 2019, a "PR Week" survey found a median annual compensation of $95,000 for public relations practitioners, with sector medians ranging from $85,000 in the non-profit sector, $96,000 in a private agency setting, and $126,000 in a for-profit corporation. The Bureau of Labor Statistics, meanwhile, reports the median annual for "public relations specialists" at $68,000 in 2017 and $114,000 for "public relations managers".
According to a 2017 survey by Spring Associates, public relations practitioners in the United States private sector – working at PR agencies - earn salaries which range from $54,900 for an early career position as an account executive, to $118,400 for a mid-career position as an account director, to $174,200 for a senior position as an executive vice-president. Those working in the private sector within a company or organization's PR department earn salaries ranging from $77,600 for an early-career position as a PR specialist, to $149,300 in a mid-career position as a PR director, to $185,000 for a senior position as a vice-president of public relations. Salaries tended to be higher for persons employed in major media markets such as New York and Los Angeles, and lower for those employed in tertiary markets.
The c-level position of chief communications officer (CCO), used in some private companies, usually earned more than $220,000 annually as of 2013. CCOs at Fortune 200 companies, meanwhile, had an average compensation package of just over $1 million annually, according to a 2009 survey by "Fortune"; this amount included base salary, bonus, and stock options.
Within the U.S. federal government, public affairs workers had an average 2016 salary of $101,922, with the U.S. Forest Service employing the most such professionals. Of federal government agencies employing more than one public affairs worker, those at the Federal Aviation Administration earned the most, at an average of $150,130. The highest-earning public affairs worker within the U.S. government, meanwhile, earned $229,333.
Salaries of public relations specialists in local government vary widely. The chief communications officer of the Utah Transit Authority earned $258,165 in total compensation in 2014 while an early-career public information officer for the city of Conway, South Carolina had a pay range beginning at approximately $59,000 per year in 2017.
"Indeed" reported that the average annual salary for a "public relations manager" was $59,326 in June 2019. According to Statistics Canada, there has been no growth in the demand for journalists in Canada, but the demand for PR practitioners continues to grow. Most journalists transition into public relations smoothly and bring a much-needed skill set to the profession.
Public relations practitioners typically have a bachelor's degree in communications, public relations, journalism, or English. Some senior practitioners have advanced degrees. The industry has seen an influx of journalists because newsrooms are in decline and the salaries tend to be higher.
Public relations professionals present the face of an organization or individual, usually to articulate its objectives and official views on issues of relevance, primarily to the media. Public relations contributes to the way an organization is perceived by influencing the media and maintaining relationships with stakeholders. According to Dr. Jacquie L’Etang from Queen Margaret University, public relations professionals can be viewed as "discourse workers specializing in communication and the presentation of argument and employing rhetorical strategies to achieve managerial aims."
Specific public relations disciplines include:
Building and managing relationships with those who influence an organization or individual's audiences has a central role in doing public relations. After a public relations practitioner has been working in the field, they accumulate a list of relationships that become an asset, especially for those in media relations.
Within each discipline, typical activities include publicity events, speaking opportunities, press releases, newsletters, blogs, social media, press kits, and outbound communication to members of the press. Video and audio news releases (VNRs and ANRs) are often produced and distributed to TV outlets in hopes they will be used as regular program content.
A fundamental technique used in public relations is to identify the target audience and to tailor messages to be relevant to each audience. Sometimes the interests of differing audiences and stakeholders common to a public relations effort necessitate the creation of several distinct but complementary messages. These messages, however, should be relevant to each other, creating consistency in the overall message and theme. Audience targeting tactics are important for public relations practitioners because they face all kinds of problems: low visibility, lack of public understanding, opposition from critics, and insufficient support from funding sources.
On the other hand, stakeholder theory identifies people who have a stake in a given institution or issue. All audiences are stakeholders (or presumptive stakeholders), but not all stakeholders are audiences. For example, if a charity commissions a public relations agency to create an advertising campaign to raise money to find a cure for a disease, the charity and the people with the disease are stakeholders, but the audience is anyone who is likely to donate money. Public relations experts possess deep skills in media relations, market positioning, and branding. They are powerful agents that help clients deliver clear, unambiguous information to a target audience that matters to them.
The public is any group whose members have a common interest or common values in a particular subject, such as a political party. Those members would then be considered stakeholders: people who have a stake or interest in an organization or issue that potentially involves the organization or group they are interested in. The publics in public relations are:
Early literature authored by James Grunig (1978) suggested that publics develop in stages determined by their levels of problem recognition, constraint recognition and involvement in addressing the issue. The theory posited that publics develop in the following stages:
Messaging is the process of creating a consistent story around a product, person, company, or service. Messaging aims to avoid having readers receive contradictory or confusing information that will instill doubt in their purchasing choices, or other decisions that affect the company. Brands aim to have the same problem statement, industry viewpoint, or brand perception shared across sources and media.
Digital marketing is the use of Internet tools and technologies such as search engines, Web 2.0 social bookmarking, new media relations, blogging, and social media marketing. Interactive PR allows companies and organizations to disseminate information without relying solely on mainstream publications and communicate directly with the public, customers and prospects.
PR practitioners have always relied on media such as TV, radio, and magazines to promote their ideas and messages tailored specifically to a target audience. Social media marketing is not only a new way to achieve that goal, it is also a continuation of a strategy that has existed for decades. Lister et al. said that "Digital media can be seen as a continuation and extension of a principle or technique that was already in place".
Social media platforms enable users to connect with audiences to build brands, increase sales, and drive website traffic. This involves publishing content on social media profiles, engaging with followers, analyzing results, and running social media advertisements. The goal is to produce content that users will share with their social network to help a company increase brand exposure and broaden customer reach. Some of the major social media platforms are currently Facebook, Instagram, Twitter, LinkedIn, Pinterest, YouTube, and Snapchat.
As digital technology has evolved, the methods to measure effective online public relations effectiveness have improved. The Public Relations Society of America, which has been developing PR strategies since 1947, identified 5 steps to measure online public relations effectiveness.
Publicists can work in a host of different business verticals, such as entertainment, technology, music, travel, television, food, consumer electronics and more. Many publicists build their career in a specific business space to leverage relationships and contacts. There are different kinds of press strategies, such as B2B (business to business) and B2C (business to consumer). Business-to-business publicity highlights service providers who supply products and services to other businesses. Business-to-consumer publicity promotes products and services for ordinary consumers, such as toys, travel, food, entertainment, personal electronics and music.
Litigation public relations is the management of the communication process during the course of any legal dispute or adjudicatory processing so as to affect the outcome or its effect on the client's overall reputation (Haggerty, 2003).
Public relations professionals serve both the public's interest and the private interests of businesses, associations, non-profit organizations, and governments. This dual obligation has given rise to heated debates among scholars of the discipline and practitioners over its fundamental values. The conflict represents the main ethical predicament of public relations. In 2000, the Public Relations Society of America (PRSA) responded to the controversy by acknowledging in its new code of ethics "advocacy" – for the first time – as a core value of the discipline.
The field of public relations is largely unregulated, but many professionals voluntarily adhere to the code of conduct of one or more professional bodies to avoid exposure for ethical violations. The Chartered Institute of Public Relations, the Public Relations Society of America, and The Institute of Public Relations are a few organizations that publish an ethical code. Still, Edelman's 2003 semi-annual trust survey found that only 20 percent of survey respondents from the public believed paid communicators within a company were credible. Public relations people are growing increasingly concerned with their company's marketing practices, questioning whether they agree with the company's social responsibility. They seek more influence over marketing and more of a counseling and policy-making role. Marketing people, on the other hand, are increasingly interested in incorporating publicity as a tool within the realm of marketing.
According to Scott Cutlip, the social justification for public relations is the right for an organization to have a fair hearing of their point of view in the public forum, but to obtain such a hearing for their ideas requires a skilled advocate.
The Public Relations Student Society of America has established a set of fundamental guidelines that people within the public relations profession should practice and use in their business atmosphere. These values are:
Spin has been interpreted historically to mean overt deceit meant to manipulate the public, but since the 1990s it has shifted to describing a "polishing of the truth." Today, spin refers to providing a certain interpretation of information meant to sway public opinion. Companies may use spin to create the appearance that the company or other events are going in a slightly different direction than they actually are. Within the field of public relations, spin is seen as a derogatory term, interpreted by professionals as meaning blatant deceit and manipulation. Skilled practitioners of spin are sometimes called "spin doctors."
In Stuart Ewen's "PR! A Social History of Spin", he argues that public relations can be a real menace to democracy as it renders the public discourse powerless. Corporations are able to hire public relations professionals and transmit their messages through the media channels and exercise a huge amount of influence upon the individual who is defenseless against such a powerful force. He claims that public relations is a weapon for capitalist deception and the best way to resist is to become media literate and use critical thinking when interpreting the various mediated messages.
According to Jim Hoggan, "public relations is not by definition 'spin'. Public relations is the art of building good relationships. You do that most effectively by earning trust and goodwill among those who are important to you and your business... Spin is to public relations what manipulation is to interpersonal communications. It's a diversion whose primary effect is ultimately to undermine the central goal of building trust and nurturing a good relationship."
The techniques of spin include selectively presenting facts and quotes that support ideal positions (cherry picking), the so-called "non-denial denial," phrasing in a way that presumes unproven truths, euphemisms that draw attention away from items considered distasteful, and ambiguity in public statements. Another spin technique involves careful timing in the release of certain news so it can take advantage of prominent events in the news.
Negative public relations, also called dark public relations (DPR) and in some earlier writing "Black PR", is a process of destroying the target's reputation and/or corporate identity. The objective in DPR is to discredit someone else who may pose a threat to the client's business or be a political rival. DPR may rely on IT security, industrial espionage, social engineering and competitive intelligence. Common techniques include exploiting dirty secrets from the target and producing misleading facts to fool a competitor. In politics, a decision to use negative PR is also known as negative campaigning.
In "Propaganda" (1928), Bernays argued that the manipulation of public opinion was a necessary part of democracy. In public relations, lobby groups are created to influence government policy, corporate policy or public opinion, typically in a way that benefits the sponsoring organization.
Bernays stresses that we are in fact dominated, in almost every aspect of our lives, by a relatively small number of persons who have mastered the ‘mental processes and social patterns of the masses’ – shaping our behavior, the political and economic spheres, and our morals. In theory, each individual chooses his own opinion on behavior and public issues. In practice, however, it is impossible for one to study all the variables and approaches to a particular question and come to a conclusion without any external influence. This is why society has agreed upon an ‘invisible government’ to interpret information on our behalf and narrow the field of choice to a more practical scale.
When a lobby group hides its true purpose and support base, it is known as a front group. Front groups are a form of astroturfing, because they intend to sway the public or the government without disclosing their financial connection to corporate or political interests. They create a fake grass-roots movement by giving the appearance of a trusted organization that serves the public, when they actually serve their sponsors.
Politicians also employ public relations professionals to help project their views, policies and even personalities to their best advantages. | https://en.wikipedia.org/wiki?curid=24389 |
Paradox
A paradox, also known as an antinomy, is a logically self-contradictory statement or a statement that runs contrary to one's expectation. It is a statement that, despite apparently valid reasoning from true premises, leads to a seemingly self-contradictory or a logically unacceptable conclusion. A paradox usually involves contradictory yet interrelated elements that exist simultaneously and persist over time.
In logic, many paradoxes exist which are known to be invalid arguments, but which are nevertheless valuable in promoting critical thinking, while other paradoxes have revealed errors in definitions which were assumed to be rigorous, and have caused axioms of mathematics and logic to be re-examined. One example is Russell's paradox, which questions whether a "list of all lists that do not contain themselves" would include itself, and showed that attempts to found set theory on the identification of sets with properties or predicates were flawed. Others, such as Curry's paradox, cannot be easily resolved by making foundational changes in a logical system.
Examples outside logic include the ship of Theseus from philosophy, a paradox which questions whether a ship repaired over time by replacing each and all of its wooden parts, one at a time, would remain the same ship. Paradoxes can also take the form of images or other media. For example, M.C. Escher featured perspective-based paradoxes in many of his drawings, with walls that are regarded as floors from other points of view, and staircases that appear to climb endlessly.
In common usage, the word "paradox" often refers to statements that are ironic or unexpected, such as "the paradox that standing is more tiring than walking".
Common themes in paradoxes include self-reference, infinite regress, circular definitions, and confusion or equivocation between different levels of abstraction.
Patrick Hughes outlines three laws of the paradox:
Other paradoxes involve false statements ("'impossible' is not a word in my vocabulary", a simple paradox) or half-truths and the resulting biased assumptions. This form is particularly common in howlers.
As an example, consider a situation in which a father and his son are driving down the road. The car crashes into a tree and the father is killed. The boy is rushed to the nearest hospital where he is prepared for emergency surgery. Upon entering the surgery suite, the surgeon says, "I can't operate on this boy. He's my son."
The apparent paradox is caused by a hasty generalization, for if the surgeon is the boy's father, the statement cannot be true. The paradox is resolved if it is revealed that the surgeon is a woman (the boy's mother) or that the parents were a gay couple.
Paradoxes which are not based on a hidden error generally occur at the fringes of context or language, and require extending the context or language in order to lose their paradoxical quality. Paradoxes that arise from apparently intelligible uses of language are often of interest to logicians and philosophers. "This sentence is false" is an example of the well-known liar paradox: it is a sentence which cannot be consistently interpreted as either true or false, because if it is known to be false, then it can be inferred that it must be true, and if it is known to be true, then it can be inferred that it must be false. Russell's paradox, which shows that the notion of "the set of all those sets that do not contain themselves" leads to a contradiction, was instrumental in the development of modern logic and set theory.
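The contradiction in Russell's paradox can be stated compactly. Defining the collection of all sets that are not members of themselves and then asking whether it contains itself yields:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```

Instantiating the defining condition with $x = R$ produces a biconditional that no assignment of truth values can satisfy; this is why naive set comprehension had to be restricted, for example by Zermelo's axiom of separation.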
Thought-experiments can also yield interesting paradoxes. The grandfather paradox, for example, would arise if a time-traveler were to kill his own grandfather before his mother or father had been conceived, thereby preventing his own birth. This is a specific example of the more general observation of the butterfly effect, or that a time-traveller's interaction with the past—however slight—would entail making changes that would, in turn, change the future in which the time-travel was yet to occur, and would thus change the circumstances of the time-travel itself.
Often a seemingly paradoxical conclusion arises from an inconsistent or inherently contradictory definition of the initial premise. In the case of that apparent paradox of a time-traveler killing his own grandfather, it is the inconsistency of defining the past to which he returns as being somehow different from the one which leads up to the future from which he begins his trip, but also insisting that he must have come to that past from the same future as the one that it leads up to.
W. V. Quine (1962) distinguished between three classes of paradoxes:
A fourth kind, which may be alternatively interpreted as a special case of the third kind, has sometimes been described since Quine's work:
A taste for paradox is central to the philosophies of Laozi, Zeno of Elea, Zhuangzi, Heraclitus, Bhartrhari, Meister Eckhart, Hegel, Kierkegaard, Nietzsche, and G.K. Chesterton, among many others. Søren Kierkegaard, for example, writes in the "Philosophical Fragments" that:
But one must not think ill of the paradox, for the paradox is the passion of thought, and the thinker without the paradox is like the lover without passion: a mediocre fellow. But the ultimate potentiation of every passion is always to will its own downfall, and so it is also the ultimate passion of the understanding to will the collision, although in one way or another the collision must become its downfall. This, then, is the ultimate paradox of thought: to want to discover something that thought itself cannot think.
A paradoxical reaction to a drug is the opposite of what one would expect, such as becoming agitated by a sedative or sedated by a stimulant. Some are common and are used regularly in medicine, such as the use of stimulants such as Adderall and Ritalin in the treatment of attention deficit hyperactivity disorder (also known as ADHD), while others are rare and can be dangerous as they are not expected, such as severe agitation from a benzodiazepine. | https://en.wikipedia.org/wiki?curid=24390 |
Paul J. McAuley
Paul J. McAuley (born 23 April 1955) is a British botanist and science fiction author.
A biologist by training, McAuley writes mostly hard science fiction. His novels deal with themes such as biotechnology, alternative history/alternative reality, and space travel.
McAuley began with far-future space opera "Four Hundred Billion Stars", its sequel "Eternal Light", and the planetary-colony adventure "Of the Fall". "Red Dust", set on a far-future Mars colonized by the Chinese, is a planetary romance featuring many emerging technologies and SF motifs: nanotechnology, biotechnology, artificial intelligence, personality downloads, virtual reality. The Confluence series, set in an even more distant future (about ten million years from now), is one of a number of works to use Frank J. Tipler's Omega Point Theory (that the universe seems to be evolving toward a maximum degree of complexity and consciousness) as one of its themes.
About the same time, he published "Pasquale's Angel", set in an alternative Italian Renaissance and featuring Niccolò Machiavegli (Machiavelli) and Leonardo da Vinci as major characters.
McAuley has also used biotechnology and nanotechnology themes in near-future settings: "Fairyland" describes a dystopian, war-torn Europe where genetically engineered "dolls" are used as disposable slaves. Since 2001 he has produced several SF-based techno-thrillers such as "The Secret of Life", "Whole Wide World", and "White Devils".
"Four Hundred Billion Stars", his first novel, won the Philip K. Dick Award in 1988. "Fairyland" won the 1996 Arthur C. Clarke Award and the 1997 John W. Campbell Memorial Award for Best SF Novel. "The Temptation of Dr. Stein" won the British Fantasy Award. "Pasquale's Angel" won the Sidewise Award for Alternate History (Long Form). | https://en.wikipedia.org/wiki?curid=24397
PDP-11
The PDP-11 is a series of 16-bit minicomputers sold by Digital Equipment Corporation (DEC) from 1970 into the 1990s, one of a set of products in the Programmed Data Processor (PDP) series. In total, around 600,000 PDP-11s of all models were sold, making it one of DEC's most successful product lines. The PDP-11 is considered by some experts to be the most popular minicomputer ever.
The PDP-11 included a number of innovative features in its instruction set and additional general-purpose registers that made it much easier to program than earlier models in the PDP series. Additionally, the innovative Unibus system allowed external devices to be easily interfaced to the system using direct memory access, opening the system to a wide variety of peripherals. The PDP-11 replaced the PDP-8 in many real-time applications, although both product lines lived in parallel for more than 10 years. The ease of programming of the PDP-11 made it very popular for general-purpose computing uses as well.
The design of the PDP-11 inspired the design of late-1970s microprocessors including the Intel x86 and the Motorola 68000. Design features of PDP-11 operating systems, as well as other operating systems from Digital Equipment, influenced the design of other operating systems such as CP/M and hence also MS-DOS. The first officially named version of Unix ran on the PDP-11/20 in 1970. It is commonly stated that the C programming language took advantage of several low-level PDP-11–dependent programming features, albeit not originally by design.
An effort to expand the PDP-11 from 16 to 32-bit addressing led to the VAX-11 design, which took part of its name from the PDP-11.
In 1963, DEC introduced what is considered to be the first commercial minicomputer in the form of the PDP-5. This was a 12-bit design adapted from the 1962 LINC machine that was intended to be used in a lab setting. DEC slightly simplified the LINC system and instruction set, aiming the PDP-5 at smaller settings that did not need the power of their larger 18-bit PDP-4. The PDP-5 was a success, ultimately selling about 50,000 examples.
During this period, the computer market was moving from computer word lengths based on units of 6 bits to units of 8 bits, following the introduction of the 7-bit ASCII standard. In 1967–1968, DEC engineers designed a 16-bit machine, the PDP-X, but management ultimately canceled the project as it did not appear to offer a significant advantage over their existing 12- and 18-bit platforms.
Several of the engineers from the PDP-X left DEC and formed Data General. The next year they introduced the 16-bit Data General Nova. The Nova was a major success, selling tens of thousands of units and launching what would become one of DEC's major competitors through the 1970s and 1980s.
A subsequent effort, code-named "Desk Calculator", looked at a variety of options before choosing what became the 16-bit PDP-11. The PDP-11 family was announced in January 1970 and shipments began early that year. DEC sold over 170,000 PDP-11s in the 1970s.
Initially manufactured from small-scale transistor–transistor logic, a single-board large-scale integration version of the processor was developed in 1975. A two-or-three-chip processor, the J-11, was developed in 1979. The last models of the PDP-11 line were the PDP-11/94 and PDP-11/93, introduced in 1990.
The PDP-11 processor architecture has a mostly orthogonal instruction set. For example, instead of instructions such as "load" and "store", the PDP-11 has a "move" instruction for which either operand (source and destination) can be memory or register. There are no specific "input" or "output" instructions; the PDP-11 uses memory-mapped I/O and so the same "move" instruction is used; orthogonality even enables moving data directly from an input device to an output device. More complex instructions such as "add" likewise can have memory, register, input, or output as source or destination.
Most operands can apply any of eight addressing modes to eight registers. The addressing modes provide register, immediate, absolute, relative, deferred (indirect), and indexed addressing, and can specify autoincrementation and autodecrementation of a register by one (byte instructions) or two (word instructions). Use of relative addressing lets a machine-language program be position-independent.
Early models of the PDP-11 had no dedicated bus for input/output, but only a system bus called the Unibus, as input and output devices were mapped to memory addresses.
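In C terms, this memory-mapped scheme means device I/O is just an assignment through a pointer. The sketch below is hypothetical in the sense that a hosted program cannot actually dereference these addresses; the octal register addresses and the READY bit position follow the conventional DL11 console-transmitter interface, cited here as an assumption for illustration.

```c
#include <stdint.h>

/* Memory-mapped I/O sketch: the PDP-11 has no IN/OUT instructions, so
 * device registers occupy ordinary addresses at the top of the address
 * space, and an ordinary MOV (in C, an assignment through a pointer)
 * performs the I/O.  Addresses and bit layout below are the
 * conventional DL11 console-transmitter registers, for illustration. */
#define XCSR ((volatile uint16_t *)(uintptr_t)0177564) /* transmit status */
#define XBUF ((volatile uint16_t *)(uintptr_t)0177566) /* transmit buffer */

void console_putc(char c) {
    while ((*XCSR & 0200) == 0)   /* spin until the READY bit (bit 7) sets */
        ;
    *XBUF = (uint16_t)c;          /* writing the register transmits the byte */
}
```

The `volatile` qualifier matters here: it tells the compiler each read of the status register and each write of the buffer must actually happen, which is exactly the behavior a device register requires.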
An input/output device determined the memory addresses to which it would respond, and specified its own interrupt vector and interrupt priority. This flexible framework provided by the processor architecture made it unusually easy to invent new bus devices, including devices to control hardware that had not been contemplated when the processor was originally designed. DEC openly published the basic Unibus specifications, even offering prototyping bus interface circuit boards, and encouraging customers to develop their own Unibus-compatible hardware.
The Unibus made the PDP-11 suitable for custom peripherals. One of the predecessors of Alcatel-Lucent, the Bell Telephone Manufacturing Company, developed the BTMC DPS-1500 packet-switching (X.25) network and used PDP-11s in the regional and national network management system, with the Unibus directly connected to the DPS-1500 hardware.
Higher-performance members of the PDP-11 family, starting with the PDP-11/45 Unibus and 11/83 Q-bus systems, departed from the single-bus approach. Instead, memory was interfaced by dedicated circuitry and space in the CPU cabinet, while the Unibus continued to be used for I/O only. In the PDP-11/70, this was taken a step further, with the addition of a dedicated interface between disks and tapes and memory, via the Massbus. Although input/output devices continued to be mapped into memory addresses, some additional programming was necessary to set up the added bus interfaces.
The PDP-11 supports hardware interrupts at four priority levels. Interrupts are serviced by software service routines, which could specify whether they themselves could be interrupted (achieving interrupt nesting). The event that causes the interrupt is indicated by the device itself, as it informs the processor of the address of its own interrupt vector.
Interrupt vectors are blocks of two 16-bit words in low kernel address space (which normally corresponded to low physical memory) between 0 and 776. The first word of the interrupt vector contains the address of the interrupt service routine and the second word the value to be loaded into the PSW (priority level) on entry to the service routine.
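The two-word vector layout just described can be sketched as a C struct; the type and field names below are invented for illustration and are not DEC's.

```c
#include <stdint.h>

/* Each PDP-11 interrupt vector is two 16-bit words in low kernel
 * address space: the service-routine address, then the processor
 * status word (including the new priority level) loaded on entry. */
struct pdp11_vector {
    uint16_t isr_address;  /* word 0: entry point of the service routine */
    uint16_t new_psw;      /* word 1: PSW loaded on entry to the routine */
};

/* Vectors occupy octal addresses 0 through 776; at four bytes per
 * vector, that is room for 128 of them. */
enum {
    VECTOR_SPACE_BYTES = 0776 + 2,               /* 512 bytes */
    VECTOR_COUNT = VECTOR_SPACE_BYTES / 4        /* 128 vectors */
};
```

On a vectored interrupt, the hardware pushes the old PC and PSW, then loads both words of the selected vector, so dispatch and priority change happen in one step.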
The article on PDP-11 architecture provides more details on interrupts.
The PDP-11 was designed for ease of manufacture by semiskilled labor. The dimensions of its pieces were relatively non-critical. It used a wire-wrapped backplane.
The LSI-11 (PDP-11/03), introduced in February 1975, is the first PDP-11 model produced using large-scale integration; the entire CPU is contained on four LSI chips made by Western Digital (the MCP-1600 chip set; a fifth chip can be added to extend the instruction set). It uses a bus which is a close variant of the Unibus called the LSI Bus or Q-Bus; it differs from the Unibus primarily in that addresses and data are multiplexed onto a shared set of wires rather than having separate sets of wires. It also differs slightly in how it addresses I/O devices, and it eventually allowed a 22-bit physical address (whereas the Unibus only allows an 18-bit physical address) and block-mode operations for significantly improved bandwidth (which the Unibus does not support).
The CPU microcode includes a debugger: firmware with a direct serial interface (RS-232 or current loop) to a terminal. This lets the operator do debugging by typing commands and reading octal numbers, rather than operating switches and reading lights, the typical debugging method at the time. The operator can thus examine and modify the computer's registers, memory, and input/output devices, diagnosing and perhaps correcting failures in software and peripherals (unless a failure disables the microcode itself). The operator can also specify which disk to boot from.
Both innovations increased the reliability and decreased the cost of the LSI-11.
Later Q-Bus based systems such as the LSI-11/23, /73, and /83 are based upon chip sets designed in house by Digital Equipment Corporation. Later PDP-11 Unibus systems were designed to use similar Q-Bus processor cards, using a Unibus adapter to support existing Unibus peripherals, sometimes with a special memory bus for improved speed.
There were other significant innovations in the Q-Bus lineup. For example, a system variant of the PDP-11/03 introduced full system power-on self-test (POST).
The basic design of the PDP-11 was flexible, and was continually updated to use newer technologies. However, the limited throughput of the Unibus and Q-bus started to become a system-performance bottleneck, and the 16-bit logical address limitation hampered the development of larger software applications. The article on PDP-11 architecture describes the hardware and software techniques used to work around address-space limitations.
DEC's 32-bit successor to the PDP-11, the VAX (for "Virtual Address eXtension"), overcame the 16-bit limitation, but was initially a superminicomputer aimed at the high-end time-sharing market. The early VAX CPUs provided a PDP-11 compatibility mode under which much existing software could be immediately used, in parallel with newer 32-bit software, but this capability was dropped with the first MicroVAX.
For a decade, the PDP-11 was the smallest system that could run Unix, but in the 1980s, the IBM PC and its clones largely took over the small computer market; "BYTE" in 1984 reported that the PC's Intel 8088 microprocessor outperformed the PDP-11/23 when running Unix. Newer microprocessors such as the Motorola 68000 (1979) and Intel 80386 (1985) also included 32-bit logical addressing. The 68000 in particular facilitated the emergence of a market of increasingly powerful scientific and technical workstations that would often run Unix variants. These included the HP 9000 series 200 (starting with the HP 9826A in 1981) and 300/400, with the HP-UX system being ported to the 68000 in 1984; Sun Microsystems workstations running SunOS, starting with the Sun-1 in 1982; Apollo Domain workstations starting with the DN100 in 1981 running Domain/OS, which was proprietary but offered a degree of Unix compatibility; and the Silicon Graphics IRIS range, which developed into Unix-based workstations by 1985 (IRIS 2000). Personal computers based on the 68000 like the Apple Lisa and Macintosh or the Commodore Amiga arguably constituted less of a threat to DEC's business, although technically these systems could also run Unix derivatives. In the early years, in particular, Microsoft's Xenix was ported to systems like the TRS-80 Model 16 (with up to 1 MB of memory) in 1983, and to the Apple Lisa, with up to 2 MB of installed RAM, in 1984. The mass-production of those chips eliminated any cost advantage for the 16-bit PDP-11. A line of personal computers based on the PDP-11, the DEC Professional series, failed commercially, along with other non-PDP-11 PC offerings from DEC.
In 1994 DEC sold the PDP-11 system-software rights to Mentec Inc., an Irish producer of LSI-11 based boards for Q-Bus and ISA architecture personal computers, and in 1997 discontinued PDP-11 production. For several years, Mentec produced new PDP-11 processors. Other companies found a niche market for replacements for legacy PDP-11 processors, disk subsystems, etc.
By the late 1990s, not only DEC but most of the New England computer industry which had been built around minicomputers similar to the PDP-11 collapsed in the face of microcomputer-based workstations and servers.
The PDP-11 processors tend to fall into several natural groups depending on the original design upon which they are based and which I/O bus they use. Within each group, most models were offered in two versions, one intended for OEMs and one intended for end-users. Although all models share the same instruction set, later models added new instructions and interpreted certain instructions slightly differently. As the architecture evolved, there were also variations in handling of some processor status and control registers.
The following models use the Unibus as their principal bus:
The following models use the Q-Bus as their principal bus:
The PDT series were desktop systems marketed as "smart terminals". The /110 and /130 were housed in a VT100 terminal enclosure. The /150 was housed in a table-top unit which included two 8-inch floppy drives, three asynchronous serial ports, one printer port, one modem port and one synchronous serial port and required an external terminal. All three employed the same chipset as used on the LSI-11/03 and LSI-11/2 in four "microm"s. There is an option which combines two of the microms into one dual carrier, freeing one socket for an EIS/FIS chip. The /150 in combination with a VT105 terminal was also sold as MiniMINC, a budget version of the MINC-11.
The DEC Professional series are desktop PCs intended to compete with IBM's 8088- and 80286-based personal computers. The models are equipped with 5¼-inch floppy disk drives and hard disks, except the 325, which has no hard disk. The original operating system was P/OS, essentially RSX-11M+ with a menu system on top. As the design was intended to avoid software exchange with existing PDP-11 models, their ill fate in the market was no surprise for anyone except DEC. The RT-11 operating system was eventually ported to the PRO series. A port of RSTS/E to the PRO series was also done internally at DEC, but it was never released. The PRO-325 and -350 units are based on the DCF-11 ("Fonz") chipset, the same as found in the 11/23, 11/23+ and 11/24. The PRO-380 is based on the DCJ-11 ("Jaws") chipset, the same as found in the 11/53, 11/73, 11/83 and others, though running only at 10 MHz because of limitations in the support chipset.
The PDP-11 was sufficiently popular that many unlicensed PDP-11-compatible minicomputers and microcomputers were produced in Eastern Bloc countries. Some were pin-compatible with the PDP-11 and could use its peripherals and system software. These include:
Several operating systems were available for the PDP-11:
The DECSA communications server was a communications platform developed by DEC based on a PDP-11/24, with the provision for user installable I/O cards including asynchronous and synchronous modules. This product was used as one of the earliest commercial platforms upon which networking products could be built, including X.25 gateways, SNA gateways, routers, and terminal servers.
A wide range of peripherals were available; some of them were also used in other DEC systems like the PDP-8 or PDP-10.
The following are some of the more common PDP-11 peripherals.
The PDP-11 family of computers was used for many purposes. It was used as a standard minicomputer for general-purpose computing, such as timesharing, scientific, educational, medical, or business computing. Another common application was real-time process control and factory automation.
Some OEM models were also frequently used as embedded systems to control complex systems like traffic lights, medical systems, numerically controlled machining, or network management. An example of such use of PDP-11s was the management of the packet-switched network Datanet 1. In the 1980s, the UK's air traffic control radar processing was conducted on a PDP-11/34 system known as PRDS (Processed Radar Display System) at RAF West Drayton. The software for the Therac-25 medical linear particle accelerator also ran on a 32K PDP-11/23.
In 2013, it was reported that PDP-11 programmers would be needed to control nuclear power plants through 2050.
Another use was for storage of test programs for Teradyne ATE equipment, in a system known as the TSD (Test System Director). As such, they were in use until their software was rendered inoperable by the Year 2000 problem. The U.S. Navy used a PDP-11/34 to control its Multi-station Spatial Disorientation Device, a simulator used in pilot training, until 2007, when it was replaced by a PC-based emulator that could run the original PDP-11 software and interface with custom Unibus controller cards.
A PDP-11/45 was used for the experiment that discovered the J/ψ meson at Brookhaven National Laboratory. In 1976, Samuel C. C. Ting shared the Nobel Prize in Physics for this discovery.
Ersatz-11, a product of D Bit, emulates the PDP-11 instruction set running under DOS, OS/2, Windows, Linux or stand-alone (no OS). It can be used to run RSTS or other PDP-11 operating systems.
SimH is an emulator that compiles and runs on a number of platforms (including Linux) and supports hardware emulation for the DEC PDP-1, PDP-8, PDP-10, PDP-11, VAX, AltairZ80, several IBM mainframes, and other minicomputers.
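Such emulators are typically driven by a small startup script. The following is an illustrative sketch of a SimH configuration for an emulated PDP-11; the CPU model, device name, and disk-image filename (rt11.dsk) are placeholders, and the exact commands available depend on the SimH version and build:

```
; illustrative SimH PDP-11 configuration (all names are placeholders)
set cpu 11/73          ; select an emulated CPU model
attach rk0 rt11.dsk    ; attach a disk image to an emulated drive
boot rk0               ; boot from the attached image
```

A script like this, saved as an `.ini` file and passed to the simulator at startup, is how preserved PDP-11 operating systems are commonly run today.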
Pair programming
Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the "driver", writes code while the other, the "observer" or "navigator", reviews each line of code as it is typed in. The two programmers switch roles frequently.
While reviewing, the observer also considers the "strategic" direction of the work, coming up with ideas for improvements and likely future problems to address. This is intended to free the driver to focus all of their attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide.
Pair programming increases the man-hours required to deliver code compared to programmers working individually; experiments yielded diverse results, suggesting increases of between 15% and 100%. However, the resulting code has about 15% fewer defects. Along with code development time, other factors like field support costs and quality assurance also figure into the return on investment. Pair programming might theoretically offset these expenses by reducing defects in the programs.
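This trade-off can be made concrete with a back-of-the-envelope calculation, using the 15% figures above; the baseline hours, defect count, and per-defect fix costs here are invented purely for illustration:

```python
# Back-of-the-envelope comparison of solo vs. pair development cost.
# Pairing is assumed to cost 15% more programmer-hours but produce
# 15% fewer defects; the fix cost per defect is a made-up parameter.
def total_cost(dev_hours: float, defects: float, hours_per_defect_fix: float) -> float:
    """Development hours plus downstream hours spent fixing defects."""
    return dev_hours + defects * hours_per_defect_fix

for fix_cost in (4, 10):
    solo = total_cost(100, 20, fix_cost)
    pair = total_cost(115, 17, fix_cost)  # +15% hours, -15% defects
    print(f"fix cost {fix_cost}h: solo={solo}h, pair={pair}h")
# fix cost 4h:  solo=180.0h, pair=183.0h  (solo cheaper)
# fix cost 10h: solo=300.0h, pair=285.0h  (pair cheaper)
```

The crossover shows why no single answer holds: whether pairing pays off depends on how expensive a defect is to fix downstream.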
In addition to preventing mistakes as they are made, other intangible benefits may exist. For example, pairs tend to reject phone calls and other distractions while working together, take fewer breaks at agreed-upon intervals, or share breaks to return phone calls (returning to work quickly since someone is waiting). One member of the team might have more focus and help drive or awaken the other if they lose focus, and that role might periodically change. One member might have knowledge of a topic or technique which the other does not, which might eliminate delays in finding or testing a solution, or allow for a better solution, effectively expanding the skill set, knowledge, and experience of a programmer compared to working alone. Each of these intangible benefits, and many more, may be challenging to measure accurately, but can contribute to more efficient working hours.
A system with two programmers possesses greater potential for the generation of more diverse solutions to problems for three reasons:
In an attempt to share goals and plans, the programmers must overtly negotiate a shared course of action when a conflict arises between them. In doing so, they consider a larger number of ways of solving the problem than a single programmer might. This significantly improves the design quality of the program, as it reduces the chances of selecting a poor method.
In an online survey of pair programmers from 2000, 96% of them stated that they enjoyed their work more than when they programmed alone and 95% said that they were more confident in their solutions when they pair programmed.
The first finding, of 96%, is not scientific, however: since the sample only contains those who work in pairs, it is biased towards those who enjoy working in a pair.
Knowledge is constantly shared between pair programmers, whether in industry or in a classroom. Many sources suggest that students show higher confidence when programming in pairs, and many learn from it, whether from tips on programming-language rules or from overall design skill. In "promiscuous pairing", each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which spreads knowledge of the system throughout the whole team. Pair programming allows programmers to examine their partner's code and provide feedback, which helps them develop mechanisms for monitoring their own learning activities.
Pair programming allows team members to share problems and solutions quickly, making them less likely to have hidden agendas from each other. This helps pair programmers learn to communicate more easily. "This raises the communication bandwidth and frequency within the project, increasing overall information flow within the team."
There are both empirical studies and meta-analyses of pair programming. The empirical studies tend to examine the level of productivity and the quality of the code, while meta-analyses may focus on biases introduced by the process of testing and publishing.
A meta-analysis found pairs typically consider more design alternatives than programmers working alone, arrive at simpler, more maintainable designs, and catch design defects earlier. However, it raised concerns that its findings may have been influenced by "signs of publication bias among published studies on pair programming". It concluded that "pair programming is not uniformly beneficial or effective".
Although pair programmers may complete a task faster than a solo programmer, the total number of man-hours increases. A manager would have to balance faster completion of the work and reduced testing and debugging time against the higher cost of coding. The relative weight of these factors can vary by project and task.
The benefit of pairing is greatest on tasks that the programmers do not fully understand before they begin: that is, challenging tasks that call for creativity and sophistication, and for novices as compared to experts. Pair programming could be helpful for attaining high quality and correctness on complex programming tasks, but it would also increase the development effort (cost) significantly.
On simple tasks, which the pair already fully understands, pairing results in a net drop in productivity. It may reduce the code development time but also risks reducing the quality of the program. Productivity can also drop when novice–novice pairing is used without sufficient availability of a mentor to coach them.
There are indicators that a pair is not performing well:
Remote pair programming, also known as virtual pair programming or distributed pair programming, is pair programming in which the two programmers are in different locations, working via a collaborative real-time editor, shared desktop, or a remote pair programming IDE plugin. Remote pairing introduces difficulties not present in face-to-face pairing, such as extra delays for coordination, depending more on "heavyweight" task-tracking tools instead of "lightweight" ones like index cards, and loss of verbal communication resulting in confusion and conflicts over such things as who "has the keyboard".
Tool support could be provided by:
Psychology of torture
The psychology of torture refers to the psychological processes underlying all aspects of torture including the relationship between the perpetrator and the victim, the immediate and long-term effects, and the political and social institutions that influence its use. Torture itself is the use of physical or psychological pain to control the victim or fulfill some needs of the perpetrator.
Research during the past 60 years, starting with the Milgram experiment, suggests that under the right circumstances, and with the appropriate encouragement and setting, most people can be encouraged to actively torture others.
John Conroy: "When torture takes place, people believe they are on the high moral ground, that the nation is under threat and they are the front line protecting the nation, and people will be grateful for what they are doing."
Stages of the perpetrator's torture mentality include:
Example:
One of the apparent ringleaders of the Abu Ghraib prison torture, Charles Graner Jr., exemplified the stages of dehumanization and disinhibition when he was reported to have said, "The Christian in me says it's wrong, but the corrections officer in me says, 'I love to make a grown man piss himself.'"
As P. Saliya Sumanatilake concludes:
"Whether it be for securing a justifiable or reprehensible end, torture cannot be effectuated without invoking and focusing one's diffused innate cruelty. Accordingly, it is the prevalence of this congenital trait of heinousness that renders every human being a potential torturer: hence, the existence of torture! Moreover, it is the natural occurrence of such nascent evil within each successive generation of human beings that serves to propagate torture!"
The effects of torture on the victim and the perpetrator are likely to be influenced by many factors, so diagnostic categories and descriptions of symptoms or behavior developed in Western societies may not apply to people from developing countries with very different personal, political, or religious beliefs and perspectives. One of the most marked cultural differences may occur between individualist societies, where realization of personal goals often takes priority over the needs of kin and societal expectations, and collectivist societies, in which the needs of family and prescribed roles take precedence over personal preferences. Another evident difference is the belief in a subsequent life in which suffering in this life is rewarded, which has emerged in some studies of torture survivors in South East Asia.
Torture has profound and long-lasting physical and psychological effects. Torture is a form of collective suffering that is not limited to the victim. The victims' family members and friends are often also affected due to adjustment problems such as outbreaks of anger and violence directed towards family members. According to research, psychological and physical torture have similar mental effects. Often torture victims suffer from elevated rates of the following:
No diagnostic terminology encapsulates the deep distrust of others which many torture survivors have developed, nor the destruction of all that gave their lives meaning. Guilt and shame about humiliation during torture, and about the survivor's inability to withstand it, as well as guilt at surviving, are common problems which discourage disclosure. Additional stress may be added due to uncertainty about the future, any possibility of being sent back to the country in which the survivor was tortured, and the potential lack of close confidants or social support systems. In addition, the presence of social isolation, poverty, unemployment, institutional accommodation, and pain can all predict higher levels of emotional distress in victims who survive torture.
The development of the diagnosis of post-traumatic stress disorder (PTSD) for American veterans of the Vietnam War can be understood as a political act which labeled the collective distress of a defeated USA as individual psychopathology. Proponents of this view point to the de-politicization of the distress of torture survivors by describing their distress, disturbance, and profound sense of injustice in psychiatric terms. These are not only conceptual issues, because they may influence treatment outcomes. Recovery is associated with reconstruction of social and cultural networks, economic supports, and respect for human rights.
The rich research on treatment of PTSD in veterans has substantially informed treatment offered to torture survivors. It is more appropriate than extrapolation from work with civilian survivors of single events, whether as individuals (assault, accidents) or as communities or groups (natural or man-made disasters). Some literature distinguishes between single-event trauma (type 1) and prolonged and repeated trauma such as torture (type 2). There is no doubt that (disregarding concerns about the diagnosis) rates of PTSD are much higher in refugees than among people of a similar age in the countries where the refugees settle, and that, among refugees, rates of PTSD are even higher among those seeking asylum.
The argument that torture causes unique problems waxes and wanes, and is often associated with claims to particular expertise in treatment, and therefore claims on funding. Gurr et al. describe how torture targets the person as a whole (physically, emotionally, and socially) so that PTSD is an inadequate description of the magnitude and complexity of the effects of torture. When the diagnosis of PTSD is applied, some survivors of torture who have very severe trauma-related symptoms may still not meet the criteria for diagnosis. Categories such as "complex trauma" have been proposed, and the next iterations of the diagnostic compendia may modify the criteria.
Many people who engage in torture have various psychological deviations and often derive sadistic satisfaction from it. Torture may fulfill the emotional needs of perpetrators when they willingly engage in these activities. They lack empathy, and their victims' agonized reactions, screaming, and pleading give them a sense of authority and feelings of superiority.
Torture can harm not only the victim but the perpetrators as well. After the fact, perpetrators often experience failing mental health, PTSD, suicidal tendencies, substance dependency, and a myriad of other problems associated with inflicting physical or mental trauma on their victims.
The perpetrators may experience flashbacks of torture, intense rage, suicidal and homicidal ideas, alienation, impulse dysregulation, alterations in attention and consciousness, alterations in self-perception, alterations in relationships with others, and an inability to trust or to maintain long-term relationships, or even mere intimacy.
For physicians, it is useful to recognize that symptoms of post-traumatic stress can complicate presentation and treatment. Pain predicts greater severity of both PTSD symptoms and major depression, and intrusive memories and flashbacks can exacerbate existing pain. While under-recognition and under-treatment of torture victims is common, there are useful guidelines for evidence-based medical practice, although not specifically concerned with pain, and for evidence-based psychological practice.
Some people die during torture; many survivors are too disabled and destitute to find their way to safety. A large element of chance, and, to a lesser extent, resources and resilience, enable a minority to arrive in developed countries. Nevertheless, they often present multiple and complex problems, which the clinician can find overwhelming. For all these reasons, an interdisciplinary approach to assessment and treatment is therefore recommended, guarding against either disregarding significant psychological distress as inevitable in torture survivors or discounting physical symptoms by attributing them to psychological origin.
Rehabilitation and reparation are part of the rights of the torture survivor under the United Nations Convention, yet far less attention is paid to health needs on a national or international basis than to legal and civil claims. Collaborative efforts involving survivors themselves are needed to better understand the usefulness and limitations of existing assessment instruments and treatment methods. Some studies exist, such as that by Elsass et al. who interviewed Tibetan Lamas on the quantification of suffering in scales used to evaluate intervention with Tibetan torture survivors.
Education of medical and other healthcare personnel needs to address issues concerning treatment of torture survivors, who will be seen in all possible settings but not necessarily recognized or treated adequately. Teaching on ethics is also important, since medical students can have tolerant views of torture, and the complicity of medical and healthcare staff in torture continues in many countries. Medical staff are often in a key position to try to prevent torture and to help those who have survived.
In addition to providing treatment for victims of torture, psychologists have the skills and knowledge to conduct research regarding interrogation methods and determine when the methods used become torture. The standards, policies, and procedures of each country's professional psychological association may influence the participation of psychologists in administering torture, researching torture methods, and evaluating the effectiveness of the results. Kenneth Pope (2011) used direct quotes to indicate the American Psychological Association believes psychologists have a key role in eliciting information from people since interrogations require an understanding of psychological processes. Each professional association sets the standards for ethics and expected professional behavior which may influence psychologist researchers who investigate interrogation or torture and clinical psychologists' participation in interrogations that use methods deemed to be consistent with torture.
For an example of policy that influences the use of torture by American psychologists, please see the American Psychological Association Council of Representatives policy released in 2015. For an example of an external review of whether psychologists adhered to the APA ethics and policy please see the Hoffman Report (2015).
Due to differences in political power globally, professional psychological organizations in well-developed countries may have a greater influence on discovering and defining what constitutes torture. Psychological associations in less developed countries may choose to adopt the definitions, standards, and ethical positions regarding torture developed by the APA when they are unable to support research regarding torture themselves within their own culture. The professional associations in well developed countries, such as the APA are likely to have a strong influence in defining the psychology of torture globally.
People within an organization may be influenced to participate in torturing people. The culture and procedures of an organization provide the foundation that allows professionals, such as physicians, to violate the medical code of ethics in a manner that appears to align with and meet the necessary standards of their employment. Annas and Crosby (2015) reported that lawyers provided advance confirmation that physicians who participated in the enhanced interrogation techniques used at CIA sites would be given immunity for their actions, since they were deemed a necessary requirement to protect the country (see also: Milgram experiment).
The physicians assisted by providing medical evaluations to ensure victims were healthy enough to undergo torture, developed methods of torture, ensured victims would survive the torture, and assisted victims to heal following torture procedures. Working in a secret facility with policies and procedures that promoted an expectation that torture and enhanced interrogation practices were required to protect the nation and would not result in negative personal consequences resulted in a setting in which physicians were willing to ignore the Hippocratic oath.
The policies and procedures within the United States military have also been found to produce an environment in which torture and enhanced interrogation techniques were used. Although the military has an excellent process for recruiting and training interrogators who use non-abusive techniques successfully, changes in funding resulted in fewer highly trained interrogators being available. As more interrogators were recruited after 9/11, they were not as rigorously assessed, trained, or mentored, and did not demonstrate the same abilities as the previous generation of military interrogators. In addition, the military rank of interrogators is not sufficient to control the decisions made when interrogation is needed: military interrogators may be ordered to perform techniques they know to be inappropriate and ineffective by higher-ranking officers who have not been adequately educated about effective interrogation procedures. The combination of a change in recruitment, reduced education and mentorship, and relatively low rank results in opportunities for torture and abuse to be used during interrogations.
Fictional stories, movies, and television shows may influence the beliefs people have regarding the efficacy of torture as a means for rapidly obtaining life-saving information. People who believe torture is an effective interrogation method are more supportive of using torture and enhanced interrogation techniques than those who do not think it provides accurate information. In addition, the information obtained through torture is also perceived as more valuable by people who support using torture than the same information obtained through non-abusive means of interrogation. These findings suggest confirmation bias (perception is skewed toward what a person already believes) influences the support for torture and is influenced by many commercially available sources of fictional examples.
Pongo de Manseriche
The Pongo de Manseriche is a gorge in northwest Peru. The Marañón River runs through this gorge (and water gap) before it reaches the Amazon Basin.
The Pongo de Manseriche is 3 miles (4.8 km) long, located at 4° 27′ 30″ south latitude and 77° 34′ 51″ west longitude, just below the mouth of the Río Santiago, and between it and the old missionary station of Borja.
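The coordinates above can be converted to the decimal-degree form used by most mapping software with a standard degrees-minutes-seconds calculation (negative values denote south latitude and west longitude); the helper function here is just that conversion:

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float, negative: bool = False) -> float:
    """Convert a degrees/minutes/seconds coordinate to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if negative else value

# 4° 27' 30" S, 77° 34' 51" W -- the location given for the Pongo de Manseriche
lat = dms_to_decimal(4, 27, 30, negative=True)
lon = dms_to_decimal(77, 34, 51, negative=True)
print(round(lat, 4), round(lon, 4))  # -4.4583 -77.5808
```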
According to Captain Carvajal, who descended the Pongo in the little steamer "Napo," in 1868, it is a vast rent in the Andes about deep, narrowing in places to a width of only , the precipices "seeming to close in at the top." Through this dark canyon the Marañón leaps along, at times, at the rate of .
The Pongo de Manseriche was first discovered by the "Adelantado" Joan de Salinas. He fitted out an expedition at Loja in Ecuador, descended the Rio Santiago to the Marañón, passed through the Pongo in 1557 and invaded the country of the Mayna Indians. Later, the missionaries of Cajamarca and Cusco established many missions in the Maynas, and made extensive use of the Pongo de Manseriche as an avenue of communication with their several convents on the Andean plateau. According to their accounts, the huge rent in the Andes, the Pongo, is about five or six miles (10 km) long, and in places not more than 80 feet (25 m) wide, and is a frightful series of torrents and whirlpools interspersed with rocks. There is an ancient tradition of the indigenous people of the vicinity that one of their gods descended the Marañón and another ascended the Amazon to communicate with him. They opened the pass called the Pongo de Manseriche.
Personality psychology
Personality psychology is a branch of psychology that studies personality and its variation among individuals. It is a scientific study which aims to show how people are individually different due to psychological forces. Its areas of focus include:
"Personality" is a dynamic and organized set of characteristics possessed by a person that uniquely influences their environment, cognition, emotions, motivations, and behaviors in various situations. The word "personality" originates from the Latin "persona", which means "mask".
Personality also refers to the pattern of thoughts, feelings, social adjustments, and behaviors consistently exhibited over time that strongly influences one's expectations, self-perceptions, values, and attitudes. Personality also predicts human reactions to other people, problems, and stress. Gordon Allport (1937) described two major ways to study personality: the nomothetic and the idiographic. "Nomothetic psychology" seeks general laws that can be applied to many different people, such as the principle of self-actualization or the trait of extraversion. "Idiographic psychology" is an attempt to understand the unique aspects of a particular individual.
The study of personality has a broad and varied history in psychology with an abundance of theoretical traditions. The major theories include dispositional (trait) perspective, psychodynamic, humanistic, biological, behaviorist, evolutionary, and social learning perspective. However, many researchers and psychologists do not explicitly identify themselves with a certain perspective and instead take an eclectic approach. Research in this area is empirically driven — such as dimensional models, based on multivariate statistics such as factor analysis — or emphasizes theory development, such as that of the psychodynamic theory. There is also a substantial emphasis on the applied field of personality testing. In psychological education and training, the study of the nature of personality and its psychological development is usually reviewed as a prerequisite to courses in abnormal psychology or clinical psychology.
Many of the ideas developed by historical and modern personality theorists stem from the basic philosophical assumptions they hold. The study of personality is not a purely empirical discipline, as it brings in elements of art, science, and philosophy to draw general conclusions. Among the most fundamental philosophical assumptions on which theorists disagree are five: freedom versus determinism, heredity versus environment, uniqueness versus universality, active versus reactive, and optimistic versus pessimistic.
Personality type refers to the psychological classification of different types of people. Personality types are distinguished from personality traits, which come in different degrees. There are many theories of personality, but each theory contains several and sometimes many subtheories. A "theory of personality" constructed by any given psychologist will contain multiple related theories or subtheories, often expanding as more psychologists explore the theory. For example, according to type theories, there are two types of people, introverts and extroverts. According to trait theories, introversion and extroversion are part of a continuous dimension with many people in the middle. The idea of psychological types originated in the theoretical work of Carl Jung, specifically in his 1921 book "Psychologische Typen" ("Psychological Types"), and of William Marston.
Building on the writings and observations of Jung during World War II, Isabel Briggs Myers and her mother, Katharine C. Briggs, delineated personality types by constructing the Myers–Briggs Type Indicator. This model was later used by David Keirsey with a different understanding from Jung, Briggs and Myers. In the former Soviet Union, Lithuanian Aušra Augustinavičiūtė independently derived a model of personality type from Jung's called socionics.
A theory can also be considered an "approach" to personality or psychology, and is generally referred to as a model. Jung's model is an older and more theoretical approach to personality, accepting extroversion and introversion as basic psychological orientations in connection with two pairs of psychological functions: the perceiving functions (sensing and intuition) and the judging functions (thinking and feeling).
Briggs and Myers also added another personality dimension to their type indicator to measure whether a person prefers to use a judging or perceiving function when interacting with the external world. Therefore, they included questions designed to indicate whether someone wishes to come to conclusions (judgement) or to keep options open (perception).
This personality typology has some aspects of a trait theory: it explains people's behavior in terms of opposite fixed characteristics. In these more traditional models, the sensing/intuition preference is considered the most basic, dividing people into "N" (intuitive) or "S" (sensing) personality types. An "N" is further assumed to be guided either by thinking or feeling and divided into the "NT" (scientist, engineer) or "NF" (author, humanitarian) temperament. An "S", in contrast, is assumed to be guided more by the judgment/perception axis and thus divided into the "SJ" (guardian, traditionalist) or "SP" (performer, artisan) temperament. These four are considered basic, with the other two factors in each case (including always extraversion/introversion) less important. Critics of this traditional view have observed that the types can be quite strongly stereotyped by professions (although neither Myers nor Keirsey engaged in such stereotyping in their type descriptions), and thus may arise more from the need to categorize people for purposes of guiding their career choice. This among other objections led to the emergence of the five-factor view, which is less concerned with behavior under work conditions and more concerned with behavior in personal and emotional circumstances. (The MBTI is not designed to measure the "work self", but rather what Myers and McCaulley called the "shoes-off self.")
Type A and Type B personality theory: During the 1950s, Meyer Friedman and his co-workers defined what they called Type A and Type B behavior patterns. They theorized that intense, hard-driving Type A personalities had a higher risk of coronary disease because they are "stress junkies." Type B people, on the other hand, tended to be relaxed, less competitive, and lower in risk. There was also a Type AB mixed profile.
John L. Holland's "RIASEC" vocational model, commonly referred to as the "Holland Codes", stipulates that six personality types lead people to choose their career paths. In this circumplex model, the six types are represented as a hexagon, with adjacent types more closely related than those more distant. The model is widely used in vocational counseling.
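The circumplex idea can be made concrete with a short sketch. Note this is only an illustration of hexagon adjacency, not Holland's actual scoring procedure; the function and the simple step-counting metric are our own:

```python
# The six Holland types in their conventional hexagon order.
RIASEC = ["Realistic", "Investigative", "Artistic",
          "Social", "Enterprising", "Conventional"]

def hexagon_distance(a: str, b: str) -> int:
    """Steps around the hexagon between two types (0 = same, 3 = opposite).

    Adjacent types (distance 1) are considered more closely related
    than types across the hexagon (distance 3).
    """
    i, j = RIASEC.index(a), RIASEC.index(b)
    d = abs(i - j)
    return min(d, 6 - d)  # wrap around the hexagon

print(hexagon_distance("Realistic", "Investigative"))  # 1 (adjacent)
print(hexagon_distance("Realistic", "Social"))         # 3 (opposite)
```

In vocational counseling, such distances are used informally: a person whose interests span adjacent types (e.g. Realistic-Investigative) has a more "consistent" profile than one spanning opposite types.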
Trnka et al. (2016) pointed out "the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the circumplex model" and constructed a 3D hypercube-projection.
Eduard Spranger's personality model consists of six (or, in some revisions, six plus one) basic types of "value attitudes", described in his book "Types of Men" ("Lebensformen"; Halle (Saale): Niemeyer, 1914; English translation by P. J. W. Pigors, New York: G. E. Stechert Company, 1928).
The Enneagram of Personality is a model of human personality which is principally used as a typology of nine interconnected personality types. It has been criticized as being subject to interpretation, making it difficult to test or validate scientifically.
Perhaps the most ancient attempt at personality psychology is the personality typology outlined by the Indian Buddhist Abhidharma schools. This typology mostly focuses on negative personal traits (greed, hatred, and delusion) and the corresponding positive meditation practices used to counter those traits.
Psychoanalytic theories explain human behavior in terms of the interaction of various components of personality. Sigmund Freud was the founder of this school of thought. Freud drew on the physics of his day (thermodynamics) to coin the term psychodynamics. Based on the idea of converting heat into mechanical energy, he proposed psychic energy could be converted into behavior. Freud's theory places central importance on dynamic, unconscious psychological conflicts.
Freud divides human personality into three significant components: the id, ego and super-ego. The id acts according to the "pleasure principle", demanding immediate gratification of its needs regardless of external environment; the ego then must emerge in order to realistically meet the wishes and demands of the id in accordance with the outside world, adhering to the "reality principle". Finally, the superego (conscience) inculcates moral judgment and societal rules upon the ego, thus forcing the demands of the id to be met not only realistically but morally. The superego is the last function of the personality to develop, and is the embodiment of parental/social ideals established during childhood. According to Freud, personality is based on the dynamic interactions of these three components.
The channeling and release of sexual (libidinal) and aggressive energies, which ensue from the "Eros" (sex; instinctual self-preservation) and "Thanatos" (death; instinctual self-annihilation) drives respectively, are major components of his theory. It is important to note that Freud's broad understanding of sexuality included all kinds of pleasurable feelings experienced by the human body.
Freud proposed five psychosexual stages of personality development. He believed adult personality is dependent upon early childhood experiences and largely determined by age five. Fixations that develop during the infantile stage contribute to adult personality and behavior.
One of Sigmund Freud's earlier associates, Alfred Adler, agreed with Freud that early childhood experiences are important to development, and believed that birth order may influence personality development. Adler believed that the oldest child was the individual who would set high achievement goals in order to regain the attention lost when the younger siblings were born. He believed the middle children were competitive and ambitious, reasoning that this behavior was motivated by the idea of surpassing the firstborn's achievements. He added, however, that the middle children were often not as concerned about the glory attributed to their behavior. He also believed the youngest would be more dependent and sociable. Adler finished by surmising that an only child loves being the center of attention and matures quickly, but in the end fails to become independent.
Heinz Kohut built on Freud's idea of transference, using narcissism as a model of how people develop their sense of self. Narcissism is an exaggerated sense of self that is believed to exist in order to protect one's low self-esteem and sense of worthlessness. Kohut had a significant impact on the field by extending Freud's theory of narcissism and introducing what he called the 'self-object transferences' of mirroring and idealization. In other words, children need to idealize and emotionally "sink into" and identify with the idealized competence of admired figures such as parents or older siblings. They also need to have their self-worth mirrored by these people. These experiences allow them to learn the self-soothing and other skills that are necessary for the development of a healthy sense of self.
Another important figure in the world of personality theory is Karen Horney, who is credited with the development of feminist psychology. She disagreed with Freud on some key points, one being that women's personalities are not merely a function of "penis envy": girls have separate and different psychic lives unrelated to how they feel about their fathers or primary male role models. She described three basic neurotic needs, "basic anxiety", "basic hostility", and "basic evil", and posited that in response to any anxiety an individual would take one of three approaches: moving toward people, moving away from people, or moving against people. These three approaches give rise to varying personality types and characteristics. She also placed a high premium on concepts like the overvaluation of love and of romantic partners.
Behaviorists explain personality in terms of the effects external stimuli have on behavior. The approaches used to analyze the behavioral aspect of personality are known as behavioral theories or learning-conditioning theories. These approaches were a radical shift away from Freudian philosophy. One of the major tenets of this concentration of personality psychology is a strong emphasis on scientific thinking and experimentation. This school of thought was developed by B. F. Skinner, who put forth a model which emphasized the mutual interaction of the person, or "the organism", with its environment. Skinner believed children do bad things because the behavior obtains attention that serves as a reinforcer. For example, a child cries because the child's crying in the past has led to attention. These are the "response" and "consequence": the response is the child crying, and the attention the child gets is the reinforcing consequence. According to this theory, people's behavior is formed by processes such as operant conditioning. Skinner put forward a "three term contingency model" which helped promote analysis of behavior based on the "Stimulus - Response - Consequence Model", in which the critical question is: "Under which circumstances or antecedent 'stimuli' does the organism engage in a particular behavior or 'response', which in turn produces a particular 'consequence'?"
Richard Herrnstein extended this theory by accounting for attitudes and traits. An attitude develops as the response strength (the tendency to respond) in the presence of a group of stimuli becomes stable. Rather than describing conditionable traits in non-behavioral language, response strength in a given situation accounts for the environmental portion. Herrnstein also saw traits as having a large genetic or biological component, as do most modern behaviorists.
Ivan Pavlov is another notable influence. He is well known for his classical conditioning experiments involving dogs, which led him to discover the foundation of behaviorism.
In cognitive theory, behavior is explained as guided by cognitions (e.g. expectations) about the world, especially those about other people. Cognitive theories are theories of personality that emphasize cognitive processes, such as thinking and judging.
Albert Bandura, a social learning theorist, suggested that the forces of memory and emotions worked in conjunction with environmental influences. Bandura was known mostly for his "Bobo doll experiment". In these experiments, Bandura videotaped a college student kicking and verbally abusing a Bobo doll. He then showed this video to a class of kindergarten children who were getting ready to go out to play. When they entered the play room, they saw Bobo dolls and some hammers, and the people observing the children at play saw a group of them beating the doll. He called this study and his findings observational learning, or modeling.
Early examples of approaches to cognitive style are listed by Baron (1982). These include Witkin's (1965) work on field dependency, Gardner's (1953) discovery that people have consistent preferences for the number of categories they use to categorise heterogeneous objects, and Block and Petersen's (1955) work on confidence in line discrimination judgments. Baron relates the early development of cognitive approaches to personality to ego psychology. More central to this field have been concepts such as attributional style and locus of control.
Various scales have been developed to assess both attributional style and locus of control. Locus of control scales include those used by Rotter and later by Duttweiler, the Nowicki and Strickland (1973) Locus of Control Scale for Children and various locus of control scales specifically in the health domain, most famously that of Kenneth Wallston and his colleagues, The Multidimensional Health Locus of Control Scale. Attributional style has been assessed by the Attributional Style Questionnaire, the Expanded Attributional Style Questionnaire, the Attributions Questionnaire, the Real Events Attributional Style Questionnaire and the Attributional Style Assessment Test.
The recognition that hard work and persistence often result in the attainment of life and academic goals has influenced formal educational and counseling efforts with students of various ages and in various settings since the achievement research of the 1970s. Counseling aimed at encouraging individuals to set ambitious goals and work toward them, while recognizing that external factors may have an impact, often results in the adoption of a more positive achievement style by students and employees, whatever the setting: higher education, the workplace, or justice programming.
Walter Mischel (1999) has also defended a cognitive approach to personality. His work refers to "Cognitive Affective Units", and considers factors such as encoding of stimuli, affect, goal-setting, and self-regulatory beliefs. The term "Cognitive Affective Units" shows how his approach considers affect as well as cognition.
Cognitive-Experiential Self-Theory (CEST) is another cognitive personality theory. Developed by Seymour Epstein, CEST argues that humans operate by way of two independent information processing systems: experiential system and rational system. The experiential system is fast and emotion-driven. The rational system is slow and logic-driven. These two systems interact to determine our goals, thoughts, and behavior.
Personal construct psychology (PCP) is a theory of personality developed by the American psychologist George Kelly in the 1950s. Kelly's fundamental view of personality was that people are like naive scientists who see the world through a particular lens, based on their uniquely organized systems of construction, which they use to anticipate events. But because people are naive scientists, they sometimes employ systems for construing the world that are distorted by idiosyncratic experiences not applicable to their current social situation. A system of construction that chronically fails to characterize and/or predict events, and is not appropriately revised to comprehend and predict one's changing social world, is considered to underlie psychopathology (or mental illness.)
From the theory, Kelly derived a psychotherapy approach and also a technique called "The Repertory Grid Interview" that helped his patients to uncover their own "constructs" with minimal intervention or interpretation by the therapist. The repertory grid was later adapted for various uses within organizations, including decision-making and interpretation of other people's world-views.
Humanistic psychology emphasizes that people have free will and that this plays an active role in determining how they behave. Accordingly, humanistic psychology focuses on subjective experiences of persons as opposed to forced, definitive factors that determine behavior. Abraham Maslow and Carl Rogers were proponents of this view, which is based on the "phenomenal field" theory of Combs and Snygg (1949). Rogers and Maslow were among a group of psychologists that worked together for a decade to produce the "Journal of Humanistic Psychology". This journal was primarily focused on viewing individuals as a whole, rather than focusing solely on separate traits and processes within the individual.
Robert W. White wrote the book "The Abnormal Personality" that became a standard text on abnormal psychology. He also investigated the human need to strive for positive goals like competence and influence, to counterbalance the emphasis of Freud on the pathological elements of personality development.
Maslow spent much of his time studying what he called "self-actualizing persons", those who are "fulfilling themselves and doing the best they are capable of doing". Maslow believed that all who are interested in growth move towards self-actualizing (growth, happiness, satisfaction) views. Many of these people demonstrate a trend in dimensions of their personalities. According to Maslow, the characteristics of self-actualizers include four key dimensions: awareness, honesty, freedom, and trust.
Maslow and Rogers emphasized a view of the person as an active, creative, experiencing human being who lives in the present and subjectively responds to current perceptions, relationships, and encounters. They disagreed with the dark, pessimistic outlook of those in the Freudian psychoanalytic ranks, instead viewing humanistic theories as positive and optimistic proposals which stress the tendency of the human personality toward growth and self-actualization. This progressing self remains the center of its constantly changing world; a world that helps mold the self but does not necessarily confine it. Rather, the self has the opportunity for maturation based on its encounters with this world. This understanding attempts to reduce the acceptance of hopeless redundancy. Humanistic therapy typically relies on the client for information about the past and its effect on the present; therefore, the client dictates the type of guidance the therapist may initiate. This allows for an individualized approach to therapy. Rogers found that patients differ in how they respond to other people. Rogers tried to model a particular approach to therapy: he stressed the reflective or empathetic response. This response type takes the client's viewpoint and reflects back their feeling and the context for it. An example of a reflective response would be, "It seems you are feeling anxious about your upcoming marriage". This response type seeks to clarify the therapist's understanding while also encouraging the client to think more deeply and seek to fully understand the feelings they have expressed.
Biology plays a very important role in the development of personality. The study of the biological level in personality psychology focuses primarily on identifying the role of genetic determinants and how they mold individual personalities. Some of the earliest thinking about possible biological bases of personality grew out of the case of Phineas Gage. In an 1848 accident, a large iron rod was driven through Gage's head, and his personality apparently changed as a result, although descriptions of these psychological changes are usually exaggerated.
In general, patients with brain damage have been difficult to find and study. In the 1990s, researchers began to use electroencephalography (EEG), positron emission tomography (PET), and more recently functional magnetic resonance imaging (fMRI), which is now the most widely used imaging technique to help localize personality traits in the brain.
Ever since the Human Genome Project allowed for a much more in-depth understanding of genetics, there has been an ongoing controversy involving heritability, personality traits, and environmental versus genetic influence on personality. The human genome is known to play a role in the development of personality.
Previously, genetic personality studies focused on specific genes correlating to specific personality traits. Today's view of the gene-personality relationship focuses primarily on the activation and expression of genes related to personality and forms part of what is referred to as behavioural genetics. Genes provide numerous options for varying cells to be expressed; however, the environment determines which of these are activated. Many studies have noted this relationship in varying ways in which our bodies can develop, but the interaction between genes and the shaping of our minds and personality is also relevant to this biological relationship.
DNA-environment interactions are important in the development of personality because this relationship determines what part of the DNA code is actually made into proteins that will become part of an individual. While different choices are made available by the genome, in the end, the environment is the ultimate determinant of what becomes activated. Small changes in DNA in individuals are what lead to the uniqueness of every person as well as differences in looks, abilities, brain functioning, and all the factors that culminate to develop a cohesive personality.
Cattell and Eysenck have proposed that genetics have a strong influence on personality. A large part of the evidence linking genetics and the environment to personality has come from twin studies. This "twin method" compares levels of similarity in personality using genetically identical twins. One of the first of these twin studies measured 800 pairs of twins, studied numerous personality traits, and determined that identical twins are most similar in their general abilities. Personality similarities were found to be weaker for self-concepts, goals, and interests.
Twin studies have also been important in the creation of the five factor personality model: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Neuroticism and extraversion are the two most widely studied traits. A person that may fall into the extravert category can display characteristics such as impulsiveness, sociability, and activeness. A person falling into the neuroticism category may be more likely to be moody, anxious, or irritable. Identical twins, however, have higher correlations in personality traits than fraternal twins. One study measuring genetic influence on twins in five different countries found that the correlations for identical twins were .50, while for fraternal they were about .20. It is suggested that heredity and environment interact to determine one's personality.
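Correlations like these are often turned into a rough heritability estimate using Falconer's formula, h² = 2(r_MZ − r_DZ), which doubles the gap between identical (MZ) and fraternal (DZ) twin correlations. The short sketch below applies it to the figures cited above; the function name is ours, and the formula is a simplification that ignores shared-environment and assortative-mating complications:

```python
def falconer_heritability(r_identical: float, r_fraternal: float) -> float:
    """Falconer's estimate of broad heritability: h^2 = 2 * (r_MZ - r_DZ).

    r_identical: trait correlation between identical (monozygotic) twins
    r_fraternal: trait correlation between fraternal (dizygotic) twins
    """
    return 2 * (r_identical - r_fraternal)

# Correlations cited in the five-country study above: .50 identical, .20 fraternal.
h2 = falconer_heritability(0.50, 0.20)
print(f"Estimated heritability: {h2:.2f}")  # Estimated heritability: 0.60
```

The remaining variance (here about 40%) is attributed to environmental influences and measurement error, which is consistent with the text's conclusion that heredity and environment interact to determine personality.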
Charles Darwin is the founder of the theory of evolution by natural selection, and the evolutionary approach to personality psychology is based on this theory. It examines how individual personality differences are shaped by natural selection. Through natural selection, organisms change over time through adaptation and selection. Traits develop, and certain genes come into expression, based on an organism's environment and on how these traits aid in the organism's survival and reproduction.
Polymorphisms, such as gender and blood type, are forms of diversity which evolve to benefit a species as a whole. The theory of evolution has wide-ranging implications on personality psychology. Personality viewed through the lens of evolutionary psychology places a great deal of emphasis on specific traits that are most likely to aid in survival and reproduction, such as conscientiousness, sociability, emotional stability, and dominance. The social aspects of personality can be seen through an evolutionary perspective. Specific character traits develop and are selected for because they play an important and complex role in the social hierarchy of organisms. Such characteristics of this social hierarchy include the sharing of important resources, family and mating interactions, and the harm or help organisms can bestow upon one another.
In the 1930s, John Dollard and Neal Elgar Miller met at Yale University, and began an attempt to integrate drives (see Drive theory), into a theory of personality, basing themselves on the work of Clark Hull. They began with the premise that personality could be equated with the habitual responses exhibited by an individual – their habits. From there, they determined that these habitual responses were built on secondary, or acquired drives.
Secondary drives are internal needs directing the behaviour of an individual that result from learning. Acquired drives are learned, by and large, in the manner described by classical conditioning: when we are in a certain environment and experience a strong response to a stimulus, we internalize cues from that environment. When we find ourselves in an environment with similar cues, we begin to act in anticipation of a similar stimulus. Thus, we are likely to experience anxiety in an environment with cues similar to one where we have experienced pain or fear – such as the dentist's office.
Secondary drives are built on primary drives, which are biologically driven and motivate us to act with no prior learning process – such as hunger, thirst, or the need for sexual activity. However, secondary drives are thought to represent more specific elaborations of primary drives, behind which the functions of the original primary drive continue to exist. Thus, the primary drives of fear and pain exist behind the acquired drive of anxiety. Secondary drives can be based on multiple primary drives and even on other secondary drives; this is said to give them strength and persistence. Examples include the need for money, which was conceptualized as arising from multiple primary drives such as the drive for food and warmth, as well as from secondary drives such as imitativeness (the drive to do as others do) and anxiety.
Secondary drives vary based on the social conditions under which they were learned – such as culture. Dollard and Miller used the example of food, stating that the primary drive of hunger manifested itself behind the learned secondary drive of an appetite for a specific type of food, which was dependent on the culture of the individual.
Secondary drives are also explicitly social, representing a manner in which we convey our primary drives to others. Indeed, many primary drives are actively repressed by society (such as the sexual drive). Dollard and Miller believed that the acquisition of secondary drives was essential to childhood development. As children develop, they learn not to act on their primary drives, such as hunger but acquire secondary drives through reinforcement. Friedman and Schustack describe an example of such developmental changes, stating that if an infant engaging in an active orientation towards others brings about the fulfillment of primary drives, such as being fed or having their diaper changed, they will develop a secondary drive to pursue similar interactions with others – perhaps leading to an individual being more gregarious. Dollard and Miller's belief in the importance of acquired drives led them to reconceive Sigmund Freud's theory of psychosexual development. They found themselves to be in agreement with the timing Freud used but believed that these periods corresponded to the successful learning of certain secondary drives.
Dollard and Miller gave many examples of how secondary drives impact our habitual responses – and by extension our personalities, including anger, social conformity, imitativeness or anxiety, to name a few. In the case of anxiety, Dollard and Miller note that people who generalize the situation in which they experience the anxiety drive will experience anxiety far more than they should. These people are often anxious all the time, and anxiety becomes part of their personality. This example shows how drive theory can have ties with other theories of personality – many of them look at the trait of neuroticism or emotional stability in people, which is strongly linked to anxiety.
There are two major types of personality tests, projective and objective.
"Projective tests" assume personality is primarily unconscious and assess individuals by how they respond to an ambiguous stimulus, such as an ink blot. Projective tests have been in use for about 60 years and continue to be used today. Examples of such tests include the Rorschach test and the Thematic Apperception Test.
The Rorschach Test involves showing an individual a series of note cards with ambiguous ink blots on them. The individual being tested is asked to provide interpretations of the blots on the cards by stating everything that the ink blot may resemble based on their personal interpretation. The therapist then analyzes their responses. Rules for scoring the test have been covered in manuals that cover a wide variety of characteristics such as content, originality of response, location of "perceived images" and several other factors. Using these specific scoring methods, the therapist will then attempt to relate test responses to attributes of the individual's personality and their unique characteristics. The idea is that unconscious needs will come out in the person's response, e.g. an aggressive person may see images of destruction.
The Thematic Apperception Test (also known as the TAT) involves presenting individuals with vague pictures/scenes and asking them to tell a story based on what they see. Common examples of these "scenes" include images that may suggest family relationships or specific situations, such as a father and son or a man and a woman in a bedroom. Responses are analyzed for common themes. Responses unique to an individual are theoretically meant to indicate underlying thoughts, processes, and potentially conflicts present within the individual. Responses are believed to be directly linked to unconscious motives. There is very little empirical evidence available to support these methods.
"Objective tests" assume personality is consciously accessible and that it can be measured by self-report questionnaires. Research on psychological assessment has generally found objective tests to be more valid and reliable than projective tests. Critics have pointed to the Forer effect to suggest some of these appear to be more accurate and discriminating than they really are. Issues with these tests include false reporting because there is no way to tell if an individual is answering a question honestly or accurately.
The Myers-Briggs Type Indicator (also known as the MBTI) is a self-report questionnaire based on Carl Jung's type theory. However, Myers and Briggs modified Jung's theory into their own, disregarding certain processes held in the unconscious mind and the impact they have on personality.
Psychology has traditionally defined personality through its behavioral patterns, and more recently with neuroscientific studies of the brain. In recent years, some psychologists have turned to the study of inner experiences for insight into personality as well as individuality. Inner experiences are the thoughts and feelings in response to an immediate phenomenon; another term for them is qualia. Being able to understand inner experiences assists in understanding how humans behave, act, and respond. Defining personality using inner experiences has been expanding because relying solely on behavioral principles to explain one's character may seem incomplete. Behavioral methods allow the subject to be observed by an observer, whereas with inner experiences the subject is its own observer.
Descriptive experience sampling (DES), developed by psychologist Russell Hurlburt, is an idiographic method used to examine inner experiences. It relies on an introspective technique that allows an individual's inner experiences and characteristics to be described and measured. A beep notifies the subject to record their experience at that exact moment, and 24 hours later an interview is given based on all the experiences recorded. DES has been used with subjects diagnosed with schizophrenia and depression, and has been crucial to studying the inner experiences of those diagnosed with common psychiatric disorders.
Articulated thoughts in stimulated situations (ATSS): ATSS is a paradigm which was created as an alternative to the TA (think aloud) method. This method assumes that people have continuous internal dialogues that can be naturally attended to. ATSS also assesses a person's inner thoughts as they verbalize their cognitions. In this procedure, subjects listen to a scenario via a video or audio player and are asked to imagine that they are in that specific situation. Later, they are asked to articulate their thoughts as they occur in reaction to the playing scenario. This method is useful in studying emotional experience given that the scenarios used can influence specific emotions. Most importantly, the method has contributed to the study of personality. In a study conducted by Rayburn and Davison (2002), subjects’ thoughts and empathy toward anti-gay hate crimes were evaluated. The researchers found that participants showed more aggressive intentions towards the offender in scenarios which mimicked hate crimes.
Experimental method: This method is an experimental paradigm used to study human experiences involved in the studies of sensation and perception, learning and memory, motivation, and biological psychology. The experimental psychologist usually deals with intact organisms, although studies are often conducted with organisms modified by surgery, radiation, drug treatment, or long-standing deprivations of various kinds, or with organisms that naturally present organic abnormalities or emotional disorders. Economists and psychologists have developed a variety of experimental methodologies to elicit and assess individual attitudes, in which each emotion differs between individuals. The results are then gathered and quantified to determine whether specific experiences have any common factors. This method seeks clarity about the experience, removes biases, and helps uncover the meaning behind the experience to see whether it can be generalized.
Pronoun
In linguistics and grammar, a pronoun is a word that substitutes for a noun or noun phrase. It is a particular case of a pro-form.
Pronouns have traditionally been regarded as one of the parts of speech, but some modern theorists would not consider them to form a single class, in view of the variety of functions they perform cross-linguistically. An example of a pronoun is "you", which can be either singular or plural. Subtypes include personal and possessive pronouns, reflexive and reciprocal pronouns, demonstrative pronouns, relative and interrogative pronouns, and indefinite pronouns.
The use of pronouns often involves anaphora, where the meaning of the pronoun is dependent on an antecedent. For example, in the sentence "That poor man looks as if he needs a new coat", the antecedent of the pronoun "he" is dependent on "that poor man".
The adjective associated with "pronoun" is "pronominal". A pronominal is also a word or phrase that acts as a pronoun. For example, in "That's not the one I wanted", the phrase "the one" (containing the prop-word "one") is a pronominal.
Pronouns ("antōnymía") are listed as one of eight parts of speech in "The Art of Grammar", a treatise on Greek grammar attributed to Dionysius Thrax and dating from the 2nd century BC. The pronoun is described there as "a part of speech substitutable for a noun and marked for a person." Pronouns continued to be regarded as a part of speech in Latin grammar (the Latin term being "prōnōmen", from which the English name – through Middle French – ultimately derives), and thus in the European tradition generally.
In more modern approaches, pronouns are less likely to be considered to be a single word class, because of the many different syntactic roles that they play, as represented by the various different types of pronouns listed in the previous sections.
Linguists in particular have trouble classifying pronouns in a single category, and some do not agree that pronouns substitute nouns or noun categories. Certain types of pronouns are often identical or similar in form to determiners with related meaning; some English examples are given in the table on the right. This observation has led some linguists, such as Paul Postal, to regard pronouns as determiners that have had their following noun or noun phrase deleted. (Such patterning can even be claimed for certain personal pronouns; for example, "we" and "you" might be analyzed as determiners in phrases like "we Brits" and "you tennis players".) Other linguists have taken a similar view, uniting pronouns and determiners into a single class, sometimes called "determiner-pronoun", or regarding determiners as a subclass of pronouns or vice versa. The distinction may be considered to be one of subcategorization or valency, rather like the distinction between transitive and intransitive verbs – determiners take a noun phrase complement like transitive verbs do, while pronouns do not. This is consistent with the determiner phrase viewpoint, whereby a determiner, rather than the noun that follows it, is taken to be the head of the phrase. Cross-linguistically, it seems as though pronouns share 3 distinct categories: point of view, person, and number. The breadth of each subcategory however tends to differ among languages.
The use of pronouns often involves anaphora, where the meaning of the pronoun is dependent on another referential element. The referent of the pronoun is often the same as that of a preceding (or sometimes following) noun phrase, called the antecedent of the pronoun. The grammatical behavior of certain types of pronouns, and in particular their possible relationship with their antecedents, has been the focus of studies in binding, notably in the Chomskyan government and binding theory. In this binding context, reflexive and reciprocal pronouns in English (such as "himself" and "each other") are referred to as anaphors (in a specialized restricted sense) rather than as pronominal elements. Under binding theory, specific principles apply to different sets of pronouns.
In English, reflexive and reciprocal pronouns must adhere to Principle A: an anaphor (reflexive or reciprocal, such as "each other") must be bound in its governing category (roughly, the clause). Therefore, in syntactic structure it must be lower in structure (it must have an antecedent) and have a direct relationship with its referent. This is called a C-command relationship. For instance, we see that "John cut himself" is grammatical, but "Himself cut John" is not, despite having identical arguments, since "himself", the reflexive, must be lower in structure than "John", its referent. Additionally, examples like "John said that Mary cut himself" are ungrammatical because there is an intermediary noun, "Mary", that disallows the two referents from having a direct relationship.
On the other hand, personal pronouns (such as "him" or "them") must adhere to Principle B: a pronoun must be free (i.e., not bound) within its governing category (roughly, the clause). This means that although the pronouns can have a referent, they cannot have a direct relationship with the referent where the referent selects the pronoun. For instance, "John said Mary cut him" is grammatical because the two co-referents, "John" and "him" are separated structurally by "Mary". This is why a sentence like "John cut him" where "him" refers to "John" is ungrammatical.
The type of binding that applies to subsets of pronouns varies cross-linguistically. For instance, in German linguistics, pronouns can be split into two distinct categories — personal pronouns and d-pronouns. Although personal pronouns act identically to that of English personal pronouns (i.e. follow Principle B), d-pronouns follow yet another principle, Principle C, and function similarly to nouns in that they cannot have a direct relationship to an antecedent.
The following sentences give examples of particular types of pronouns used with antecedents:
Some other types, such as indefinite pronouns, are usually used without antecedents. Relative pronouns are used without antecedents in free relative clauses. Even third-person personal pronouns are sometimes used without antecedents ("unprecursed") – this applies to special uses such as dummy pronouns and generic "they", as well as cases where the referent is implied by the context.
The table below lists English pronouns across a number of different syntactic contexts (Subject, Object, Possessive, Reflexive) according to the following features:
In addition to the personal pronouns exemplified in the above table, English also has other pronoun types, including demonstrative, relative, indefinite, and interrogative pronouns, as listed in the following table. For more detailed discussion, see the following subsections.
Personal pronouns may be classified by person, number, gender and case. English has three persons (first, second and third) and two numbers (singular and plural); in the third person singular there are also distinct pronoun forms for male, female and neuter gender. Principal forms are shown in the adjacent table (see also English personal pronouns).
English personal pronouns have two cases, "subject" and "object". Subject pronouns are used in subject position ("I like to eat chips, but she does not"). Object pronouns are used for the object of a verb or preposition ("John likes me but not her").
Other distinct forms found in some languages include:
Possessive pronouns are used to indicate possession (in a broad sense). Some occur as independent noun phrases: "mine", "yours", "hers", "ours", "theirs". An example is: "Those clothes are mine." Others act as a determiner and must accompany a noun: "my", "your", "her", "our", "your", "their", as in: "I lost my wallet." ("His" and "its" can fall into either category, although "its" is nearly always found in the second.) Those of the second type have traditionally also been described as possessive adjectives, and in more modern terminology as possessive determiners. The term "possessive pronoun" is sometimes restricted to the first type. Both types replace possessive noun phrases. As an example, "Their crusade to capture our attention" could replace "The advertisers' crusade to capture our attention."
Reflexive pronouns are used when a person or thing acts on itself, for example, "John cut himself." In English they all end in "-self" or "-selves" and must refer to a noun phrase elsewhere in the same clause.
Reciprocal pronouns refer to a reciprocal relationship ("each other", "one another"). They must refer to a noun phrase in the same clause. An example in English is: "They do not like each other." In some languages, the same forms can be used as both reflexive and reciprocal pronouns.
Demonstrative pronouns (in English, "this", "that" and their plurals "these", "those") often distinguish their targets by pointing or some other indication of position; for example, "I'll take these." They may also be "anaphoric", depending on an earlier expression for context, for example, "A kid actor would try to be all sweet, and who needs that?"
Indefinite pronouns, the largest group of pronouns, refer to one or more unspecified persons or things. One group in English includes compounds of "some-", "any-", "every-" and "no-" with "-thing", "-one" and "-body", for example: "Anyone can do that." Another group, including "many", "more", "both", and "most", can appear alone or followed by "of".
Relative pronouns in English include "who", "whom", "whose", "what", "which" and "that". They rely on an antecedent, and refer back to people or things previously mentioned: "People who smoke should quit now." They are used in relative clauses. Relative pronouns can also be used as complementizers.
Relative pronouns can be used in an interrogative setting as interrogative pronouns. Interrogative pronouns ask which person or thing is meant. In reference to a person, one may use "who" (subject), "whom" (object) or "whose" (possessive); for example, "Who did that?" In colloquial speech, "whom" is generally replaced by "who". English non-personal interrogative pronouns ("which" and "what") have only one form.
In English and many other languages (e.g. French and Czech), the sets of relative and interrogative pronouns are nearly identical. Compare English: "Who is that?" (interrogative) and "I know the woman who came" (relative). In some other languages, interrogative pronouns and indefinite pronouns are frequently identical; for example, in Standard Chinese the same word means "what?" as well as "something" or "anything".
Though the personal pronouns described above are the "contemporary" English pronouns, older forms of "modern" English (as used by Shakespeare, for example) use a slightly different set of personal pronouns as shown in the table. The difference is entirely in the second person. Though one would rarely find these older forms used in literature from recent centuries, they are nevertheless considered "modern".
In English, kin terms like "mother", "uncle", and "cousin" are a distinct word class from pronouns; however, many Australian Aboriginal languages have more elaborate systems of encoding kinship in language, including special kin forms of pronouns. In Murrinh-patha, for example, when selecting a nonsingular exclusive pronoun to refer to a group, the speaker will assess whether or not the members of the group belong to a common class of gender or kinship. If all of the members of the referent group are male, the MASCULINE form will be selected; if at least one is female, the FEMININE is selected; but if all the members are in a sibling-like kinship relation, a third SIBLING form is selected. In Arabana-Wangkangurru, the speaker will use entirely different sets of pronouns depending on whether the speaker and the referent are or are not in a common moiety. See the following example:
See Australian Aboriginal kinship for more details.
Some special uses of personal pronouns include:
Pendulum clock
A pendulum clock is a clock that uses a pendulum, a swinging weight, as its timekeeping element. The advantage of a pendulum for timekeeping is that it is a harmonic oscillator: it swings back and forth in a precise time interval dependent on its length, and resists swinging at other rates. From its invention in 1656 by Christiaan Huygens until the 1930s, the pendulum clock was the world's most precise timekeeper, accounting for its widespread use. Throughout the 18th and 19th centuries pendulum clocks in homes, factories, offices and railroad stations served as primary time standards for scheduling daily life, work shifts, and public transportation, and their greater accuracy allowed for the faster pace of life which was necessary for the Industrial Revolution. The home pendulum clock was replaced by cheaper synchronous electric clocks in the 1930s and '40s, and pendulum clocks are now kept mostly for their decorative and antique value.
Pendulum clocks must be stationary to operate; any motion or accelerations will affect the motion of the pendulum, causing inaccuracies, so other mechanisms must be used in portable timepieces.
The first pendulum clock was invented in 1656 by Dutch scientist and inventor Christiaan Huygens, and patented the following year. Huygens contracted the construction of his clock designs to clockmaker Salomon Coster, who actually built the clock. Huygens was inspired by investigations of pendulums by Galileo Galilei beginning around 1602. Galileo discovered the key property that makes pendulums useful timekeepers: isochronism, which means that the period of swing of a pendulum is approximately the same for different-sized swings. Galileo had the idea for a pendulum clock in 1637, which was partly constructed by his son in 1649, but neither lived to finish it. The introduction of the pendulum, the first harmonic oscillator used in timekeeping, increased the accuracy of clocks enormously, from about 15 minutes per day to 15 seconds per day, leading to their rapid spread as existing 'verge and foliot' clocks were retrofitted with pendulums.
These early clocks, due to their verge escapements, had wide pendulum swings of 80–100°. In his 1673 analysis of pendulums, "Horologium Oscillatorium", Huygens showed that wide swings made the pendulum inaccurate, causing its period, and thus the rate of the clock, to vary with unavoidable variations in the driving force provided by the movement. Clockmakers' realization that only pendulums with small swings of a few degrees are isochronous motivated the invention of the anchor escapement by Robert Hooke around 1658, which reduced the pendulum's swing to 4–6°. The anchor became the standard escapement used in pendulum clocks. In addition to increased accuracy, the anchor's narrow pendulum swing allowed the clock's case to accommodate longer, slower pendulums, which needed less power and caused less wear on the movement. The seconds pendulum (also called the Royal pendulum), 0.994 m (39.1 in) long, in which the time period is two seconds, became widely used in quality clocks. The long narrow clocks built around these pendulums, first made by William Clement around 1680, became known as grandfather clocks. The increased accuracy resulting from these developments caused the minute hand, previously rare, to be added to clock faces beginning around 1690.
The 18th and 19th century wave of horological innovation that followed the invention of the pendulum brought many improvements to pendulum clocks. The deadbeat escapement invented in 1675 by Richard Towneley and popularized by George Graham around 1715 in his precision "regulator" clocks gradually replaced the anchor escapement and is now used in most modern pendulum clocks. Observation that pendulum clocks slowed down in summer brought the realization that thermal expansion and contraction of the pendulum rod with changes in temperature was a source of error. This was solved by the invention of temperature-compensated pendulums; the mercury pendulum by George Graham in 1721 and the gridiron pendulum by John Harrison in 1726. With these improvements, by the mid-18th century precision pendulum clocks achieved accuracies of a few seconds per week.
Until the 19th century, clocks were handmade by individual craftsmen and were very expensive. The rich ornamentation of pendulum clocks of this period indicates their value as status symbols of the wealthy. The clockmakers of each country and region in Europe developed their own distinctive styles. By the 19th century, factory production of clock parts gradually made pendulum clocks affordable by middle-class families.
During the Industrial Revolution, daily life was organized around the home pendulum clock. More accurate pendulum clocks, called "regulators", were installed in places of business and railroad stations and used to schedule work and set other clocks. The need for extremely accurate timekeeping in celestial navigation to determine longitude drove the development of the most accurate pendulum clocks, called "astronomical regulators". These precision instruments, installed in naval observatories and kept accurate within a second by observation of star transits overhead, were used to set marine chronometers on naval and commercial vessels. Beginning in the 19th century, astronomical regulators in naval observatories served as primary standards for national time distribution services that distributed time signals over telegraph wires. From 1909, the US National Bureau of Standards (now NIST) based the US time standard on Riefler pendulum clocks, accurate to about 10 milliseconds per day. In 1929 it switched to the Shortt-Synchronome free pendulum clock before phasing in quartz standards in the 1930s.
With an error of around one second per year, the Shortt was the most accurate commercially produced pendulum clock.
Pendulum clocks remained the world standard for accurate timekeeping for 270 years, until the invention of the quartz clock in 1927, and were used as time standards through World War II. The French Time Service used pendulum clocks as part of their ensemble of standard clocks until 1954. The home pendulum clock began to be replaced as a domestic timekeeper during the 1930s and 1940s by the synchronous electric clock, which kept more accurate time because it was synchronized to the oscillation of the electric power grid. The most accurate experimental pendulum clock ever made may be the Littlemore Clock, built by Edward T. Hall in the 1990s (donated in 2003 to the National Watch and Clock Museum, Columbia, Pennsylvania, USA).
The mechanism which runs a mechanical clock is called the movement. The movements of all mechanical pendulum clocks have these five parts:
Additional functions in clocks besides basic timekeeping are called complications. More elaborate pendulum clocks may include these complications:
In "electromechanical pendulum clocks" such as used in mechanical Master clocks the power source is replaced by an electrically powered solenoid that provides the impulses to the pendulum by magnetic force, and the escapement is replaced by a switch or photodetector that senses when the pendulum is in the right position to receive the impulse. These should not be confused with more recent quartz pendulum clocks in which an electronic quartz clock module swings a pendulum. These are not true pendulum clocks because the timekeeping is controlled by a quartz crystal in the module, and the swinging pendulum is merely a decorative simulation.
The pendulum swings with a period that varies with the square root of its effective length. For small swings the period "T", the time for one complete cycle (two swings), is

T = 2π √("L"/"g")
where "L" is the length of the pendulum and "g" is the local acceleration of gravity. All pendulum clocks have a means of adjusting the rate. This is usually an adjustment nut under the pendulum bob which moves the bob up or down on its rod. Moving the bob up reduces the length of the pendulum, reducing the pendulum's period so the clock gains time. In some pendulum clocks, fine adjustment is done with an auxiliary adjustment, which may be a small weight that is moved up or down the pendulum rod. In some master clocks and tower clocks, adjustment is accomplished by a small tray mounted on the rod where small weights are placed or removed to change the effective length, so the rate can be adjusted without stopping the clock.
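The period formula and the bob-adjustment rule above can be checked numerically. A minimal sketch in Python, assuming g ≈ 9.81 m/s² (the 0.994 m length is the seconds pendulum mentioned earlier; the 1 mm adjustment is an illustrative figure):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-swing period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# The seconds pendulum, about 0.994 m long, has a two-second period.
T = pendulum_period(0.994)
print(round(T, 3))  # ≈ 2.0

# Raising the bob shortens the pendulum and makes the clock gain time.
# For a small length change dL, the fractional rate change is dL / (2L),
# so even 1 mm of adjustment is significant over a day:
dL = 0.001                       # bob raised by 1 mm
gain = dL / (2 * 0.994) * 86400  # seconds gained per day
print(round(gain, 1))  # ≈ 43.5
```

The size of that last figure shows why fine adjustment is often done with small auxiliary weights rather than the main nut alone.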
The period of a pendulum increases slightly with the width (amplitude) of its swing. The "rate" of error increases with amplitude, so when limited to small swings of a few degrees the pendulum is nearly "isochronous"; its period is independent of changes in amplitude. Therefore, the swing of the pendulum in clocks is limited to 2° to 4°.
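The size of this amplitude-dependent error can be sketched with the first-order correction term of the pendulum's period expansion, T ≈ T₀(1 + θ²/16), where θ is the swing amplitude in radians (treated here as the semi-amplitude; the figures are illustrative):

```python
import math

def circular_error(amplitude_deg):
    """First-order fractional period increase over the small-angle
    period: theta^2 / 16, with theta the amplitude in radians."""
    theta = math.radians(amplitude_deg)
    return theta ** 2 / 16

# If the amplitude drifted all the way from 2 degrees to 4 degrees,
# the accumulated timekeeping difference would be substantial:
drift = (circular_error(4) - circular_error(2)) * 86400
print(round(drift, 1))  # seconds per day, ≈ 19.7
```

This is why clocks keep the swing not only small but, above all, constant.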
A major source of error in pendulum clocks is thermal expansion; the pendulum rod changes in length slightly with changes in temperature, causing changes in the rate of the clock. An increase in temperature causes the rod to expand, making the pendulum longer, so its period increases and the clock loses time. Many older quality clocks used wooden pendulum rods to reduce this error, as wood expands less than metal.
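The thermal error follows from the same square-root relationship: a fractional length change dL/L changes the rate by half that fraction. A rough sketch, using assumed (but typical) expansion coefficients not taken from the text:

```python
ALPHA_STEEL = 12e-6  # per kelvin; typical linear expansion of steel (assumed)
ALPHA_WOOD = 5e-6    # per kelvin; rough along-grain figure for wood (assumed)

def daily_loss(alpha, delta_T_kelvin):
    """Since T ~ sqrt(L), a fractional length increase dL/L slows the
    clock by (dL/L)/2; expressed here as seconds lost per day."""
    return (alpha * delta_T_kelvin) / 2 * 86400

print(round(daily_loss(ALPHA_STEEL, 1), 2))  # ≈ 0.52 s/day per °C
print(round(daily_loss(ALPHA_WOOD, 1), 2))   # ≈ 0.22 s/day per °C
```

The roughly halved error for wood is consistent with its use in older quality clocks.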
The first pendulum to correct for this error was the "mercury pendulum" invented by George Graham in 1721, which was used in precision regulator clocks into the 20th century. These had a bob consisting of a container of the liquid metal mercury. An increase in temperature would cause the pendulum rod to expand, but the mercury in the container would also expand and its level would rise slightly in the container, moving the center of gravity of the pendulum up toward the pivot. By using the correct amount of mercury, the centre of gravity of the pendulum remained at a constant height, and thus its period remained constant, despite changes in temperature.
The most widely used temperature-compensated pendulum was the gridiron pendulum invented by John Harrison around 1726. This consisted of a "grid" of parallel rods of high-thermal-expansion metal such as zinc or brass and low-thermal-expansion metal such as steel. If properly combined, the length change of the high-expansion rods compensated for the length change of the low-expansion rods, again achieving a constant period of the pendulum with temperature changes.
This type of pendulum became so associated with quality that decorative "fake" gridirons are often seen on pendulum clocks, that have no actual temperature compensation function.
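The required rod proportions follow from setting the two expansions equal. A minimal sketch, assuming round-number expansion coefficients for steel and brass and a seconds-pendulum net length:

```python
# Brass rods act in one direction and steel rods in the other, so the
# bob hangs at L_net = L_steel - L_brass.  Zero net expansion requires
#   alpha_steel * L_steel = alpha_brass * L_brass
A_STEEL, A_BRASS = 12e-6, 19e-6   # per kelvin (assumed values)
L_NET = 0.994                     # seconds pendulum, metres

L_brass = L_NET * A_STEEL / (A_BRASS - A_STEEL)
L_steel = L_NET + L_brass
print(round(L_steel, 3), round(L_brass, 3))  # ≈ 2.698 1.704
```

The brass length needed exceeds the pendulum itself, which is why the gridiron folds it into several parallel rods.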
Beginning around 1900, some of the highest precision scientific clocks had pendulums made of ultra-low-expansion materials such as the nickel steel alloy Invar or fused silica, which required very little compensation for the effects of temperature.
The viscosity of the air through which the pendulum swings will vary with atmospheric pressure, humidity, and temperature. This drag also requires power that could otherwise be applied to extending the time between windings. Traditionally the pendulum bob is made with a narrow streamlined lens shape to reduce air drag, which is where most of the driving power goes in a quality clock. In the late 19th century and early 20th century, pendulums for precision regulator clocks in astronomical observatories were often operated in a chamber that had been pumped to a low pressure to reduce drag and make the pendulum's operation even more accurate by avoiding changes in atmospheric pressure. Fine adjustment of the rate of the clock could be made by slight changes to the internal pressure in the sealed housing.
To keep time accurately, pendulum clocks must be absolutely level. If they are not, the pendulum swings more to one side than the other, upsetting the symmetrical operation of the escapement. This condition can often be heard audibly in the ticking sound of the clock. The ticks or "beats" should be at precisely equally spaced intervals to give a sound of, "tick...tock...tick...tock"; if they are not, and have the sound "tick-tock...tick-tock..." the clock is "out of beat" and needs to be leveled. This problem can easily cause the clock to stop working, and is one of the most common reasons for service calls. A spirit level or watch timing machine can achieve a higher accuracy than relying on the sound of the beat; precision regulators often have a built in spirit level for the task. Older freestanding clocks often have feet with adjustable screws to level them, more recent ones have a leveling adjustment in the movement. Some modern pendulum clocks have 'auto-beat' or 'self-regulating beat adjustment' devices, and don't need this adjustment.
Since the pendulum rate will increase with an increase in gravity, and local gravity varies with latitude and elevation on Earth, precision pendulum clocks must be readjusted to keep time after a move. For example, a pendulum clock moved from sea level to will lose 16 seconds per day. With the most accurate pendulum clocks, even moving the clock to the top of a tall building would cause it to lose measurable time due to lower gravity.
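The gravity sensitivity can be sketched the same way: since T ∝ 1/√g, a fractional drop dg/g slows the clock by half that fraction. Assuming standard gravity and the standard free-air gravity gradient, a hypothetical elevation of 1,200 m reproduces a loss of about 16 seconds per day, consistent with the figure quoted above:

```python
G0 = 9.80665         # standard gravity, m/s^2
FREE_AIR = 3.086e-6  # free-air gravity gradient, (m/s^2) per metre of altitude

def daily_loss_at_altitude(h_m):
    """T ~ 1/sqrt(g): a fractional gravity drop dg/g slows the clock
    by (dg/g)/2; expressed here as seconds lost per day."""
    dg = FREE_AIR * h_m
    return dg / (2 * G0) * 86400

print(round(daily_loss_at_altitude(1200), 1))  # ≈ 16.3
```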
Also called a torsion-spring pendulum, this is a wheel-like mass (most often four spheres on cross spokes) suspended from a vertical strip (ribbon) of spring steel, used as the regulating mechanism in torsion pendulum clocks. Rotation of the mass winds and unwinds the suspension spring, with the energy impulse applied to the top of the spring. With a period of 12–15 seconds, compared to the gravity swing pendulum's period of 0.5–2 seconds, it is possible to make clocks that need to be wound only every 30 days, or even only once a year or more. This type is independent of the local force of gravity but is more affected by temperature changes than an uncompensated gravity-swing pendulum.
A clock requiring only annual winding is sometimes called a "400-Day clock" or "anniversary clock", the latter sometimes given as a wedding memorialisation gift. German firms Schatz and Kieninger & Obergfell (known as "Kundo", from "K und O"), were the main manufacturers of this type of clock. The "perpetual motion" clock, called the Atmos because its mechanism was kept wound by changes in atmospheric temperature, also makes use of a torsion pendulum. In this case the oscillation cycle takes a full 60 seconds.
The escapement is a mechanical linkage that converts the force from the clock's wheel train into impulses that keep the pendulum swinging back and forth. It is the part that makes the "ticking" sound in a working pendulum clock. Most escapements consist of a wheel with pointed teeth called the "escape wheel", which is turned by the clock's wheel train, and surfaces the teeth push against, called "pallets". During most of the pendulum's swing the wheel is prevented from turning because a tooth is resting against one of the pallets; this is called the "locked" state. With each swing of the pendulum, a pallet releases a tooth of the escape wheel. The wheel rotates forward a fixed amount until a tooth catches on the other pallet. These releases allow the clock's wheel train to advance a fixed amount with each swing, moving the hands forward at a constant rate, controlled by the pendulum.
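This tooth-by-tooth advance fixes the gear ratios. For example, with a seconds pendulum each one-second swing releases one tooth, alternating between the two pallets, so the wheel advances one full tooth per two-second cycle; assuming a 30-tooth escape wheel (an illustrative, though common, count), the wheel then turns once per minute and can carry the seconds hand directly:

```python
TEETH = 30      # escape-wheel tooth count (assumed, but a common choice)
period_s = 2.0  # full cycle of a seconds pendulum

# One full tooth of advance per complete pendulum cycle:
seconds_per_revolution = TEETH * period_s
print(seconds_per_revolution)  # 60.0
```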
Although the escapement is necessary, its force disturbs the natural motion of the pendulum, and in precision pendulum clocks this was often the limiting factor on the accuracy of the clock. Different escapements have been used in pendulum clocks over the years to try to solve this problem. In the 18th and 19th century escapement design was at the forefront of timekeeping advances. The anchor escapement (see animation) was the standard escapement used until the 1800s when an improved version, the deadbeat escapement took over in precision clocks. It is used in almost all pendulum clocks today. The remontoire, a small spring mechanism rewound at intervals which serves to isolate the escapement from the varying force of the wheel train, was used in a few precision clocks. In tower clocks the wheel train must turn the large hands on the clock face on the outside of the building, and the weight of these hands, varying with snow and ice buildup, put a varying load on the wheel train. Gravity escapements were used in tower clocks.
By the end of the 19th century specialized escapements were used in the most accurate clocks, called "astronomical regulators", which were employed in naval observatories and for scientific research. The Riefler escapement, used in Clemens-Riefler regulator clocks was accurate to 10 milliseconds per day. Electromagnetic escapements, which used a switch or phototube to turn on a solenoid electromagnet to give the pendulum an impulse without requiring a mechanical linkage, were developed. The most accurate pendulum clock was the Shortt-Synchronome clock, a complicated electromechanical clock with two pendulums developed in 1923 by W.H. Shortt and Frank Hope-Jones, which was accurate to better than one second per year. A slave pendulum in a separate clock was linked by an electric circuit and electromagnets to a master pendulum in a vacuum tank. The slave pendulum performed the timekeeping functions, leaving the master pendulum to swing virtually undisturbed by outside influences. In the 1920s the Shortt-Synchronome briefly became the highest standard for timekeeping in observatories before quartz clocks superseded pendulum clocks as precision time standards.
The indicating system is almost always the traditional dial with moving hour and minute hands. Many clocks have a small third hand indicating seconds on a subsidiary dial. Pendulum clocks are usually designed to be set by opening the glass face cover and manually pushing the minute hand around the dial to the correct time. The minute hand is mounted on a slipping friction sleeve which allows it to be turned on its arbor. The hour hand is driven not from the wheel train but from the minute hand's shaft through a small set of gears, so rotating the minute hand manually also sets the hour hand.
Pendulum clocks were more than simply utilitarian timekeepers; they were status symbols that expressed the wealth and culture of their owners. They evolved in a number of traditional styles, specific to different countries and times as well as their intended use. Case styles somewhat reflect the furniture styles popular during the period. Experts can often pinpoint when an antique clock was made within a few decades by subtle differences in their cases and faces. These are some of the different styles of pendulum clocks: | https://en.wikipedia.org/wiki?curid=24989 |
Programmable logic controller
A programmable logic controller (PLC) or programmable controller is an industrial digital computer which has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, or robotic devices, or any activity that requires high reliability, ease of programming and process fault diagnosis.
PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems.
They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged and easily programmable controllers to replace hard-wired relay logic systems. Since then, they have been widely adopted as high-reliability automation controllers suitable for harsh environments.
A PLC is an example of a "hard" real-time system since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result.
PLCs originated in the late 1960s in the automotive industry in the US and were designed to replace relay logic systems. Before that, control logic for manufacturing was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers.
The hard-wired nature made it difficult for design engineers to alter the process. Changes would require rewiring and careful updating of the documentation. If even one wire were out of place, or one relay failed, the whole system would become faulty. Often technicians would spend hours troubleshooting by examining the schematics and comparing them to existing wiring. When general-purpose computers became available, they were soon applied to control logic in industrial processes. These early computers were unreliable and required specialist programmers and strict control of working conditions, such as temperature, cleanliness, and power quality.
The PLC was developed with several advantages over earlier designs. It tolerated the industrial environment better than computers, and was more reliable, more compact and required less maintenance than relay systems. It was easily extensible with additional I/O modules, while relay systems required complicated hardware changes in the event of reconfiguration. This allowed for easier iteration over manufacturing process design. Compared to a computer, a PLC in a rack format can be more easily extended with additional I/O in the form of I/O cards. With a simple programming language focused on logic and switching operations, it was more user-friendly than computers using general-purpose programming languages. It also allowed its operation to be monitored.
Early PLCs were programmed in ladder logic, which strongly resembled a schematic diagram of relay logic. This program notation was chosen to reduce training demands for the existing technicians. Other PLCs used a form of instruction list programming, based on a stack-based logic solver.
In 1968, GM Hydramatic (the automatic transmission division of General Motors) issued a request for proposals for an electronic replacement for hard-wired relay systems based on a white paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates of Bedford, Massachusetts. The result was the first PLC, built in 1969 and designated the 084 because it was Bedford Associates' eighty-fourth project.
Bedford Associates started a company dedicated to developing, manufacturing, selling, and servicing this new product, which they named Modicon (standing for modular digital controller). One of the people who worked on that project was Dick Morley, who is considered to be the "father" of the PLC. The Modicon brand was sold in 1977 to Gould Electronics and later to Schneider Electric, the current owner.
One of the very first 084 models built is now on display at Schneider Electric's facility in North Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until the 984 made its appearance.
In a parallel development, Odo Josef Struger is also sometimes known as the "father of the programmable logic controller". He was involved in the invention of the Allen-Bradley programmable logic controller between 1958 and 1960 and is credited with coining the PLC acronym. Allen-Bradley (now a brand owned by Rockwell Automation) became a major PLC manufacturer in the United States during his tenure. Struger played a leadership role in developing IEC 61131-3 PLC programming language standards.
Many early PLCs were not capable of graphical representation of the logic, and so it was instead represented as a series of logic expressions in some kind of Boolean format, similar to Boolean algebra. As programming terminals evolved, it became more common for ladder logic to be used, because it was a familiar format used for electro-mechanical control panels. Newer formats, such as state logic and Function Block (which is similar to the way logic is depicted when using digital integrated logic circuits) exist, but they are still not as popular as ladder logic. A primary reason for this is that PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the person writing the logic to see any issues with the timing of the logic sequence more easily than would be possible in other formats.
Up to the mid-1990s, PLCs were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. Some proprietary programming terminals displayed the elements of PLC programs as graphic symbols, but plain ASCII character representations of contacts, coils, and wires were common. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were minimal due to a lack of memory capacity. The oldest PLCs used non-volatile magnetic core memory.
A programmable logic controller consists of:
PLCs require a programming device, which is used to develop and later download the created program into the memory of the controller.
Modern PLCs generally contain a real-time operating system, such as OS-9 or VxWorks.
There are two types of mechanical design for PLC systems. A "single box" or "brick" is a small programmable controller that fits all units and interfaces into one compact casing, although, typically, additional expansion modules for inputs and outputs are available. The second design type, a "modular" PLC, has a chassis (also called a "rack") that provides space for modules with different functions, such as power supply, processor, selection of I/O modules and communication interfaces, which can all be customized for the particular application. Several racks can be administered by a single processor and may have thousands of inputs and outputs. Either a special high-speed serial I/O link or a comparable communication method is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants. Options are also available to mount I/O points directly to the machine and utilize quick-disconnect cables to sensors and valves, saving time for wiring and replacing components.
Discrete (digital) signals can only take an "on" or "off" value (1 or 0, "true" or "false"). Examples of devices providing a discrete signal include limit switches, photoelectric sensors and encoders. Discrete signals are sent using either voltage or current, where specific extreme ranges are designated as "on" and "off". For example, a controller might use a 24 V DC input with values above 22 V DC representing "on", values below 2 V DC representing "off", and intermediate values undefined.
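The thresholding described above can be sketched in a few lines. This is a minimal illustration, not vendor firmware; the 22 V and 2 V thresholds are the ones quoted in the text.

```python
def classify_discrete_input(volts):
    """Classify a 24 V DC discrete input using the thresholds above:
    above 22 V is "on", below 2 V is "off", anything between is undefined."""
    if volts > 22.0:
        return True    # "on"
    if volts < 2.0:
        return False   # "off"
    return None        # intermediate values are undefined

print(classify_discrete_input(24.0))  # True
print(classify_discrete_input(0.5))   # False
print(classify_discrete_input(12.0))  # None
```

The undefined middle band gives the input module noise immunity: a signal must swing well clear of the opposite state before it is reinterpreted.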
Analog signals can use voltage or current that is proportional to the size of the monitored variable and can take any value within their scale. Pressure, temperature, flow, and weight are often represented by analog signals. These are typically interpreted as integer values with various ranges of accuracy depending on the device and the number of bits available to store the data. For example, an analog 0 to 10 V or 4-20 mA input would be converted into an integer value of 0 to 32,767. The PLC will take this value and transpose it into the desired units of the process so the operator or program can read it. Proper integration will also include filter times to reduce noise as well as high and low limits to report faults. Current inputs are less sensitive to electrical noise (e.g. from welders or electric motor starts) than voltage inputs. Distance from the device and the controller is also a concern as the maximum traveling distance of a good quality 0-10V signal is very short compared to the 4-20 mA signal. The 4-20 mA signal can also report if the wire is disconnected along the path as it would return a 0 mA signal.
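The 4-20 mA scaling and broken-wire detection described above can be sketched as follows. The 3.6 mA fault threshold is a hypothetical value chosen for illustration, not one from the article.

```python
def scale_4_20ma(current_ma, low_eng, high_eng):
    """Convert a 4-20 mA analog input into engineering units.

    A reading near 0 mA indicates a disconnected wire; 3.6 mA is a
    hypothetical fault threshold used here for illustration."""
    if current_ma < 3.6:
        raise ValueError("open loop / broken wire")
    fraction = (current_ma - 4.0) / 16.0     # 0.0 at 4 mA, 1.0 at 20 mA
    fraction = min(max(fraction, 0.0), 1.0)  # clamp slight over-range
    return low_eng + fraction * (high_eng - low_eng)

# A 12 mA signal from a 0-100 degC transmitter reads mid-scale:
print(scale_4_20ma(12.0, 0.0, 100.0))  # 50.0
```

Because a healthy loop never drops below 4 mA, a 0 mA reading is unambiguous evidence of a wiring fault, which is the diagnostic advantage the text mentions.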
Some special processes need to work permanently with minimum unwanted downtime. Therefore, it is necessary to design a system that is fault-tolerant and capable of handling the process with faulty modules. In such cases to increase the system availability in the event of hardware component failure, redundant CPU or I/O modules with the same functionality can be added to hardware configuration for preventing total or partial process shutdown due to hardware failure. Other redundancy scenarios could be related to safety-critical processes, for example, large hydraulic presses could require that both PLCs turn on an output before the press can come down in case one output does not turn off properly.
Programmable logic controllers are intended to be used by engineers without a programming background. For this reason, a graphical programming language called Ladder Diagram (LD, LAD) was developed first, which resembles the schematic diagram of a system built with electromechanical relays. It was adopted by many manufacturers and later standardized in the IEC 61131-3 control systems programming standard. It remains widely used, thanks to its simplicity.
The majority of PLC systems today adhere to the IEC 61131-3 standard, which defines two textual programming languages, Structured Text (ST; similar to Pascal) and Instruction List (IL), as well as three graphical languages: Ladder Diagram, Function Block Diagram (FBD) and Sequential Function Chart (SFC). Instruction List was deprecated in the third edition of the standard.
Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to programming languages such as specially adapted dialects of BASIC and C.
While the fundamental concepts of PLC programming are common to all manufacturers, differences in I/O addressing, memory organization, and instruction sets mean that PLC programs are never perfectly interchangeable between different makers. Even within the same product line of a single manufacturer, different models may not be directly compatible.
PLC programs are typically written on a programming device, which can take the form of a desktop console, special software on a personal computer, or a handheld programming device. The program is then downloaded to the PLC directly or over a network. It is stored either in non-volatile flash memory or battery-backed-up RAM. In some programmable controllers, the program is transferred from a personal computer to the PLC through a programming board that writes the program into a removable chip, such as an EPROM.
A program written on a personal computer can be easily copied and backed up on external storage. Manufacturers develop programming software for their controllers. In addition to being able to program PLCs in multiple languages, they provide common features like hardware diagnostics and maintenance, software debugging, and offline simulation.
The program can be uploaded for backup and restoration purposes.
In order to properly understand the operation of a PLC, it is necessary to spend considerable time programming, testing and debugging PLC programs. PLC systems are inherently expensive, and down-time is often very costly. In addition, if a PLC is programmed incorrectly it can result in lost productivity and dangerous conditions. PLC simulation software such as PLCLogix can save time in the design of automated control applications and can also increase the level of safety associated with equipment since many "what if" scenarios can be tried and tested before the system is activated.
This is a programming example in ladder diagram which shows the control system. A ladder diagram is a method of drawing control circuits which pre-dates PLCs. The ladder diagram resembles the schematic diagram of a system built with electromechanical relays.
As an example, say a facility needs to store water in a tank. The water is drawn from the tank by another system, as needed, and our example system must manage the water level in the tank by controlling the valve that refills the tank. Shown are:
In a ladder diagram, the contact symbols represent the state of bits in processor memory, which corresponds to the state of physical inputs to the system. If a discrete input is energized, the memory bit is a 1, and a "normally open" contact controlled by that bit will pass a logic "true" signal on to the next element of the ladder. Therefore, the contacts in the PLC program that "read" or look at the physical switch contacts, in this case, must be "opposite" or open in order to return a TRUE for the closed physical switches. Internal status bits, corresponding to the state of discrete outputs, are also available to the program.
In the example, the physical state of the float switch contacts must be considered when choosing "normally open" or "normally closed" symbols in the ladder diagram. The PLC has two discrete inputs from float switches (Low Level and High Level). Both float switches (normally closed) open their contacts when the water level in the tank is above the physical location of the switch.
When the water level is below both switches, the float switch physical contacts are both closed, and a true (logic 1) value is passed to the Fill Valve output. Water begins to fill the tank. The internal "Fill Valve" contact latches the circuit so that even when the "Low Level" contact opens (as the water passes the lower switch), the fill valve remains on. Since the High Level is also normally closed, water continues to flow as the water level remains between the two switch levels. Once the water level rises enough so that the "High Level" switch is off (opened), the PLC will shut the inlet to stop the water from overflowing; this is an example of seal-in (latching) logic. The output is sealed in until a high-level condition breaks the circuit. After that, the fill valve remains off until the level drops so low that the low-level switch is activated, and the process repeats again.
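The seal-in rung described above can be expressed as a single Boolean expression evaluated once per scan. This is a minimal sketch of the logic only; the function name and the convention that an input is True when its normally closed contact is closed (water below that switch) are assumptions for illustration.

```python
def fill_valve_rung(low_closed, high_closed, fill_valve):
    """One evaluation of the tank-fill rung described above.

    Inputs are True when the (normally closed) float-switch contacts are
    closed, i.e. when the water is BELOW that switch. The rung reads:
    (Low Level OR Fill Valve seal-in) AND High Level -> Fill Valve."""
    return (low_closed or fill_valve) and high_closed

valve = fill_valve_rung(True, True, False)    # tank empty: valve turns on
valve = fill_valve_rung(False, True, valve)   # past low switch: seal-in holds
print(valve)  # True
valve = fill_valve_rung(False, False, valve)  # high switch opens: valve off
print(valve)  # False
```

Feeding the previous output back in as `fill_valve` is what makes the rung a latch: once energized, the output branch keeps the rung true until the high-level contact breaks it.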
A complete program may contain thousands of rungs, evaluated in sequence. Typically the PLC processor will alternately scan all its inputs and update outputs, then evaluate the ladder logic; input changes during a program scan will not be effective until the next I/O update. A complete program scan may take only a few milliseconds, much faster than changes in the controlled process.
Programmable controllers vary in their capabilities for a "rung" of a ladder diagram. Some only allow a single output bit. There are typically limits to the number of series contacts in line, and the number of branches that can be used. Each element of the rung is evaluated sequentially. If elements change their state during evaluation of a rung, hard-to-diagnose faults can be generated, although sometimes (as above) the technique is useful. Some implementations forced evaluation from left-to-right as displayed and did not allow reverse flow of a logic signal (in multi-branched rungs) to affect the output.
The main difference from most other computing devices is that PLCs are intended for, and therefore tolerant of, more severe conditions (such as dust, moisture, heat and cold), while offering extensive input/output (I/O) to connect the PLC to sensors and actuators. PLC input can include simple digital elements such as limit switches, analog variables from process sensors (such as temperature and pressure), and more complex data such as that from positioning or machine vision systems. PLC output can include elements such as indicator lamps, sirens, electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a fieldbus or computer network that plugs into the PLC.
The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking. The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to desktop computers. PLC-like programming combined with remote I/O hardware, allow a general-purpose desktop computer to overlap some PLCs in certain applications. Desktop computer controllers have not been generally accepted in heavy industry because the desktop computers run on less stable operating systems than PLCs, and because the desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the controller may not always respond to changes of input status with the consistency in timing expected from PLCs. Desktop logic applications find use in less critical situations, such as laboratory automation and use in small facilities where the application is less demanding and critical.
The most basic function of a programmable controller is to emulate the functions of electromechanical relays. Discrete inputs are given a unique address, and a PLC instruction can test if the input state is on or off. Just as a series of relay contacts performs a logical AND function, not allowing current to pass unless all the contacts are closed, so a series of "examine if on" instructions will energize its output storage bit if all the input bits are on. Similarly, a parallel set of instructions will perform a logical OR. In an electromechanical relay wiring diagram, a group of contacts controlling one coil is called a "rung" of a "ladder diagram", and this concept is also used to describe PLC logic. Some models of PLC limit the number of series and parallel instructions in one "rung" of logic. The output of each rung sets or clears a storage bit, which may be associated with a physical output address or which may be an "internal coil" with no physical connection. Such internal coils can be used, for example, as a common element in multiple separate rungs. Unlike physical relays, there is usually no limit to the number of times an input, output or internal coil can be referenced in a PLC program.
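The series/parallel correspondence above amounts to AND and OR over contact states. A trivial sketch, with function names chosen here for illustration:

```python
def series_and(*contacts):
    """Contacts in series pass power only if ALL are closed (logical AND)."""
    return all(contacts)

def parallel_or(*contacts):
    """Parallel branches pass power if ANY branch is closed (logical OR)."""
    return any(contacts)

print(series_and(True, True, False))    # False: one open contact breaks the rung
print(parallel_or(False, True, False))  # True: one closed branch completes it
```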
Some PLCs enforce a strict left-to-right, top-to-bottom execution order for evaluating the rung logic. This is different from electro-mechanical relay contacts, which, in a sufficiently complex circuit, may either pass current left-to-right or right-to-left, depending on the configuration of surrounding contacts. The elimination of these "sneak paths" is either a bug or a feature, depending on programming style.
More advanced instructions of the PLC may be implemented as functional blocks, which carry out some operation when enabled by a logical input and which produce outputs to signal, for example, completion or errors, while manipulating variables internally that may not correspond to discrete logic.
PLCs use built-in ports, such as USB, Ethernet, RS-232, RS-485, or RS-422 to communicate with external devices (sensors, actuators) and systems (programming software, SCADA, HMI). Communication is carried over various industrial network protocols, like Modbus, or EtherNet/IP. Many of these protocols are vendor specific.
PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for HMI devices such as keypads or PC-type workstations.
Formerly, some manufacturers offered dedicated communication modules as an add-on function where the processor had no network connection built-in.
PLCs may need to interact with people for the purpose of configuration, alarm reporting, or everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple system may use buttons and lights to interact with the user. Text displays are available as well as graphical touch screens. More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface.
A PLC works in a program scan cycle, where it executes its program repeatedly. The simplest scan cycle consists of three steps: reading the inputs, executing the program, and updating the outputs.
The program follows the sequence of instructions. It typically takes a time span of tens of milliseconds for the processor to evaluate all the instructions and update the status of all outputs. If the system contains remote I/O—for example, an external rack with I/O modules—then that introduces additional uncertainty in the response time of the PLC system.
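The read-execute-write cycle can be sketched as a simple loop. This is an illustrative model, not real PLC firmware; the function names and the one-rung "program" are assumptions made for the example.

```python
def scan_cycle(read_inputs, solve_logic, write_outputs, cycles):
    """Minimal PLC scan loop: snapshot the inputs, solve the logic against
    that snapshot, then write the outputs. An input change during a scan
    only takes effect on the next scan, as described above."""
    for _ in range(cycles):
        image = read_inputs()         # 1. read all inputs into an image table
        outputs = solve_logic(image)  # 2. execute the user program
        write_outputs(outputs)        # 3. update all physical outputs

# Hypothetical one-rung program: the motor output follows the start input.
inputs = {"start": True}
log = []
scan_cycle(lambda: dict(inputs),
           lambda image: {"motor": image["start"]},
           log.append,
           cycles=2)
print(log)  # [{'motor': True}, {'motor': True}]
```

Working from a snapshot ("image table") rather than live inputs is what makes the scan deterministic: every rung in one scan sees the same input state.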
As PLCs became more advanced, methods were developed to change the sequence of ladder execution, and subroutines were implemented. This enhanced programming could be used to save scan time for high-speed processes; for example, parts of the program used only for setting up the machine could be segregated from those parts required to operate at higher speed. Newer PLCs now have the option to run the logic program synchronously with the IO scanning. This means that IO is updated in the background and the logic reads and writes values as required during the logic scanning.
Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow predictable performance. Precision timing modules, or counter modules for use with shaft encoders, are used where the scan time would be too long to reliably count pulses or detect the sense of rotation of an encoder. This allows even a relatively slow PLC to still interpret the counted values to control a machine, as the accumulation of pulses is done by a dedicated module that is unaffected by the speed of program execution.
In his 1998 book, E. A. Parr pointed out that even though most programmable controllers require physical keys and passwords, the lack of strict access control and version control systems, as well as an easy-to-understand programming language, make it likely that unauthorized changes to programs will happen and remain unnoticed.
Prior to the discovery of the Stuxnet computer worm in June 2010, the security of PLCs received little attention. Modern programmable controllers generally contain a real-time operating system, which can be vulnerable to exploits in a similar way as desktop operating systems like Microsoft Windows. PLCs can also be attacked by gaining control of a computer they communicate with. These concerns have grown as networking becomes more commonplace in the PLC environment, connecting previously separate plant-floor networks and office networks.
In recent years "safety" PLCs have started to become popular, either as standalone models or as functionality and safety-rated hardware added to existing controller architectures (Allen-Bradley Guardlogix, Siemens F-series etc.). These differ from conventional PLC types as being suitable for use in safety-critical applications for which PLCs have traditionally been supplemented with hard-wired safety relays. For example, a safety PLC might be used to control access to a robot cell with trapped-key access, or perhaps to manage the shutdown response to an emergency stop on a conveyor production line. Such PLCs typically have a restricted regular instruction set augmented with safety-specific instructions designed to interface with emergency stops, light screens, and so forth. The flexibility that such systems offer has resulted in rapid growth of demand for these controllers.
PLCs are well adapted to a range of automation tasks. These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical. This is due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and where the non-recurring engineering charges are spread over thousands or millions of units.
Programmable controllers are widely used in motion, positioning, or torque control. Some manufacturers produce motion control units to be integrated with a PLC so that G-code (as used on a CNC machine) can be used to instruct machine movements.
For small machines produced in low or medium volume, there are embedded controllers that can execute PLC languages such as Ladder or Flow-Chart/Grafcet. They are similar to traditional PLCs, but their small size allows developers to design them into custom printed circuit boards like a microcontroller, without computer programming knowledge, using a language that is easy to use, modify and maintain. They occupy a middle ground between the classic PLC or micro-PLC and the microcontroller.
For high volume or very simple fixed automation tasks, different techniques are used. For example, a cheap consumer dishwasher would be controlled by an electromechanical cam timer costing only a few dollars in production quantities.
A microcontroller-based design would be appropriate where hundreds or thousands of units will be produced and so the development cost (design of power supplies, input/output hardware, and necessary testing and certification) can be spread over many sales, and where the end-user would not need to alter the control. Automotive applications are an example; millions of units are built each year, and very few end-users alter the programming of these controllers. However, some specialty vehicles such as transit buses economically use PLCs instead of custom-designed controls, because the volumes are low and the development cost would be uneconomical.
Very complex process control, such as used in the chemical industry, may require algorithms and performance beyond the capability of even high-performance PLCs. Very high-speed or precision controls may also require customized solutions; for example, aircraft flight controls. Single-board computers using semi-customized or fully proprietary hardware may be chosen for very demanding control applications where the high development and maintenance cost can be supported. "Soft PLCs" running on desktop-type computers can interface with industrial I/O hardware while executing programs within a version of commercial operating systems adapted for process control needs.
The rising popularity of single board computers has also had an influence on the development of PLCs. Traditional PLCs are generally closed platforms, but some newer PLCs (e.g. ctrlX from Bosch Rexroth, PFC200 from Wago, PLCnext from Phoenix Contact, and Revolution Pi from Kunbus) provide the features of traditional PLCs on an open platform.
PLCs may include logic for a single-variable feedback analog control loop, i.e. a PID controller. A PID loop could be used to control the temperature of a manufacturing process, for example. Historically, PLCs were usually configured with only a few analog control loops; where processes required hundreds or thousands of loops, a distributed control system (DCS) would be used instead. As PLCs have become more powerful, the boundary between DCS and PLC applications has been blurred.
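A discrete PID loop of the kind a PLC might evaluate once per scan can be sketched as below. This is a textbook form for illustration; the gains and sample time are arbitrary, not values from the article.

```python
class PID:
    """Discrete PID loop, evaluated once per scan with sample time dt.

    Gains (kp, ki, kd) and dt are illustrative assumptions."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Pure proportional control: the output is kp times the error.
pid = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
print(pid.update(setpoint=100.0, measurement=90.0))  # 20.0
```

A real PLC PID instruction adds practical refinements on top of this skeleton, such as output clamping and anti-windup on the integral term.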
In more recent years, small products called programmable logic relays (PLRs), or smart relays, have become more common and accepted. These are similar to PLCs and are used in light industry where only a few points of I/O are needed and low cost is desired. These small devices are typically made in a common physical size and shape by several manufacturers and branded by the makers of larger PLCs to fill out their low-end product range. Most of these have 8 to 12 discrete inputs, 4 to 8 discrete outputs, and up to 2 analog inputs. Most such devices include a tiny postage-stamp-sized LCD screen for viewing simplified ladder logic (only a very small portion of the program being visible at a given time) and the status of I/O points, and typically these screens are accompanied by a 4-way rocker push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote control, used to navigate and edit the logic. Most have a small plug for connecting via RS-232 or RS-485 to a personal computer so that programmers can use simple Windows applications for programming instead of being forced to use the tiny LCD and push-button set for this purpose. Unlike regular PLCs, which are usually modular and greatly expandable, the PLRs are usually not modular or expandable, but their price can be two orders of magnitude less than a PLC, and they still offer robust design and deterministic execution of the logic.
A variant of PLCs, used in remote locations, is the remote terminal unit or RTU. An RTU is typically a low-power, ruggedized PLC whose key function is to manage the communications links between the site and the central control system (typically SCADA) or, in some modern systems, "the cloud". Unlike factory automation using high-speed Ethernet, communications links to remote sites are often radio-based and are less reliable. To account for the reduced reliability, an RTU will buffer messages or switch to alternate communications paths. When buffering messages, the RTU will timestamp each message so that a full history of site events can be reconstructed. RTUs, being PLCs, have a wide range of I/O and are fully programmable, typically with languages from the IEC 61131-3 standard that is common to many PLCs, RTUs and DCSs. In remote locations, it is common to use an RTU as a gateway for a PLC, where the PLC performs all site control and the RTU manages communications, time-stamps events and monitors ancillary equipment. On sites with only a handful of I/O, the RTU may also be the site PLC and will perform both communications and control functions. | https://en.wikipedia.org/wiki?curid=24992 |
Peter David
Peter Allen David (born September 23, 1956), often abbreviated PAD, is an American writer of comic books, novels, television, films and video games. His notable comic book work includes an award-winning 12-year run on "The Incredible Hulk", as well as runs on "Aquaman", "Young Justice", "Supergirl", "Fallen Angel", "Spider-Man 2099" and "X-Factor".
His "Star Trek" work includes comic books, novels such as "Imzadi", and co-creation of the "" series. His other novels include film adaptations, media tie-ins, and original works, such as the "Apropos of Nothing" and "Knight Life" series. His television work includes series such as "Babylon 5", "Young Justice", "" and Nickelodeon's "Space Cases", which he co-created with Bill Mumy.
David often jokingly describes his occupation as "Writer of Stuff", and is noted for his prolific writing, characterized by its mingling of real-world issues with humor and references to popular culture, as well as elements of metafiction and self-reference.
David has earned multiple awards for his work, including a 1992 Eisner Award, a 1993 "Wizard" Fan Award, a 1996 Haxtur Award, a 2007 Julie Award and a 2011 GLAAD Media Award.
Peter David's paternal grandparents, Martin and Hela David, and Peter's father, Gunter, came to the United States in the 1930s after the antisemitism in Nazi Germany progressed to the point that Martin's Berlin shoestore became the target of vandalism. David was born September 23, 1956 in Fort Meade, Maryland to Gunter David and Dalia David (née Rojansky), an Israeli-born Jewish mother who had worked with DNA mappers James Watson and Francis Crick, and to whom David credits his sense of humor. He has two siblings, a brother Wally, seven years his junior, who works as an IT Systems Administrator in the financial sector, and a younger sister named Beth.
David first became interested in comics when he was about five years old, reading copies of Harvey Comics' "Casper" and "Wendy" in a barbershop. He became interested in superheroes through the "Adventures of Superman" TV series. Although David's parents approved of his reading Harvey Comics and comics featuring Disney characters, they did not approve of superhero books, especially those published by Marvel Comics, feeling that characters that looked like monsters, such as the Thing or the Hulk, or who wore bug-eyed costumes, like Spider-Man, did not appear heroic. As a result, David read those comics in secret, beginning with his first Marvel book, "Fantastic Four Annual" #3 (November 1965), which saw the wedding of Mister Fantastic and the Invisible Woman. His parents eventually allowed him to start reading superhero titles, his favorite of which was "Superman". He cites John Buscema as his favorite pre-1970s artist. David attended his first comic book convention around the time that Jack Kirby's "New Gods" premiered, after asking his father to take him to one of Phil Seuling's shows in New York, where David obtained Kirby's autograph, his first encounter with a comics professional.
David's earliest interest in writing came through the journalism work of his father, Gunter, who sometimes reviewed movies and took young Peter along (if it was age-appropriate). While Gunter wrote his reviews back at the newspaper's office, David wrote his own, portions of which sometimes found their way into Gunter's published reviews. David began to entertain the notion of becoming a professional writer at age twelve, buying a copy of "The Guide to the Writer's Market" and subscribing to similarly themed magazines, in the hopes of becoming a reporter.
David lived in Bloomfield, New Jersey, in a small house at 11 Albert Terrace, and attended Demarest Elementary School. His family later moved to Verona, New Jersey, where he spent his adolescence. By the time he entered his teens, he had lost interest in comic books, feeling he had outgrown them. David's best friend in junior high and first year in high school, Keith, was gay, and David has described how both of them were targets of ostracism and harassment from homophobes.
Although his family eventually moved to Pennsylvania, his experiences in Verona soured him on that town and shaped his liberal sociopolitical positions regarding LGBT issues. He later made Verona the home location of villain Morgan le Fay in his novel "Knight Life", and has often discussed his progressive views on LGBT issues in his column and on his blog.
David's interest in comics was rekindled when he saw a copy of "Superman vs. Muhammad Ali" (1978) while passing a newsstand, and later, "X-Men" #95 (October 1975), and discovered in that latter book the "All-New, All-Different" team that had first appeared in "Giant-Size X-Men" #1 (May 1975). These two books were the first comics he had purchased in years.
A seminal moment in the course of his aspirations occurred when he met writer Stephen King at a book signing, and told him that he was an aspiring writer. King signed David's copy of "Danse Macabre" with the inscription, "Good luck with your writing career.", an inscription David himself now writes in books presented to him by fans who tell him the same thing. Other authors that David cites as influences include Harlan Ellison, Arthur Conan Doyle, Robert B. Parker, Neil Gaiman, Terry Pratchett, Robert Crais and Edgar Rice Burroughs. Specific books he has mentioned as favorites include "To Kill a Mockingbird", "Tarzan of the Apes", "The Princess Bride", "The Essential Ellison", "A Confederacy of Dunces", "Adams Versus Jefferson", and "Don Quixote". David has singled out Ellison in particular as a writer whom he has tried to emulate.
David attended New York University, where he graduated with a Bachelor of Arts degree in journalism.
David's first professional assignment was covering the World Science Fiction Convention held in Washington in 1974 for the "Philadelphia Bulletin".
David eventually gravitated towards fiction after his attempts at journalism did not meet with success. His first published fiction was in "Asimov's Science Fiction". He sold an op-ed piece to "The New York Times", but overall his rejections far outnumbered his acceptances.
David eventually gave up on a career in writing, and came to work in book publishing. His first publishing job was for the E.P. Dutton imprint Elsevier/Nelson, where he worked mainly as an assistant to the editor-in-chief. He later worked in sales and distribution for Playboy Paperbacks. He subsequently worked for five years in Marvel Comics' Sales Department, first as Assistant Direct Sales Manager under Carol Kalish, who hired him, and then succeeding Kalish as Sales Manager. During this time he made some cursory attempts to sell stories, including submission of some Moon Knight plots to Dennis O'Neil, but his efforts were unfruitful.
Three years into David's tenure as Direct Sales Manager, Jim Owsley became editor of the Spider-Man titles. Although crossing over from sales into editorial was considered a conflict of interest in the Marvel offices, Owsley, whom David describes as a "maverick," was impressed with how David had not previously hesitated to work with him when Owsley was an assistant editor under Larry Hama. When Owsley became an editor, he purchased a Spider-Man story from David, which appeared in "The Spectacular Spider-Man" #103 (June 1985). Owsley subsequently purchased from David "The Death of Jean DeWolff", a violent murder mystery darker in tone than the usually lighter Spider-Man stories that ran in issues #107–110 (October 1985 – January 1986) of that title. Responding to charges of conflict of interest, David made a point of not discussing editorial matters with anyone during his 9-to-5 hours as Direct Sales Manager, and decided not to exploit his position as Sales Manager by promoting the title. Although David attributes the story's poor sales to this decision, he asserts that such crossing over from Sales to Editorial is now common. In the Marvel offices, a rumor circulated that it was actually Owsley who was writing the stories attributed to David. Nonetheless, David says he was fired from "Spectacular Spider-Man" by Owsley due to editorial pressure by Marvel's Editor-in-Chief Jim Shooter, and has commented that the resentment stirred by Owsley's purchase of his stories may have permanently damaged Owsley's career. Months later, Bob Harras offered David "The Incredible Hulk", as it was a struggling title that no one else wanted to write, which gave David free rein to do whatever he wanted with the character.
During his 12-year run on "Hulk", David explored the recurring themes of the Hulk's multiple personality disorder, his periodic changes between the more rageful and less intelligent Green Hulk and the more streetwise, cerebral Gray Hulk, and of being a journeyman hero, which were inspired by "The Incredible Hulk" #312 (October 1985), in which writer Bill Mantlo (and possibly, according to David, Barry Windsor-Smith) had first established that Banner had suffered childhood abuse at the hands of his father. These aspects of the character were later used in the 2003 feature film adaptation by screenwriter Michael France and director Ang Lee. Comic Book Resources credits David with making the formerly poor-selling book "a must-read mega-hit". David collaborated with a number of artists who became fan-favorites on the series, including Todd McFarlane, Dale Keown and Gary Frank. Among the new characters he created during his run on the series were the Riot Squad and the Pantheon. David wrote the first appearance of the Thunderbolts, a team created by Kurt Busiek and Mark Bagley, in "The Incredible Hulk" #449 (January 1997).
It was after he had been freelancing for a year, and into his run on "Hulk", that David felt his writing career was cemented. After putting out feelers at DC Comics, and being offered the job of writing a four-issue miniseries of The Phantom by editor Mike Gold, David quit his sales position to write full-time. David had a brief tenure writing Green Lantern when the character was exclusive to the short-lived anthology series "Action Comics Weekly" from issues #608–620 in 1988.
David took over "Dreadstar" during its First Comics run, with issue #41 (March 1989) after Jim Starlin left the title, and remained on it until issue #64 (March 1991), the final issue of that run. David's other Marvel Comics work in the late 1980s and 1990s includes runs on "Wolverine", the New Universe series "" and "Justice", a run on the original "X-Factor", and the futuristic series "Spider-Man 2099", about a man in the year 2099 who takes up the mantle of Spider-Man, the title character of which David co-created. David left "X-Factor" after 19 issues, and he wrote the first 44 issues of "Spider-Man 2099" before quitting that book to protest the firing of editor Joey Cavalieri. The book was cancelled two issues later, along with the entire 2099 line.
In 1990, David wrote a seven-issue "Aquaman" miniseries, "The Atlantis Chronicles", for DC Comics, about the history of Aquaman's home of Atlantis, which David has referred to as among the written works of which he is most proud, and his first time writing in the full script format. He later wrote a 1994 "Aquaman" miniseries, "Aquaman: Time and Tide", which led to a relaunched monthly "Aquaman" series, the first 46 issues of which he wrote from 1994–1998. His run on "Aquaman" gained notoriety, for in the book's second issue, Aquaman lost a hand, which was then replaced with a harpoon, a feature of the character that endured for the duration of David's run on the book. More broadly, his run recast the character as an aggressive man of action, one deserving of greater respect, in contrast to the "fish-talking punch line" into which the TV series "Super Friends" had rendered him. David quit that book over creative differences.
David wrote the "Star Trek" comic book for DC from 1988–1991, when that company held the licensing rights to the property, though he has opined that novels are better suited to "Star Trek", whose stories are not highly visual. He and Ron Marz cowrote the "DC vs. Marvel" intercompany crossover in 1996. David enjoyed considerable runs on "Supergirl" and "Young Justice", the latter eventually being canceled so that DC could use that book's characters in a relaunched "Teen Titans" monthly.
David's work for Dark Horse Comics has included the teen spy adventure "SpyBoy", which appeared in a series and a number of miniseries between 1999 and 2004, and the 2007 miniseries "The Scream".
Other 1990s work includes the 1997 miniseries "Heroes Reborn: The Return", for Marvel, and two creator-owned properties: "Soulsearchers and Company", published by Claypool Comics, and the Epic Comics title "Sachs and Violens", which he produced with co-creator/artist George Pérez.
David's early 2000s work includes runs on two volumes of "Captain Marvel" as well as the "Before the Fantastic Four: Reed Richards" limited series.
David and his second wife, Kathleen, wrote the final English-language text for the first four volumes of the manga series "Negima" for Del Rey Manga.
In 2003, David began writing another creator-owned comic, "Fallen Angel", for DC Comics, which he created in order to make use of plans he had devised for Supergirl after the "Many Happy Returns" storyline, but which were derailed by that series' cancellation. That same year, he wrote a "Teenage Mutant Ninja Turtles" series for Dreamwave that tied into the animated television series broadcast that year.
DC canceled "Fallen Angel" after 20 issues, but David restarted the title at IDW Publishing at the end of 2005. Other IDW work included a "" one-shot and the "Spike vs. Dracula" mini-series, both based on the character from the "Buffy the Vampire Slayer" and "Angel" television series.
In 2005, David briefly returned to "The Incredible Hulk", though he left after only 11 issues because of his workload. He started a new series, "Friendly Neighborhood Spider-Man", beginning with a twelve-part crossover storyline called "", which, along with J. Michael Straczynski's run on "The Amazing Spider-Man", and Reginald Hudlin's run on "Marvel Knights Spider-Man," depicted the webslinger as he discovered he was dying, lost an eye during a traumatic fight with Morlun, underwent a metamorphosis and emerged with new abilities and insights into his powers. As tends to be the case when fundamental changes are introduced to long-standing classic comics characters, the storyline caused some controversy among readers for its introduction of retractable stingers in Spider-Man's arms, and the establishment of a "totem" from which his powers are derived. David's final issue of that title was #23.
David wrote a "MadroX" miniseries that year, whose success led to a relaunch of a monthly "X-Factor" volume 3 written by him. This was a revamped version of the title starring Madrox and other members of the former "X-Factor" title that David had written in the early 1990s, now working as investigators in a detective agency of that name. David's work on the title garnered praise from Ain't it Cool News, and David has stated that the opt in/opt out policy and greater planning with which Marvel now executes crossover storylines has made his second stint on the title far easier. His decision to explicitly establish male characters Shatterstar and Rictor as sharing a sexual attraction to one another (a confirmation of clues that had been established in "X-Force" years earlier in issues such as "X-Force" #25, 34, 43, 49, 56 and "X-Force '99 Annual") drew criticism from Shatterstar's co-creator, Rob Liefeld, though Editor-in-Chief Joe Quesada supported David's story. David eventually won a 2011 GLAAD Media Award for Outstanding Comic Book for his work on the title.
On February 11, 2006, David announced at the WonderCon convention in California that he had signed an exclusive contract with Marvel Comics. "Fallen Angel", "Soulsearchers and Company" and David's "Spike" miniseries were "grandfathered" into the contract, so as to not be affected by it. The first new project undertaken by David after entering into the contract, which he announced on April 5, 2006, was writing the dialogue for "", the comic book spin-off of Stephen King's "The Dark Tower" novels, which was to be illustrated by Jae Lee, as well as scripting the subsequent "Dark Tower" comics.
David took over Marvel's "She-Hulk" after writer Dan Slott's departure, beginning with issue #22. His run, which won praise, ended with issue #38, when the series was canceled. He wrote a 2008–09 "Sir Apropos of Nothing" miniseries, based on the character from his novels, which was published by IDW Publishing.
David's other 2000s comics based on licensed or adapted properties include "Halo: Helljumper", a 2009 miniseries based on the "Halo" video game, a 2009 "" manga book published by Del Rey, "Ben Folds Four", a "Little Mermaid" story in Jim Valentino's "Fractured Fables" anthology that was praised by Ain't It Cool News, an adaptation of the 1982 film "Tron" that was released to tie in with that film's , and a "John Carter of Mars" prequel to the 2012 feature film. In 2010, he co-wrote "The Spider-Man Vault: A Museum-in-a-Book with Rare Collectibles Spun from Marvel's Web" with Robert Greenberger. David wrote the script for "Avengers: Season One", an original graphic novel published to promote the DVD release of "The Avengers".
On November 24, 2011, David was one of the balloon handlers who pulled the Spider-Man balloon during the Macy's Thanksgiving Day Parade.
In October 2013, "X-Factor" ended its run with issue #262, concluding the X-Factor Investigations incarnation of the series. The book was then relaunched as "All-New X-Factor", a new series with artist Carmine Di Giandomenico, as a part of the All-New Marvel NOW! initiative announced at the 2013 New York Comic Con. The opening storyline, which continues events from issue #260 of the previous series, establishes the new corporate-sponsored version of the team, and includes Polaris, Quicksilver, and Gambit.
In July 2014, David returned to Spider-Man 2099, writing the second volume of "Spider-Man 2099" with artist Will Sliney. With this series, David was again writing two series, "X-Factor" and "Spider-Man 2099", after having previously done so decades prior, a coincidence that prompted him to joke at the June 2014 Special Edition NYC convention, "I don't know whether to be proud of that or if I'm in a rut!"
In 2014 David wrote a six-part story-arc for "The Phantom" for publishing company Hermes Press, a story that David reportedly had wanted to write for many years.
In 2015, Simon and Schuster published Stan Lee's autobiographical graphic novel, "Amazing Fantastic Incredible", which David co-wrote, and which became a "New York Times" bestseller in its first week of release.
In April 2017, following the conclusion of the Spider-Man storyline "", which saw the return of Ben Reilly, Marvel premiered the monthly series "", with David as writer. David explained to Syfy Wire that when Marvel offered him the job, he was initially ambivalent, as Ben Reilly had never been his favorite incarnation of Spider-Man, and given Reilly's recent emergence as the villainous Jackal. However, David gave further consideration to the fact that a book whose main character had a skewed, villainous worldview was not something Marvel had historically done much of, and decided that the premise presented itself with opportunities that intrigued him enough to accept the job.
David's career as a novelist developed concurrently with his comic-book writing career. David had been working at a publisher that went out of business, and a former coworker from that publisher became his agent, through whom he sold his first novel, "Knight Life", to Ace Books. Although the sale was made before he wrote any comic books, the novel was not published until eighteen months later, in 1987. The novel depicts the reappearance of King Arthur in modern-day New York City. Another early novel of his, "Howling Mad", is about a wolf that turns into a human being after being bitten by a werewolf. Ace Books hired David to write the "Photon" and "Psi-Man" novels, though they published them under the "house name" David Peters, over David's objections. David updated "Knight Life" years later when Penguin Putnam brought it back into print in 2003, and made it a trilogy with the sequels "One Knight Only" and "Fall of Knight", which were published in 2004 and 2007, respectively. Penguin rereleased "Howling Mad" and the "Psi-Man books" under David's actual name.
David first began writing "Star Trek" novels at the request of Pocket Books editor Dave Stern, who was a fan of David's "Star Trek" comic book work. His "Star Trek" novels are among those for which he is best known, including "Q-in-Law"; "I, Q"; "Vendetta"; "Q-Squared"; and "Imzadi", one of the best-selling Star Trek novels of all time. He created the ongoing novel series, "," a spin-off from "," with John J. Ordover in 1997. "New Frontier" continued until April 2011, with the publication of "Blind Man's Bluff", the final "New Frontier" novel on David's contract at the time, after which the series' future was unclear to David. Among David's other science fiction tie-in novels are five "Babylon 5" novels, three of which were originals, and two of which were adaptations of the TV movies "" and "".
His other novel adaptations include those of the movies "The Return of Swamp Thing", "The Rocketeer", "Batman Forever", "Spider-Man", "Spider-Man 2", "Spider-Man 3", "Hulk", "The Incredible Hulk", "Fantastic Four", and "Iron Man". He wrote an original Hulk novel, "The Incredible Hulk: What Savage Beast", and an adaptation of an unused "Alien Nation" television script, "Body and Soul".
David's novel "Tigerheart" is a re-imagining of Peter Pan with a mix of new and old characters, told as a Victorian bedtime story, much like the classic tale. It was praised by Ain't It Cool News, and honored by the "School Library Journal" as one of 2008's Best Adult Books for High School Students. His "Sir Apropos of Nothing" fantasy trilogy, "Sir Apropos of Nothing", "The Woad to Wuin" and "Tong Lashing", features characters and settings completely of David's own creation, as does his 2007 fantasy novel, "Darkness of the Light", which is the first in a new trilogy of novels titled "The Hidden Earth". The second installment, "The Highness of the Low", was scheduled to be published in September 2009, but David has related on his blog that it has been delayed until the winter of 2012.
David's 2010 novel work includes "Year of the Black Rainbow", a novel cowritten with musician Claudio Sanchez of the band Coheed and Cambria, which was released with the band's album of the same name, and an original "Fable" novel, "The Balverine Order", set between the events of "Fable II" and "Fable III". In April 2011, David announced that, in addition to another "Fable" novel, he and a number of other writers, including Glenn Hauman, Mike Friedman and Bob Greenberger, were assembling an electronic publishing endeavor called Crazy Eight Press to publish e-books directly to fans, the first of which would be David's Arthurian story, "The Camelot Papers". David explained that the second book in his "Hidden Earth" trilogy would be published through Crazy Eight. In September 2013, David acknowledged that books published through Crazy Eight are not as lucrative for him as those for publishers that pay him advances, and announced that his then-impending novel, "ARTFUL: Being the Heretofore Secret History of that Unique Individual, The Artful Dodger, Hunter of Vampyres (Amongst Other Things.)", would be published by Amazon.com.
David has stated that he tries to block out different days and different times to work on different projects. He usually works in the morning, for example, on novels, and does comics-related work in the afternoon. Having previously used Smith Corona typewriters, he writes on a Sony Vaio desktop computer, using Microsoft Word for his comics and novel work, and Final Draft for his screenplays. When writing novels, he sometimes outlines the story, and sometimes improvises it as he is writing it. Following his stroke in December 2012, David began using DragonDictate to write. Todd McFarlane's original art for the cover of "The Incredible Hulk" #340, featuring Wolverine, which McFarlane gave to David as a gift, hangs in David's office.
David previously wrote his comic book scripts using the Marvel Method, but due to his tendency to overplot, as during his collaboration with McFarlane on "The Incredible Hulk", he switched to the full script method, which he continues to use. He has stated that he prefers to plot his comics stories in six-month arcs. He has stated that when he works on a particular title, he always does so with a particular person or group of people in mind to whom he dedicates it, explaining that he wrote "Supergirl" for his daughters, "Young Justice" for a son he might one day have and "The Incredible Hulk" for his first wife, Myra, who first urged him to accept the job of writing that book. David has further explained that the events of his own life are sometimes reflected in his work, as when, for example, following the breakup of his first marriage, the direction of "The Incredible Hulk" faltered, with the Hulk wandering the world aimlessly, hopelessly looking to be loved.
David has stated that his favorite female character of his own creation is Lee, the protagonist of "Fallen Angel", which he says is derived from the positive female fan reaction to that character. Characters that David has not written but which he has expressed an interest in writing for the comics medium include Batman, Tarzan, Doc Savage, the Dragonriders of Pern, the Steed/Peel Avengers, and Dracula. He has specifically mentioned interest in writing a "Tarzan vs. the Phantom" story.
David has written for several television series and video games. He wrote two scripts for "Babylon 5" (the second-season episodes "Soul Mates" and "There All the Honor Lies"), and the episode "Ruling from the Tomb" for its sequel series, "Crusade". With actor/writer Bill Mumy, he is co-creator of the television series "Space Cases", which ran for two seasons on Nickelodeon, and which proved to be his most lucrative work. David himself appeared as Ben, the father of series regular Bova, in the second-season episode "Long Distance Calls". David's oldest daughter, Shana, later appeared as Pezu, the emotionally disturbed sentient computer in the series finale "A Friend in Need". David has written and co-produced several films for Full Moon Entertainment and has made cameo appearances in some of the films as well.
David wrote an unproduced script for the fifth season of "Babylon 5" called "Gut Reactions", which he wrote with Bill Mumy.
David wrote "In Charm's Way", an episode of "". The script was recorded in early 2009, and the episode premiered November 13, 2009. He later wrote three episodes of the spinoff "", the first of which, "Reflected Glory", premiered October 15, 2010.
David wrote the script for the Xbox 360 video game "Shadow Complex", which debuted in August 2009.
David wrote several episodes of the "Young Justice" animated TV series, which premiered in 2010, and is based on the comic book series he wrote from 1998 to 2003. The first episode he penned is episode #18. The same year, he wrote a graphic novel adaptation of the video game "Epic Mickey", and a prequel digicomic, "Disney's Epic Mickey: Tales of Wasteland".
In 2011 David wrote the video game "".
At the 2012 San Diego Comic-Con International, Stan Lee announced his new YouTube channel, "Stan Lee's World of Heroes", which airs several programs created by Lee and other creators. One of them, "Head Cases", is a superhero sitcom created by David and his wife Kathleen and produced by David M. Uslan. The series centers on Thunderhead, a would-be hero whose inability to utilize his ability to produce loud thunderblasts without injury to himself leads him to become a source of comedic derision in the superhero community. The series, which explores events that occur in between the battles typically seen in comic books, was based on a concept originated by Uslan, and partly inspired by "It's Always Sunny in Philadelphia". David describes "Head Cases" as a 75-minute movie divided into 5-minute webisodes. The series will feature guest appearances by other industry personalities, including Stan Lee, who appears as himself, functioning in a similar manner to Norm Peterson from "Cheers".
On more than one occasion, editorial problems or corporate pressure to modify or re-script his plotlines have prompted David to leave books, particularly his decision to terminate his first run on Marvel's "X-Factor", due to constantly having to constrain his plots to accommodate crossover events with other books. He resigned from "Spider-Man 2099" to protest the firing of editor Joey Cavalieri, and from "Aquaman" over other creative differences. When David abruptly left his first stint on "The Incredible Hulk" due to editorial pressures, some of the plot points of the character that David established were retconned by later creative teams.
In his "But I Digress" column, which began appearing in the "Comics Buyer's Guide" on July 27, 1990, and in his blog, in operation since April 2002, David has been outspoken in many of his views pertaining to the comic book industry and numerous other subjects. He has criticized the low regard in which writers are held, and has spoken out against copyright infringement, particularly that which is committed through peer-to-peer file sharing and posting literary works in their entirety on the Internet without the permission of the copyright holder.
On many occasions, he has offered criticisms of specific publishers, as when he criticized "Wizard" magazine for ageism. He has criticized companies for not sufficiently compensating the creators of their long-standing and lucrative characters, such as Marvel Comics for its treatment of Blade creator Marv Wolfman and Archie Comics for its treatment of "Josie and the Pussycats" creator Dan DeCarlo. He has criticized publishers for various other business practices, including Marvel | https://en.wikipedia.org/wiki?curid=24994 |
Pretoria
Pretoria (; ; ), also known as Tshwane, is one of South Africa’s three capital cities, serving as the seat of the administrative branch of government, and as the host to all foreign embassies to South Africa. (Cape Town is the legislative capital and Bloemfontein the judicial capital.)
Pretoria straddles the Apies River and extends eastward into the foothills of the Magaliesberg mountains. It has a reputation as an academic city and center of research, being home to the Tshwane University of Technology (TUT), the University of Pretoria (UP), Sefako Makgatho Health Science University (SMU), the University of South Africa (UNISA), the Council for Scientific and Industrial Research (CSIR), and the Human Sciences Research Council. It also hosts the National Research Foundation and the South African Bureau of Standards. Pretoria was one of the host cities of the 2010 FIFA World Cup.
Pretoria is the central part of the Tshwane Metropolitan Municipality which was formed by the amalgamation of several former local authorities, including Centurion and Soshanguve. Some have proposed changing the official name from Pretoria to Tshwane, which has caused some public controversy.
Pretoria is named after the Voortrekker leader Andries Pretorius, and South Africans sometimes call it the "Jacaranda City," because of the thousands of jacaranda trees planted along its streets and in its parks and gardens.
Pretoria was founded in 1855 by Marthinus Pretorius, a leader of the Voortrekkers, who named it after his father Andries Pretorius and chose a spot on the banks of the "Apies rivier" (Afrikaans for "Monkeys river") to be the new capital of the South African Republic (; ZAR). The elder Pretorius had become a national hero of the Voortrekkers after his victory over Dingane and the Zulus in the Battle of Blood River in 1838. The elder Pretorius also negotiated the Sand River Convention (1852), in which the United Kingdom acknowledged the independence of the Transvaal. It became the capital of the South African Republic on 1 May 1860.
The founding of Pretoria as the capital of the South African Republic can be seen as marking the end of the Boers' settlement movements of the Great Trek.
During the First Boer War, the city was besieged by Republican forces in December 1880 and March 1881. The peace treaty which ended the war was signed in Pretoria on 3 August 1881 at the Pretoria Convention.
The Second Boer War resulted in the end of the Transvaal Republic and start of British hegemony in South Africa. The city surrendered to British forces under Frederick Roberts on 5 June 1900 and the conflict was ended in Pretoria with the signing of the Peace of Vereeniging on 31 May 1902 at Melrose House.
The Pretoria Forts were built for the defence of the city just prior to the Second Boer War. Though some of these forts are today in ruins, a number of them have been preserved as national monuments.
The Boer Republics of the ZAR and the Orange River Colony were united with the Cape Colony and Natal Colony in 1910 to become the Union of South Africa. Pretoria then became the administrative capital of the whole of South Africa, with Cape Town as the legislative capital and Bloemfontein as the judicial capital. Between 1910 and 1994, the city was also the capital of the province of Transvaal. (As the capital of the ZAR, Pretoria had superseded Potchefstroom in that role.)
On 14 October 1931, Pretoria achieved official city status. When South Africa became a republic in 1961, Pretoria remained its administrative capital.
Pretoria is situated approximately north-northeast of Johannesburg in the northeast of South Africa, in a transitional belt between the plateau of the Highveld to the south and the lower-lying Bushveld to the north. It lies at an altitude of about above sea level, in a warm, sheltered, fertile valley, surrounded by the hills of the Magaliesberg range.
Pretoria has a humid subtropical climate (Köppen: Cwa) with long hot rainy summers, and short, mild winters. The city experiences the typical winters of South Africa with cold, clear nights and mild to moderately warm days. Although the average lows during winter are mild, it can get cold due to the clear skies, with nighttime low temperatures in recent years in the range of .
The average annual temperature is . This is rather high, considering the city's relatively high altitude of about , and is due mainly to its sheltered valley position, which acts as a heat trap and cuts it off from cool southerly and south-easterly air masses for much of the year.
Rain is chiefly concentrated in the summer months, with drought conditions prevailing over the winter months, when frosts may be sharp. Snowfall is an extremely rare event; snowflakes were spotted in the city in 1959, 1968 and 2012, but it has never experienced an accumulation of snow in its history.
During a nationwide heatwave in November 2011, Pretoria experienced temperatures that reached , unusual for that time of the year. Similar record-breaking extreme heat events also occurred in January 2013, when Pretoria experienced temperatures exceeding on several days. The year 2014 was one of the wettest on record for the city. A total of fell up to the end of December, with recorded in this month alone. In 2015, Pretoria saw its worst drought since 1982; the month of November 2015 saw new records broken for high temperatures, with recorded on 11 November after three weeks of temperatures between and . Pretoria reached a new record high of on 7 January 2016.
Depending on the extent of the area understood to constitute "Pretoria", the population ranges from 700,000 to 2.95 million. The main languages spoken in Pretoria are Sepedi, Sesotho, Setswana, Xitsonga, Afrikaans and English. The city of Pretoria has the largest white population in Sub-Saharan Africa. Since its founding, it has been a major Afrikaner population centre, and currently there are roughly 1 million Afrikaners living in or around the city.
Even after the end of Apartheid, Pretoria itself has had a white majority, albeit with an ever-increasing black middle class. In the townships of Soshanguve and Atteridgeville, however, black people make up nearly the entire population. The largest white ethnic group is the Afrikaners, and the largest black ethnic group is the Northern Sotho.
The lower estimate for the population of Pretoria includes largely former white-designated areas, and there is therefore a white majority. However, including the geographically separate townships increases Pretoria's population beyond a million and makes whites a minority.
Pretoria's Indians were ordered to move from Pretoria to Laudium on 6 June 1958.
Pretoria is known as the "Jacaranda City" due to the approximately 50,000 Jacarandas that line its streets. Purple is a colour often associated with the city and is often included on local council logos and services such as the A Re Yeng rapid bus system and the logo of the local Jacaranda FM radio station.
Over the years Pretoria has had very diverse cultural influences, and this is reflected in the architectural styles found in the city. These range from 19th-century Dutch, German and British colonial architecture to modern, postmodern, neomodern and art deco styles, alongside a good mix of uniquely South African design.
Some of the notable structures in Pretoria include the late 19th century Palace of Justice, the early 20th century Union Buildings, the post-war Voortrekker Monument, the diverse buildings dotting the main campuses of both the University of Pretoria and the University of South Africa, traditional Cape Dutch style Mahlamba Ndlopfu (the President's House), the more modern Reserve Bank of South Africa (office skyscraper) and the Telkom Lukasrand Tower. Other well-known structures and buildings include the Loftus Versfeld Stadium, The South African State Theatre and the Oliver Tambo building which is the Headquarters of the Department of International Relations and Cooperation.
Despite the many corporate offices, small businesses, shops, and government departments situated in Pretoria's sprawling suburbs, its Central Business District retains its status as the traditional centre of government and commerce. Many banks, businesses, large corporations, shops and shopping centres are situated in the city centre, which is dominated by several large skyscrapers, the tallest of which are the Poyntons Building ( tall), the ABSA Building ( tall) and the Reserve Bank of South Africa building ( tall).
The area contains a large number of historical buildings, monuments and museums, including the Pretoria City Hall, Pretorius Square, Church Square (along with its many historical buildings and statues) and the Ou Raadsaal. There is also the Transvaal Museum (the country's leading natural history museum, which, although it has changed venues a number of times, has been around since 1892), the National Zoological Gardens of South Africa (more colloquially known as the Pretoria Zoo), the Melrose House Museum in Jacob Maré Street, the Pretoria Art Museum and the African Window Cultural History Museum.
Several national departments also have head offices in the Central Business District, including the Departments of Health, Basic Education, Transport, Higher Education and Training, Sport and Recreation, Justice and Constitutional Development, Public Service and Administration, and Water and Environmental Affairs, as well as the National Treasury. The district also has a high number of residential buildings housing people who primarily work in the district.
Pretoria is home to the National Zoological Gardens of South Africa, as well as the Pretoria National Botanical Garden. There are also a number of smaller parks and gardens located throughout the city, including the Austin Roberts Bird Sanctuary, Pretorius Square gardens, the Pretoria Rosarium, Church Square, Pretoria Showgrounds, Springbok Park, Freedom Park, Jan Cilliers Park and Burgers Park, the oldest park in the city and now a national monument. In the suburbs there are also several parks that are notable: Rietondale Park, "Die Proefplaas" in the Queenswood suburb, Magnolia Dell Park, Nelson Mandela Park and Mandela Park Peace Garden and Belgrave Square Park.
Pretoria's nickname "the Jacaranda City" comes from the roughly 70,000 jacaranda trees that grow in Pretoria and decorate the city each October with their purple blossoms. The first two trees were planted in 1888 in the garden of a local gardener, J.D. Cilliers, at Myrtle Lodge on Celliers Street in Sunnyside. He obtained the seedlings from a Cape Town nurseryman who had harvested them in Rio de Janeiro, Brazil. The two trees still stand on the grounds of the Sunnyside Primary School.
The jacaranda comes from tropical South America and belongs to the family Bignoniaceae. There are around fifty species of jacaranda, but the one found most often in the warmer areas of Southern Africa is Jacaranda mimosifolia.
At the end of the 19th century, the flower and tree grower James Clark imported jacaranda seedlings from Australia and began growing them on a large scale. In November 1906, he donated two hundred small saplings to the Pretoria City Council, which planted them along Koch Street (today Bosman Street). The city engineer Walton Jameson, soon known as "Jacaranda Jim," launched a programme to plant jacaranda trees throughout Pretoria, and by 1971 there were already 55,000 of them in the city.
Most jacarandas in Pretoria are lilac in colour, but there are also white ones planted on Herbert Baker Street in Groenkloof.
The Jacaranda Carnival is an old tradition that was held from 1939 to 1964. After a hiatus of over twenty years, it resumed in 1985. Festivities include a colourful march and the crowning of the Jacaranda Queen.
Commuter rail services around Pretoria are operated by Metrorail. The routes, originating from the city centre, extend south to Germiston and Johannesburg, west to Atteridgeville, northwest to Ga-Rankuwa, north to Soshanguve and east to Mamelodi. Via the Pretoria–Maputo railway it is possible to access the port of Maputo, in the east.
The Gautrain high-speed railway line runs from the eastern suburb of Hatfield to Pretoria Station and then southwards to Centurion, Midrand, Marlboro, Sandton, OR Tambo International Airport, Rosebank and Johannesburg.
Pretoria Station is a departure point for the Blue Train luxury train. Rovos Rail, a luxury mainline train safari service operates from the colonial-style railway station at Capital Park. The South African Friends of the Rail have recently moved their vintage train trip operations from the Capital Park station to the Hercules station.
Various bus companies exist in Pretoria, of which PUTCO is one of the oldest and most recognised. Tshwane municipality provides the remainder of the bus services.
The N1 is the major freeway that runs through Pretoria. It enters the city from the south as the Ben Schoeman Highway and, at the Brakfontein Interchange with the N14, continues as the N1 Eastern Bypass, which bisects the large expanse of the eastern suburbs, routing traffic from Johannesburg to Polokwane and the north of the country. The R101 is the original N1 and served the same function before the construction of the highway; it runs through the centre of town rather than the eastern suburbs.
The N4 enters the town as a highway from Witbank in the east, merging with the N1 at the Proefplaas Interchange. It begins again north of the city, branching west from the N1 as the Platinum Highway, forming the Northern Bypass, and heading to Rustenburg. The N4 runs east–west through South Africa, connecting Maputo to Gaborone. Before the Platinum Highway was built, the N4 continued past the Proefplaas Interchange to the city centre, where it became a regular road, before again becoming a highway west of the city. These roads are now designated the M2 and M4. There is a third, original east–west road: the R104, previously named Church Street. Church Street has been renamed in sections: Helen Joseph from Nelson Mandela Drive to Church Square, WF Nkomo from Church Square to the R511, Stanza Bopape east of Nelson Mandela Drive and Elias Motswaledi west of the R511.
The N14 starts in the centre of town from the M4 (former N4). It is a normal road heading south through the centre before becoming the Ben Schoeman highway. At the Brakfontein interchange, the Ben Schoeman highway becomes the N1, but the N14 continues as the intersecting west-south-western highway towards Krugersdorp. The R114 parallels the N14 in its westward journey running just to the north of the highway.
The R21 provides a second north–south highway, further east. It starts from the Fountains Interchange south of the city centre but remains an ordinary road until Monument Park, where it becomes a true highway. It crosses the N1 east of the Brakfontein Interchange at the Flying Saucer Interchange and runs north–south towards Ekurhuleni (specifically Kempton Park and Boksburg). Importantly, it links Pretoria with OR Tambo International Airport in Kempton Park.
A proposed third north–south highway, in the west of the city, the R80 is partially built. At present the highway begins in Soshanguve. It terminates just north of the city centre at an intersection with the M1. Plans have been in place for some time to extend this all the way past the M4 and N14 highways to the N1 in Randburg.
Pretoria is also served by many regional roads. The R55 starts at an interchange with the R80 and runs north–south, west of the city, to Sandton. The R50 starts from the N1 just after the Flying Saucer Interchange in the south-east of the city and continues south-east towards Delmas. The R511 runs north–south from Randburg towards Brits, barely bypassing Pretoria to the west. The R514 starts from the M1, north of the city centre, and terminates at the R511. The R513 crosses Pretoria's northern suburbs from east to west, linking Pretoria to Cullinan and Bronkhorstspruit in the east and Hartbeespoort in the west. The R566 originates in Pretoria's northern suburbs and exits the town to the west just north of the R513, connecting Pretoria to Brits. Finally, the R573 starts from the R513 just east of the town and heads north-east to Siyabuswa.
Pretoria is also served internally by metropolitan routes.
For scheduled air services, Pretoria is served by Johannesburg's airports: OR Tambo International, south of central Pretoria; and Lanseria, south-west of the city. Wonderboom Airport, in the suburb of Wonderboom in the north of Pretoria, primarily services light commercial and private aircraft. However, as of August 2015, scheduled flights from Wonderboom Airport to Cape Town International Airport have been offered by SA Airlink. There are two military air bases to the south of the city, Swartkop and Waterkloof.
Since Pretoria forms part of the Tshwane Metropolitan Municipality, most radio, television and print media are the same as in the rest of the metro area.
There are many radio stations in the greater Pretoria region; some of note are:
Impact Radio is a Christian community radio station based in Pretoria, broadcasting on 103 FM in the Greater Tshwane Area.
Jacaranda FM, previously known as Jacaranda 94.2, is a commercial South African radio station broadcasting in English and Afrikaans, with a footprint that covers Gauteng, Limpopo, Mpumalanga and the North West Province. It boasts a listening audience of 2 million people a week and a digital community of more than 1.1 million people a month. The station's format is mainstream adult contemporary, with programming built around a playlist of hit music from the 1980s, 1990s and today.
Tuks FM is the radio station of the University of Pretoria and one of South Africa's community broadcasters. It was one of the first community broadcasters in South Africa to be given an FM licence. It is known for contemporary music and is operated by UP's student base.
Radio Pretoria is a community-based radio station in Pretoria, South Africa, whose programmes are aimed at Afrikaners. It broadcasts 24 hours a day in stereo on 104.2 FM in the greater Pretoria area. Various other transmitters (with their own frequencies) in South Africa broadcast the station's content further afield, while the station is also available on Sentech's digital satellite platform.
Radio Kuber Kontrei is a community-based Internet (streaming) radio station in Pretoria, South Africa, whose programmes are aimed at Afrikaans-speaking Christians worldwide.
Pretoria is serviced by eTV, SABC, MNET and SuperSport.
The city is serviced by a variety of printed publications, namely:
Pretoria News is a daily newspaper established in Pretoria in 1898. It publishes a daily edition from Monday to Friday and a Weekend edition on Saturday and Sunday. It is an independent newspaper in the English language that serves the city and its direct environs. It is available online via the Independent online website.
Beeld is an Afrikaans-language daily newspaper that was launched on 16 September 1974. Beeld is distributed in four provinces of South Africa: Gauteng, Mpumalanga, Limpopo, North West. Die Beeld (English: The Image) was an Afrikaans-language Sunday newspaper in the late 1960s.
Wrapped is an alternative lifestyle magazine from Africa that caters for the entire LGBT community and is not gender-dominated.
Pretoria Sotho (called Sepitori by its speakers) is the urban lingua franca of Pretoria and the Tshwane metropolitan area in South Africa. It is a combination of Tswana and Northern Sotho (Pedi), with influences from Tsotsitaal and other black South African languages. It is a creole language that developed in the city during the years of Apartheid.
A number of popular South African bands and musicians are originally from Pretoria. These include Desmond and the Tutus, Bittereinder, The Black Cat Bones, Seether, the popular motswako rapper JR, Joshua na die Reën and DJ Mujava, who was raised in the township of Atteridgeville.
The song "Marching to Pretoria" refers to this city. Pretoria was the capital of the South African Republic (also known as the Republic of the Transvaal; 1852–1881 and 1884–1902) and the principal battleground of the First and Second Boer Wars, the latter of which brought both the Transvaal and the Orange Free State republic under British rule. "Marching to Pretoria" was one of the songs that British soldiers sang as they marched from the Cape Colony, under British rule since 1814, to the capital of the South African Republic (or in Dutch, "Zuid-Afrikaansche Republiek"). As the song's refrain puts it: "We are marching to Pretoria, Pretoria, Pretoria/We are marching to Pretoria, Pretoria, Hurrah."
The opening line of the Beatles' song "I Am the Walrus", written by John Lennon ("I am he as you are he as you are me and we are all together"), is often believed to be based on the lyric "I'm with you and you're with me and so we are all together" in "Marching to Pretoria". Lennon denied this, insisting his lyrics came from "nothing".
Pretoria is home to an extensive portfolio of public art. A diverse and evolving city, Pretoria boasts a vibrant art scene and a variety of works that range from sculptures to murals, including pieces by internationally and locally renowned artists. The Pretoria Art Museum houses a vast collection of local artworks. After a bequest of 17th-century Dutch artworks by Lady Michaelis in 1932, the art collection of the Pretoria City Council expanded quickly to include South African works by Henk Pierneef, Pieter Wenning, Frans Oerder, Anton van Wouw and Irma Stern. According to the museum, "As South African museums in Cape Town and Johannesburg already had good collections of 17th, 18th and 19th century European art, it was decided to focus on compiling a representative collection of South African art", making it somewhat unusual compared to its contemporaries.
Pretoria houses several performing arts venues including:
the South African State Theatre, which hosts opera, musicals, plays and comedic performances.
A 9-metre-tall statue of former president Nelson Mandela was unveiled in front of the Union Buildings on 16 December 2013. Since Nelson Mandela's inauguration as South Africa's first democratically elected president, the Union Buildings have come to represent the new 'Rainbow Nation'. Public art in Pretoria has flourished since the 2010 FIFA World Cup, with many areas receiving new public artworks.
One of the most popular sports in Pretoria is rugby union. Loftus Versfeld is home to the Blue Bulls, who compete in the domestic Currie Cup, and also to the Bulls in the international Super Rugby competition. The Bulls Super Rugby team, which is operated by the Blue Bulls, won the competition in 2007, 2009 and 2010. Loftus Versfeld also hosts the football side Mamelodi Sundowns.
Pretoria also hosted matches during the 1995 Rugby World Cup. Loftus Versfeld was used for some matches in the 2010 FIFA World Cup.
Association football is one of the most popular sports in the city. There are currently two football teams in the city playing in South Africa's top-flight football league, the Premier Soccer League. They are Mamelodi Sundowns and Supersport United. Supersport United were the 2008–09 PSL Champions. Following the 2011/2012 season the University of Pretoria F.C. gained promotion to the South African Premier Division, the top domestic league, becoming the third Pretoria-based team in the league. After a poor league finish in the 2015/2016 season, University of Pretoria F.C. were relegated to the National First Division, the second-highest football league in South Africa, in the 2016 Premier Soccer League promotion/relegation play-offs.
Cricket is also a popular game in the city. As there is no international cricket stadium in Pretoria itself, the city does not host top-class cricket tournaments, although nearby Centurion has Supersport Park, an international cricket stadium that has hosted many important tournaments, such as the 2003 Cricket World Cup, the 2007 ICC World Twenty20, the 2009 IPL and the 2009 ICC Champions Trophy. The franchise team closest to Pretoria is the Titans, although Northerns occasionally play in the city in South Africa's provincial competitions. Many Pretoria-born cricketers have gone on to play for South Africa, including current captain AB de Villiers and T20 captain Faf du Plessis.
The Pretoria Transnet Blind Cricket Club (PTBCC) is situated in Pretoria and is currently the biggest blind cricket club in South Africa. Its field is at the Transnet Engineering campus on Lynette Street, a home of disability cricket in the city. The PTBCC has played many successful blind cricket matches against able-bodied teams such as the South African Indoor Cricket Team and the TuksCricket Junior Academy. Northerns Blind Cricket is the provincial body that governs the PTBCC and Filefelfia Secondary School. The Northerns blind cricket team won the 40-over national blind cricket tournament held in Cape Town in April 2014.
Places of worship in the city are predominantly Christian churches: the Zion Christian Church, the Apostolic Faith Mission of South Africa, the Assemblies of God, the Baptist Union of Southern Africa (Baptist World Alliance), the Methodist Church of Southern Africa (World Methodist Council), the Anglican Church of Southern Africa (Anglican Communion), the Presbyterian Church of Africa (World Communion of Reformed Churches) and the Roman Catholic Archdiocese of Pretoria (Catholic Church). There are also Muslim mosques and Hindu temples.
Pretoria has a small Jewish community of around 3,000. Jewish citizens have been in Pretoria since its foundation in the 19th century and played an important role in its industrial and economic growth. A Mr. De Vries, the first Jewish inhabitant of Pretoria, was a prominent citizen and prosecutor, a member of the Volksraad and a pioneer of the Afrikaans language. Another famed Jewish Pretorian was Sammy Marks.
Other early Jewish settlers, many of them immigrants from Lithuania, were not as educated as De Vries and often did not speak Dutch, Afrikaans, or English. Many of them spoke only Yiddish and made a living as shopkeepers in the local retail industry. Most Jewish residents stayed neutral in the Second Boer War, though some joined the South African Republic army.
The first congregation was founded between 1890 and 1895, and in 1898 the first synagogue opened on Paul Kruger Street. A second synagogue, known as the Great Synagogue, opened in 1922. Both synagogues are no longer in operation, but a Reformed synagogue, Temple Menorah, opened in the early 1950s.
The golden age of Pretoria's Jewish community came in the early 20th century, when many Jewish sports clubs, charities and youth groups flourished. After 1948, many Jews left for Cape Town or Johannesburg.
The synagogue on Paul Kruger Street was purchased by the government in 1952 to become the new home of the High Court, where prominent opposition figures in the Anti-Apartheid Movement were tried: Nelson Mandela, Walter Sisulu and 26 others were prosecuted for treason from 1 August 1958 to 29 March 1961, and the Rivonia Trial was held there in 1963–1964.
Two Jewish schools were founded in Pretoria: the Miriam Marks School, which opened in 1905, and the Carmel School, which opened in 1959. Only the latter, which currently also operates as a synagogue, remains. Pretoria's Reformed congregation shares a rabbi with its Johannesburg counterpart, though its synagogue no longer operates and services take place in worshippers' private homes.
A Buddhist centre, the Jang Chup Chopel Rigme Centre ("Centre of Light"), was founded in early January 2015 by Duan Pienaar, or Gyalten Nyima (his adopted monastic name), in Waverley near Pretoria-Moot. Pienaar is the only Afrikaner ordained in the highly selective Tibetan tantric Buddhist community in Bylakuppe in southern India. His instructor, Lama Kyabje Choden Rinpoche, is the highest tantric master after the Dalai Lama. Pienaar, who studied under Buddhist teachers for twenty years, spent two years in India.
The city is a major commercial centre and an important industrial centre. Its main industries are iron and steel works, copper casting, and the manufacture of automobiles, railway carriages and heavy machinery.
Pretoria has a number of industrial areas, business districts and small home businesses. A number of chambers of commerce exist for Pretoria and its business community including Pretoriaweb, a business networking group that meets once a month to discuss the issues of doing business in Pretoria. The members of Pretoriaweb also discuss issues in various social media environments and on the website.
The Pretoria civic arms, designed by Dr. Frans Engelenburg, were granted by the College of Arms on 7 February 1907. They were registered with the Transvaal Provincial Administration in March 1953 and at the Bureau of Heraldry in May 1968. The Bureau provided new artwork, in a more modern style, in 1989.
The arms were: "Gules, a mimosa tree eradicated proper within an orle of eight bees volant, Or, an inescutcheon Or and thereon a Roman praetor seated proper". In layman's terms: a red shield displaying an uprooted mimosa tree surrounded by a border of eight golden bees; superimposed on the tree is a golden shield depicting a Roman praetor. The tree represented growth, the bees industry, and the praetor (judge) was a heraldic pun on the name.
The crest was a three-towered golden castle; the supporters were an eland and a kudu; and the motto "Praestantia praevaleat Pretoria".
The coat of arms fell out of favour after the City Council amalgamated with its surrounding councils to form the City of Tshwane Metropolitan Municipality.
Schools for foreign students:
Pretoria is one of South Africa's leading academic cities, home to the country's largest residential university, its largest distance-education university and a research-intensive university. The three universities in the city, in order of founding, are as follows:
The University of South Africa (commonly referred to as Unisa), founded in 1873 as the University of the Cape of Good Hope, is the largest university on the African continent and attracts a third of all higher education students in South Africa. It spent most of its early history as an examining agency for Oxford and Cambridge universities and as an incubator from which most other universities in South Africa are descended. In 1946 it was given a new role as a distance education university and in 2012 it had a student headcount of over 300,000 students, including African and international students in 130 countries worldwide, making it one of the world's mega universities. Unisa is a dedicated open distance education institution and offers both vocational and academic programmes.
The University of Pretoria (commonly referred to as UP, Tuks, or Tukkies) is a multi campus public research university. The university was established in 1908 as the Pretoria campus of the Johannesburg based Transvaal University College and is the fourth South African institution in continuous operation to be awarded university status. Established in 1920, the University of Pretoria Faculty of Veterinary Science is the second oldest veterinary school in Africa and the only veterinary school in South Africa. In 1949 the university launched the first MBA programme outside of North America. Since 1997, the university has produced more research outputs every year than any other institution of higher learning in South Africa, as measured by the Department of Education's accreditation benchmark.
The Tshwane University of Technology (commonly referred to as TUT) is a higher education institution offering vocationally oriented diplomas and degrees. It came into being through a merger of Technikon Northern Gauteng, Technikon North-West and Technikon Pretoria. TUT caters for approximately 60,000 students and has become the largest residential higher education institution in South Africa.
The Council for Scientific and Industrial Research (CSIR) is South Africa's central scientific research and development organisation. It was established by an act of parliament in 1945 and is situated on its own campus in the city. It is the largest research and development organisation in Africa and accounts for about 10% of the entire African R&D budget. It has a staff of approximately 3,000 technical and scientific researchers, often working in multi-disciplinary teams. In 2002, Dr. Sibusiso Sibisi was appointed as president and CEO of the CSIR.
Pretoria has earned a reputation as the centre of South Africa's military and is home to several military facilities of the South African National Defence Force:
This complex is the headquarters to the South African Air Force.
A military complex that houses the following:
A military complex located on the corner of Patriot Street and Koraalboom Road that houses the following military headquarters:
This base is situated in the suburb of Salvokop and is divided into two parts:
Thaba Tshwane is a large military area south-west of the Pretoria Central Business District and north of Air Force Base Swartkop. It is the headquarters of several Army units:
The military base also houses 1 Military Hospital and the Military Police School. Within Thaba Tshwane a facility known as "TEK Base" exists, which houses its own units:
The Wonderboom Military Base is located adjacent to the Wonderboom Airport and is the headquarters of the South African Army Signals Formation. It also houses the School of Signals, 1 Signal Regiment, 2 Signal Regiment, 3 Electronic Workshop, 4 Signal Regiment and 5 Signal Regiment.
The South African Air Force College, the South African Military Health Service School for Military Health Training and the South African Army College are situated in the Thaba Tshwane military base and are used to train commissioned and non-commissioned officers to perform effectively in combat and command roles in the various branches of the South African National Defence Force. The South African Defence Intelligence College is also located in the Sterrewag suburb north of Air Force Base Waterkloof.
While technically not within the city limits of Pretoria, Air Force Base Swartkop and Air Force Base Waterkloof are often used for defence related matters within the city. These may include aerial military transport duties within the city, aerospace monitoring and defence as well as VIP transport to and from the city.
On 26 May 2005 the South African Geographical Names Council (SAGNC), which is linked to the Directorate of Heritage in the Department of Arts and Culture, approved changing the name of Pretoria to Tshwane, which is already the name of the Metropolitan Municipality in which Pretoria and a number of surrounding cities are located. Although the name change was approved by the SAGNC, it has not yet been approved by the Minister of Arts and Culture, who has requested further research; the matter remains under consideration. Should the Minister approve the name change, the name will be published in the Government Gazette, giving the public an opportunity to comment. The Minister can then refer that public response back to the SAGNC before presenting his recommendation to parliament, which will vote on the change. Various public interest groups have warned that the name change will be challenged in court should the Minister approve the renaming. The lengthy process involved made it unlikely the name would change anytime soon, if ever, even assuming the Minister had approved the change in early 2006.
The Tshwane Metro Council has advertised "Tshwane" as "Africa's leading capital city" since the name change was approved by the SAGNC in 2005. This has led to further controversy, however, as the name of the city had not yet been changed officially, and the council was, at best, acting prematurely. Following a complaint lodged with the Advertising Standards Authority (ASA), it was ruled that such advertisements are deliberately misleading and should be withdrawn from all media. Despite the rulings of the ASA, Tshwane Metro Council failed to discontinue their "City of Tshwane" advertisements. As a result, the ASA requested that Tshwane Metro pay for advertisements in which it admits that it has misled the public. Refusing to abide by the ASA's request, the Metro Council was banned consequently from placing any advertisements in the South African media that refer to Tshwane as the capital. ASA may still place additional sanctions on the Metro Council that would prevent it from placing any advertisements in the South African media, including council notices and employment vacancies.
After the ruling, the Metro Council continued to place "Tshwane" advertisements, but placed them on council-owned advertising boards and bus stops throughout the municipal area. In August 2007, an internal memo was leaked to the media in which the Tshwane mayor sought advice from the premier of Gauteng on whether the municipality could be called the "City of Tshwane" instead of just "Tshwane". This could increase confusion about the distinction between the city of Pretoria and the municipality of Tshwane.
In early 2010 it was again rumoured that the South African government would make a decision regarding the name; however, a media briefing regarding name changes, at which it may have been discussed, was cancelled shortly before taking place. Rumours of the name change provoked outrage from Afrikaner civil rights and political groups. It later emerged that the registration of the municipality as a geographic place had been published in the government gazette, as it had been too late to withdraw the name from the publication, but it was announced that the name had been withdrawn, pending "further work" by officials. The following week, the registration of "Tshwane" was officially withdrawn in the Government Gazette. The retraction had reportedly been ordered at the behest of the Deputy President of South Africa Kgalema Motlanthe, acting on behalf of President Jacob Zuma, as Minister of Arts and Culture Lulu Xingwana had acted contrary to the ANC's position, subsequently articulated by its secretary general Gwede Mantashe, that Pretoria and the municipality are separate entities.
In March 2010, the "Tshwane Royal House Committee", claiming to be descendants of Chief Tshwane, called for the name to be changed, and for the descendants of Chief Tshwane to be recognised, and to be made part of the administration of the municipality.
According to comments made by Mayor Kgosientso Ramokgopa in late 2011, the change would occur in 2012. However, there remained considerable uncertainty about the issue.
To date, the proposed name change has not occurred.
Pretoria is twinned with: | https://en.wikipedia.org/wiki?curid=25002 |
Psychiatrist
A psychiatrist is a physician who specializes in psychiatry, the branch of medicine devoted to the diagnosis, prevention, study, and treatment of mental disorders. Psychiatrists are medical doctors, unlike psychologists, and must evaluate patients to determine whether their symptoms are the result of a physical illness, a combination of physical and mental ailments, or strictly psychiatric. A psychiatrist usually works as the clinical leader of the multi-disciplinary team, which may comprise psychologists, social workers, occupational therapists, and nursing staff. Psychiatrists have broad training in a bio-psycho-social approach to assessment and management of mental illness.
As part of the clinical assessment process, psychiatrists may employ a mental status examination; a physical examination; brain imaging such as a computerized tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET) scan; and blood testing. Psychiatrists prescribe medicine, and may also use psychotherapy, although they could also primarily concentrate on medical management and refer to a psychologist or other specialized therapist for weekly to bi-monthly psychotherapy.
The field of psychiatry has many subspecialties (also known as fellowships) that require additional training which are certified by the American Board of Psychiatry and Neurology (ABPN) and require Maintenance of Certification Program (MOC) to continue. These include the following:
Further, other specialties that exist include:
The United Council for Neurologic Subspecialties in the United States offers certification and fellowship program accreditation in the subspecialty 'Behavioral Neurology and Neuropsychiatry' (BNNP) - which is open to both neurologists and psychiatrists.
Some psychiatrists specialize in helping certain age groups. Pediatric psychiatry is the area of the profession working with children in addressing psychological problems. Psychiatrists specializing in geriatric psychiatry work with the elderly and are called geriatric psychiatrists or geropsychiatrists. Those who practice psychiatry in the workplace are called occupational psychiatrists in the United States and occupational psychology is the name used for the most similar discipline in the UK. Psychiatrists working in the courtroom and reporting to the judge and jury, in both criminal and civil court cases, are called forensic psychiatrists, who also treat mentally disordered offenders and other patients whose condition is such that they have to be treated in secure units.
Other psychiatrists and mental health professionals in the field of psychiatry may also specialize in psychopharmacology, psychotherapy, psychiatric genetics, neuroimaging, dementia-related disorders such as Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), sleep medicine, pain medicine, palliative medicine, eating disorders, sexual disorders, women's health, global mental health, early psychosis intervention, mood disorders, and anxiety disorders such as obsessive–compulsive disorder (OCD) and posttraumatic stress disorder (PTSD).
Psychiatrists work in a wide variety of settings. Some are full-time medical researchers; many see patients in private medical practices; consultation-liaison psychiatrists see patients in hospital settings where psychiatric and other medical conditions interact.
While requirements to become a psychiatrist differ from country to country, all require a medical degree.
In the U.S. and Canada one must first attain the degree of M.D. or D.O., followed by practice as a psychiatric resident for another four years (five years in Canada). This extended period involves comprehensive training in psychiatric diagnosis, psychopharmacology, medical care issues, and psychotherapies. All accredited psychiatry residencies in the United States require proficiency in cognitive-behavioral, brief, psychodynamic, and supportive psychotherapies. Psychiatry residents are required to complete at least four post-graduate months of internal medicine or pediatrics, plus a minimum of two months of neurology during their first year of residency, referred to as an "internship". After completing their training, psychiatrists are eligible to take a specialty board examination to become board-certified. The total amount of time required to complete educational and training requirements in the field of psychiatry in the United States is twelve years after high school. Subspecialists in child and adolescent psychiatry are required to complete a two-year fellowship program, the first year of which can run concurrently with the fourth year of the general psychiatry residency program. This adds one to two years of training.
In the United Kingdom, psychiatrists must hold a medical degree. These degrees are often abbreviated MB BChir, MB BCh, MB ChB, BM BS, or MB BS. Following this, the individual will work as a Foundation House Officer for two additional years in the UK, or one year as an Intern in the Republic of Ireland, to achieve registration as a basic medical practitioner. Training in psychiatry can then begin, and it is taken in two parts: three years of Basic Specialist Training culminating in the MRCPsych exam, followed by three years of Higher Specialist Training, referred to as "ST4-6" in the UK and "Senior Registrar Training" in the Republic of Ireland. Candidates who hold the MRCPsych and have completed basic training must interview again for higher specialist training. At this stage, the development of special interests, such as forensic or child/adolescent psychiatry, takes place. At the end of 3 years of higher specialist training, candidates are awarded a CCT (UK) or CCST (Ireland), both meaning Certificate of Completion of (Specialist) Training. At this stage, the psychiatrist can register as a specialist, and the qualification of CC(S)T is recognized in all EU/EEA states. As such, training in the UK and Ireland is considerably longer than in the US or Canada and frequently takes around 8–9 years following graduation from medical school. Those with a CC(S)T will be able to apply for Consultant posts. Those with training from outside the EU/EEA should consult local/native medical boards to review their qualifications and eligibility for equivalence recognition (for example, those with a US residency and ABPN qualification).
In the Netherlands, one must complete medical school after which one is certified as a medical doctor. After a strict selection program, one can specialize in psychiatry: a 4.5-year specialization. During this specialization, the resident has to do a 6-month residency in the field of social psychiatry, a 12-month residency in a field of their own choice (which can be child psychiatry, forensic psychiatry, somatic medicine, or medical research). To become an adolescent psychiatrist, one has to do an extra specialization period of 2 more years. In short, this means that it takes at least 10.5 years of study to become a psychiatrist which can go up to 12.5 years if one becomes a children's and adolescent psychiatrist.
In India, an MBBS degree is the basic qualification needed to pursue Psychiatry. After completing the MBBS (including internship), one can sit the various postgraduate medical entrance exams and take an MD in Psychiatry, which is a 3-year course. A Diploma in Psychiatry or DNB in Psychiatry can also be taken to become a Psychiatrist.
In Pakistan, one must complete basic medical education (MBBS), then register with the Pakistan Medical and Dental Council as a General Practitioner after a mandatory one-year internship (house job). After registration with the PMDC, one sits the FCPS-I exam, followed by four years of training in Psychiatry under the College of Physicians and Surgeons Pakistan. Training includes three-month rotations in General Medicine, Neurology, and Clinical Psychology during the first two years, a mid-training IMM (Intermediate Module) exam, and a final exam after four years.
| https://en.wikipedia.org/wiki?curid=25004 |
Peano axioms
In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete.
The need to formalize arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction. In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic. In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of them as a collection of axioms in his book, "The principles of arithmetic presented by a new method" ().
The Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second order statement of the principle of mathematical induction over the natural numbers. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema.
When Peano formulated his axioms, the language of mathematical logic was in its infancy. The system of logical notation he created to present the axioms did not prove to be popular, although it was the genesis of the modern notation for set membership (∈, which comes from Peano's ε) and implication (⊃, which comes from Peano's reversed 'C'.) Peano maintained a clear distinction between mathematical and logical symbols, which was not yet common in mathematics; such a separation had first been introduced in the "Begriffsschrift" by Gottlob Frege, published in 1879. Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole and Schröder.
The Peano axioms define the arithmetical properties of "natural numbers", usually represented as a set N or ℕ. The non-logical symbols for the axioms consist of a constant symbol 0 and a unary function symbol "S".
The first axiom states that the constant 0 is a natural number:
The next four axioms describe the equality relation. Since they are logically valid in first-order logic with equality, they are not considered to be part of "the Peano axioms" in modern treatments.
The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a single-valued "successor" function "S".
Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number. This choice is arbitrary, as axiom 1 does not endow the constant 0 with any additional properties. However, because 0 is the additive identity in arithmetic, most modern formulations of the Peano axioms start from 0. Axioms 1, 6, 7, 8 define a unary representation of the intuitive notion of natural numbers: the number 1 can be defined as "S"(0), 2 as "S"("S"(0)), etc. However, considering the notion of natural numbers as being defined by these axioms, axioms 1, 6, 7, 8 do not imply that the successor function generates all the natural numbers different from 0. Put differently, they do not guarantee that every natural number other than zero must succeed some other natural number.
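The unary representation described above is easy to sketch executably. The following encoding is an illustration of mine, not notation from the axioms: a numeral is a nested tuple whose nesting depth counts applications of the successor.

```python
# Hypothetical unary encoding of Peano numerals: the constant 0 is the
# empty tuple, and the successor S wraps its argument in one more tuple.

Zero = ()

def S(n):
    """Successor function: one more application of S."""
    return (n,)

def to_int(n):
    """Count successor applications to recover the familiar value."""
    count = 0
    while n != Zero:
        n = n[0]
        count += 1
    return count

one = S(Zero)        # S(0)
two = S(S(Zero))     # S(S(0))
```

In this encoding 0 is never a successor (S(n) is never the empty tuple) and S is injective, mirroring two of the axioms.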
The intuitive notion that each natural number can be obtained by applying "successor" sufficiently often to zero requires an additional axiom, which is sometimes called the "axiom of induction".
The induction axiom is sometimes stated in the following form:
In Peano's original formulation, the induction axiom is a second-order axiom. It is now common to replace this second-order principle with a weaker first-order induction scheme. There are important differences between the second-order and first-order formulations, as discussed in the section below.
The Peano axioms can be augmented with the operations of addition and multiplication and the usual total (linear) ordering on N. The respective functions and relations are constructed in set theory or second-order logic, and can be shown to be unique using the Peano axioms.
Addition is a function that maps two natural numbers (two elements of N) to another one. It is defined recursively as: "a" + 0 = "a", and "a" + "S"("b") = "S"("a" + "b").
For example, "a" + 1 = "a" + "S"(0) = "S"("a" + 0) = "S"("a").
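The two recursion equations for addition can be run directly; this is a sketch of mine, reusing an assumed nested-tuple encoding of unary numerals.

```python
# a + 0 = a and a + S(b) = S(a + b), over unary numerals encoded as
# nested tuples (Zero is the empty tuple, S wraps in one more tuple).

Zero = ()
def S(n): return (n,)

def add(a, b):
    if b == Zero:              # a + 0 = a
        return a
    return S(add(a, b[0]))     # a + S(b') = S(a + b')

def to_int(n):
    count = 0
    while n != Zero:
        n, count = n[0], count + 1
    return count
```

For instance, add(S(S(Zero)), S(Zero)) evaluates to S(S(S(Zero))), mirroring 2 + 1 = 3.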
The structure (N, +) is a commutative monoid with identity element 0. (N, +) is also a cancellative magma, and thus embeddable in a group. The smallest group embedding N is the integers.
Similarly, multiplication is a function mapping two natural numbers to another one. Given addition, it is defined recursively as: "a" · 0 = 0, and "a" · "S"("b") = "a" + ("a" · "b").
It is easy to see that "S"(0) (or "1", in the familiar language of decimal representation) is the multiplicative right identity: "a" · "S"(0) = "a" + ("a" · 0) = "a" + 0 = "a".
To show that "S"(0) is also the multiplicative left identity requires the induction axiom due to the way multiplication is defined:
Therefore, by the induction axiom "S"(0) is the multiplicative left identity of all natural numbers. Moreover, it can be shown that multiplication distributes over addition:
Thus, (N, +, 0, ·, "S"(0)) is a commutative semiring.
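The recursion for multiplication can be sketched on top of recursive addition; as before, the nested-tuple encoding is an assumption of mine, not part of the article.

```python
# a * 0 = 0 and a * S(b) = a + (a * b), built on recursive addition
# over unary numerals encoded as nested tuples.

Zero = ()
def S(n): return (n,)

def add(a, b):
    return a if b == Zero else S(add(a, b[0]))

def mul(a, b):
    if b == Zero:                # a * 0 = 0
        return Zero
    return add(a, mul(a, b[0]))  # a * S(b') = a + (a * b')

def to_int(n):
    count = 0
    while n != Zero:
        n, count = n[0], count + 1
    return count
```

Note that mul(a, S(Zero)) reduces to add(a, Zero), i.e. to a, mirroring the right-identity computation, while the left identity genuinely needs induction.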
The usual total order relation ≤ on natural numbers can be defined as follows, assuming 0 is a natural number: for all "a", "b" ∈ N, "a" ≤ "b" if and only if there exists some "c" ∈ N such that "a" + "c" = "b".
This relation is stable under addition and multiplication: for "a", "b", "c" ∈ N, if "a" ≤ "b", then "a" + "c" ≤ "b" + "c" and "a" · "c" ≤ "b" · "c".
Thus, the structure (N, +, ·, 1, 0, ≤) is an ordered semiring; because there is no natural number between 0 and 1, it is a discrete ordered semiring.
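On the unary encoding, "a ≤ b iff some c satisfies a + c = b" is equivalent to a simple structural comparison; the sketch below (my own illustration) uses that equivalence.

```python
# Order on unary numerals: 0 <= b always; S(a') <= 0 never;
# S(a') <= S(b') iff a' <= b'.

Zero = ()
def S(n): return (n,)

def add(a, b):
    return a if b == Zero else S(add(a, b[0]))

def leq(a, b):
    if a == Zero:            # 0 <= b for every b
        return True
    if b == Zero:            # a successor is never <= 0
        return False
    return leq(a[0], b[0])   # peel one S from each side
```

Stability under addition can be spot-checked: whenever leq(a, b) holds, so does leq(add(a, c), add(b, c)).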
The axiom of induction is sometimes stated in the following form that uses a stronger hypothesis, making use of the order relation "≤":
This form of the induction axiom, called "strong induction", is a consequence of the standard formulation, but is often better suited for reasoning about the ≤ order. For example, to show that the naturals are well-ordered—every nonempty subset of N has a least element—one can reason as follows. Let a nonempty "X" ⊆ N be given and assume "X" has no least element.
Thus, by the strong induction principle, for every "n" ∈ N, "n" ∉ "X". Thus, "X" = ∅, which contradicts "X" being a nonempty subset of N. Thus "X" has a least element.
All of the Peano axioms except the ninth axiom (the induction axiom) are statements in first-order logic. The arithmetical operations of addition and multiplication and the order relation can also be defined using first-order axioms. The axiom of induction is in second-order, since it quantifies over predicates (equivalently, sets of natural numbers rather than natural numbers), but it can be transformed into a first-order "axiom schema" of induction. Such a schema includes one axiom per predicate definable in the first-order language of Peano arithmetic, making it weaker than the second-order axiom. The reason that it is weaker is that the number of predicates in first-order language is countable, whereas the number of sets of natural numbers is uncountable. Thus, there exist sets that cannot be described in first-order language (in fact, most sets have this property).
First-order axiomatizations of Peano arithmetic have another technical limitation. In second-order logic, it is possible to define the addition and multiplication operations from the successor operation, but this cannot be done in the more restrictive setting of first-order logic. Therefore, the addition and multiplication operations are directly included in the signature of Peano arithmetic, and axioms are included that relate the three operations to each other.
The following list of axioms (along with the usual axioms of equality), which contains six of the seven axioms of Robinson arithmetic, is sufficient for this purpose:
In addition to this list of numerical axioms, Peano arithmetic contains the induction schema, which consists of a recursively enumerable set of axioms. For each formula "φ"("x", "y"1, ..., "y""k") in the language of Peano arithmetic, the first-order induction axiom for "φ" is the sentence
where ȳ is an abbreviation for "y"1, ..., "y""k". The first-order induction schema includes every instance of the first-order induction axiom, that is, it includes the induction axiom for every formula "φ".
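As an executable analogue of the schema (my own illustration, with a hypothetical helper name), any one instance can at least be falsified mechanically by checking the base case φ(0) and the induction step φ(n) → φ(n+1) up to a finite bound; no finite check, of course, proves an instance.

```python
def induction_instance_holds(phi, bound):
    """Check the base case and step of one induction instance for
    n = 0 .. bound-1 (a falsification test, not a proof)."""
    if not phi(0):                        # base case: phi(0)
        return False
    for n in range(bound):                # step: phi(n) -> phi(n+1)
        if phi(n) and not phi(n + 1):
            return False
    return True

# An instance whose base and step hold: phi(x) == (0 + x == x).
# An instance whose step fails at n = 9:  phi(x) == (x < 10).
```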
There are many different, but equivalent, axiomatizations of Peano arithmetic. While some axiomatizations, such as the one just described, use a signature that only has symbols for 0 and the successor, addition, and multiplications operations, other axiomatizations use the language of ordered semirings, including an additional order relation symbol. One such axiomatization begins with the following axioms that describe a discrete ordered semiring.
The theory defined by these axioms is known as PA−; the theory PA is obtained by adding the first-order induction schema. An important property of PA− is that any structure "M" satisfying this theory has an initial segment (ordered by ≤) isomorphic to N. Elements in that segment are called standard elements, while other elements are called nonstandard elements.
A model of the Peano axioms is a triple (N, 0, "S"), where N is a (necessarily infinite) set, 0 ∈ N, and "S" : N → N satisfies the axioms above. Dedekind proved in his 1888 book, "The Nature and Meaning of Numbers" (, i.e., “What are the numbers and what are they good for?”) that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models (N"A", 0"A", "S""A") and (N"B", 0"B", "S""B") of the Peano axioms, there is a unique homomorphism "f" : N"A" → N"B" satisfying "f"(0"A") = 0"B" and "f"("S""A"("n")) = "S""B"("f"("n")) for all "n",
and it is a bijection. This means that the second-order Peano axioms are categorical. This is not the case with any first-order reformulation of the Peano axioms, however.
The Peano axioms can be derived from set theoretic constructions of the natural numbers and axioms of set theory such as ZF. The standard construction of the naturals, due to John von Neumann, starts from a definition of 0 as the empty set, ∅, and an operator "s" on sets defined as "s"("a") = "a" ∪ {"a"}.
The set of natural numbers N is defined as the intersection of all sets closed under "s" that contain the empty set. Each natural number is equal (as a set) to the set of natural numbers less than it:
0 = ∅, 1 = {0}, 2 = {0, 1}, and so on. The set N together with 0 and the successor function "s" satisfies the Peano axioms.
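Von Neumann's construction can be sketched with Python frozensets standing in for ZF sets (a modelling assumption of mine; the names "s", "zero", etc. are illustrative).

```python
# 0 is the empty set and s(x) = x ∪ {x}; frozensets are used because
# set elements must themselves be hashable (immutable) sets.

def s(x):
    return x | frozenset({x})

zero = frozenset()       # ∅
one = s(zero)            # {∅}
two = s(one)             # {∅, {∅}}
three = s(two)

# Each numeral is the set of all smaller numerals, so len(n) equals n.
```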
Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Another such system consists of general set theory (extensionality, existence of the empty set, and the axiom of adjunction), augmented by an axiom schema stating that a property that holds for the empty set and holds of an adjunction whenever it holds of the adjunct must hold for all sets.
The Peano axioms can also be understood using category theory. Let "C" be a category with terminal object 1"C", and define the category of pointed unary systems, US1("C") as follows:
Then "C" is said to satisfy the Dedekind–Peano axioms if US1("C") has an initial object; this initial object is known as a natural number object in "C". If (N, 0, "S") is this initial object, and ("X", 0"X", "S""X") is any other object, then the unique map "u" : (N, 0, "S") → ("X", 0"X", "S""X") is such that "u"(0) = 0"X" and "u"("S"("x")) = "S""X"("u"("x")).
This is precisely the recursive definition of 0"X" and "S""X".
Although the usual natural numbers satisfy the axioms of PA, there are other models as well (called "non-standard models"); the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way the first-order system PA is weaker than the second-order Peano axioms.
When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind's categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory. In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory.
It is natural to ask whether a countable nonstandard model can be explicitly constructed. The answer is affirmative: in 1933 Skolem provided an explicit construction of such a nonstandard model. On the other hand, Tennenbaum's theorem, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable. This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA. There is only one possible order type of a countable nonstandard model. Letting "ω" be the order type of the natural numbers, "ζ" be the order type of the integers, and "η" be the order type of the rationals, the order type of any countable nonstandard model of PA is "ω" + "ζ"·"η", which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers.
A cut in a nonstandard model "M" is a nonempty subset "C" of "M" so that "C" is downward closed ("x" < "y" and "y" ∈ "C" ⇒ "x" ∈ "C") and "C" is closed under successor. A proper cut is a cut that is a proper subset of "M". Each nonstandard model has many proper cuts, including one that corresponds to the standard natural numbers. However, the induction scheme in Peano arithmetic prevents any proper cut from being definable. The overspill lemma, first proved by Abraham Robinson, formalizes this fact.
When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a "natural number". Henri Poincaré was more cautious, saying they only defined natural numbers if they were "consistent"; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don't define anything. In 1900, David Hilbert posed the problem of proving their consistency using only finitistic methods as the second of his twenty-three problems. In 1931, Kurt Gödel proved his second incompleteness theorem, which shows that such a consistency proof cannot be formalized within Peano arithmetic itself.
Although it is widely claimed that Gödel's theorem rules out the possibility of a finitistic consistency proof for Peano arithmetic, this depends on exactly what one means by a finitistic proof. Gödel himself pointed out the possibility of giving a finitistic consistency proof of Peano arithmetic or stronger systems by using finitistic methods that are not formalizable in Peano arithmetic, and in 1958, Gödel published a method for proving the consistency of arithmetic using type theory. In 1936, Gerhard Gentzen gave a proof of the consistency of Peano's axioms, using transfinite induction up to an ordinal called ε0. Gentzen explained: "The aim of the present paper is to prove the consistency of elementary number theory or, rather, to reduce the question of consistency to certain fundamental principles". Gentzen's proof is arguably finitistic, since the transfinite ordinal ε0 can be encoded in terms of finite objects (for example, as a Turing machine describing a suitable order on the integers, or more abstractly as consisting of the finite trees, suitably linearly ordered). Whether or not Gentzen's proof meets the requirements Hilbert envisioned is unclear: there is no generally accepted definition of exactly what is meant by a finitistic proof, and Hilbert himself never gave a precise definition.
The vast majority of contemporary mathematicians believe that Peano's axioms are consistent, relying either on intuition or the acceptance of a consistency proof such as Gentzen's proof. A small number of philosophers and mathematicians, some of whom also advocate ultrafinitism, reject Peano's axioms because accepting the axioms amounts to accepting the infinite collection of natural numbers. In particular, addition (including the successor function) and multiplication are assumed to be total. Curiously, there are self-verifying theories that are similar to PA but have subtraction and division instead of addition and multiplication, which are axiomatized in such a way to avoid proving sentences that correspond to the totality of addition and multiplication, but which are still able to prove all true Π1 theorems of PA, and yet can be extended to a consistent theory that proves its own consistency (stated as the non-existence of a Hilbert-style proof of "0=1").
Procyon
Procyon is the brightest star in the constellation of Canis Minor and usually the eighth-brightest star in the night sky, with a visual apparent magnitude of 0.34. It has the Bayer designation α Canis Minoris, which is Latinised to Alpha Canis Minoris, and abbreviated α CMi or Alpha CMi, respectively. As determined by the European Space Agency Hipparcos astrometry satellite, this system lies at a distance of just , and is therefore one of Earth's nearest stellar neighbours.
A binary star system, Procyon consists of a white-hued main-sequence star of spectral type F5 IV–V, designated component A, in orbit with a faint white dwarf companion of spectral type DQZ, named Procyon B. The pair orbit each other with a period of 40.8 years and an eccentricity of 0.4.
Procyon is usually the eighth-brightest star in the night sky, culminating at midnight on January 14. It forms one of the three vertices of the Winter Triangle asterism, in combination with Sirius and Betelgeuse. The prime period for evening viewing of Procyon is in late winter in the northern hemisphere.
It has a color index of 0.42, and its hue has been described as having a faint yellow tinge to it.
Procyon is a binary star system with a bright primary component, Procyon A, having an apparent magnitude of 0.34, and a faint companion, Procyon B, at magnitude 10.7. The pair orbit each other with a period of 40.82 years along an elliptical orbit with an eccentricity of 0.407, more eccentric than Mercury's. The plane of their orbit is inclined at an angle of 31.1° to the line of sight with the Earth. The average separation of the two components is 15.0 AU, a little less than the distance between Uranus and the Sun, though the eccentric orbit carries them as close as 8.9 AU and as far as 21.0 AU.
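The quoted closest and farthest separations follow from the semi-major axis and eccentricity; a quick check of the arithmetic (my own, using the values given in the text):

```python
# Periastron and apastron of an elliptical orbit are a(1 - e) and
# a(1 + e), for semi-major axis a and eccentricity e.

a_au = 15.0   # average separation quoted in the text, in AU
e = 0.407     # orbital eccentricity quoted in the text

periastron = a_au * (1 - e)   # about 8.9 AU, matching the quoted value
apastron = a_au * (1 + e)     # about 21.1 AU, near the quoted 21.0 AU
```

The small difference from the quoted 21.0 AU presumably reflects rounding of the published orbital elements.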
The primary has a stellar classification of F5IV–V, indicating that it is a late-stage F-type main-sequence star. Procyon A is bright for its spectral class, suggesting that it is evolving into a subgiant that has nearly fused its hydrogen core into helium, after which it will expand as the nuclear reactions move outside the core. As it continues to expand, the star will eventually swell to about 80 to 150 times its current diameter and become a red or orange color. This will probably happen within 10 to 100 million years.
The effective temperature of the stellar atmosphere is an estimated 6,530 K, giving Procyon A a white hue. It is 1.5 times the solar mass (), twice the solar radius (), and has 7 times the Sun's luminosity (). Both the core and the envelope of this star are convective, with the two regions separated by a wide radiation zone.
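The quoted radius, temperature, and luminosity can be cross-checked with the Stefan–Boltzmann law expressed in solar units, L/L☉ = (R/R☉)² (T/T☉)⁴. A quick sketch, assuming a solar effective temperature of 5,772 K as the reference value (an assumption, not a figure from the text):

```python
# Consistency check of the quoted luminosity via the Stefan-Boltzmann law.
T_procyon = 6530.0   # K, from the text
T_sun = 5772.0       # K, assumed solar reference temperature
radius_ratio = 2.0   # Procyon A is about twice the solar radius

# L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
luminosity_ratio = radius_ratio**2 * (T_procyon / T_sun) ** 4
print(f"L/Lsun ≈ {luminosity_ratio:.1f}")  # ~6.6, consistent with the quoted ~7
```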
In late June 2004, Canada's orbital MOST satellite telescope carried out a 32-day survey of Procyon A. The continuous optical monitoring was intended to confirm solar-like oscillations in its brightness observed from Earth and to permit asteroseismology. No oscillations were detected and the authors concluded that the theory of stellar oscillations may need to be reconsidered. However, others argued that the non-detection was consistent with published ground-based radial velocity observations of solar-like oscillations. Subsequent observations in radial velocity have confirmed that Procyon is indeed oscillating.
Photometric measurements from the NASA Wide Field Infrared Explorer (WIRE) satellite from 1999 and 2000 showed evidence of granulation (convection near the surface of the star) and solar-like oscillations. Unlike the MOST result, the variation seen in the WIRE photometry was in agreement with radial velocity measurements from the ground. Additional observations with MOST taken in 2007 were able to detect oscillations.
Like Sirius B, Procyon B is a white dwarf that was inferred from astrometric data long before it was observed. Its existence had been postulated by German astronomer Friedrich Bessel as early as 1844, and, although its orbital elements had been calculated by his countryman Arthur Auwers in 1862 as part of his thesis, Procyon B was not visually confirmed until 1896 when John Martin Schaeberle observed it at the predicted position using the 36-inch refractor at Lick Observatory. It is more difficult to observe from Earth than Sirius B, due to a greater apparent magnitude difference and smaller angular separation from its primary.
At , Procyon B is considerably less massive than Sirius B; however, the peculiarities of degenerate matter ensure that it is larger than its more famous neighbor, with an estimated radius of 8,600 km, versus 5,800 km for Sirius B. The radius agrees with white dwarf models that assume a carbon core. It has a stellar classification of DQZ, having a helium-dominated atmosphere with traces of heavy elements. For reasons that remain unclear, the mass of Procyon B is unusually low for a white dwarf star of its type. With a surface temperature of 7,740 K, it is also much cooler than Sirius B; this is a testament to its lesser mass and greater age. The mass of the progenitor star for Procyon B was about and it came to the end of its life some billion years ago, after a main-sequence lifetime of million years.
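The inverse mass–radius behaviour of degenerate matter mentioned above can be illustrated with the rough zero-temperature polytrope scaling R ∝ M^(−1/3): a less massive white dwarf is the larger one. The masses below are assumed literature values, not figures from the text, and realistic carbon-core models deviate noticeably from this crude power law:

```python
# Illustrative sketch only: zero-temperature polytrope scaling R ~ M^(-1/3).
m_sirius_b = 1.02    # solar masses (assumed literature value)
m_procyon_b = 0.59   # solar masses (assumed literature value)
r_sirius_b = 5800.0  # km, from the text

# Under the inverse scaling, the less massive Procyon B comes out larger.
r_procyon_b_est = r_sirius_b * (m_sirius_b / m_procyon_b) ** (1 / 3)
print(f"estimated Procyon B radius ≈ {r_procyon_b_est:.0f} km")
print(r_procyon_b_est > r_sirius_b)  # True
```

The simple scaling reproduces the qualitative result (Procyon B larger than Sirius B) but underestimates the quoted 8,600 km radius, which comes from detailed carbon-core models.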
Attempts to detect X-ray emission from Procyon with nonimaging, soft X-ray–sensitive detectors prior to 1975 failed. Extensive observations of Procyon were carried out with the Copernicus and TD-1A satellites in the late 1970s. The X-ray source associated with Procyon AB was observed on April 1, 1979, with the Einstein Observatory high-resolution imager (HRI). The HRI X-ray pointlike source location is ~4" south of Procyon A, on the edge of the 90% confidence error circle, indicating identification with Procyon A rather than Procyon B which was located about 5" north of Procyon A (about 9" from the X-ray source location).
"α Canis Minoris" (Latinised to "Alpha Canis Minoris") is the star's Bayer designation.
The name "Procyon" comes from the Ancient Greek (""), meaning "before the dog", since it precedes the "Dog Star" Sirius as it travels across the sky due to Earth's rotation. (Although Procyon has a greater right ascension, it also has a more northerly declination, which means it will rise above the horizon earlier than Sirius from most northerly latitudes.) In Greek mythology, Procyon is associated with Maera, a hound belonging to Erigone, daughter of Icarius of Athens. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included "Procyon" for the star α Canis Minoris A.
The two dog stars are referred to in the most ancient literature and were venerated by the Babylonians and the Egyptians. In Babylonian mythology, Procyon was known as Nangar (the Carpenter), an aspect of Marduk, involved in constructing and organising the celestial sky.
The constellations in Macedonian folklore represented agricultural items and animals, reflecting their village way of life. To them, Procyon and Sirius were "Volci" "the wolves", circling hungrily around Orion which depicted a plough with oxen.
Rarer names are the Latin translation of Procyon, "Antecanis", and the Arabic-derived names "Al Shira" and "Elgomaisa". Medieval astrolabes of England and Western Europe used a variant of this, "Algomeiza/Algomeyza". "Al Shira" derives from ', "the Syrian sign" (the other sign being Sirius; "Syria" is supposedly a reference to its northern location relative to Sirius); "Elgomaisa" derives from ' "the bleary-eyed (woman)", in contrast to "the teary-eyed (woman)", which is Sirius. (See Gomeisa.) At the same time this name is synonymous with the Turkish name "Rumeysa", and it is a commonly used name in Turkey.
In Chinese, (), meaning "South River", refers to an asterism consisting of Procyon, ε Canis Minoris and β Canis Minoris. Consequently, Procyon itself is known as (, ). It is part of the Vermilion Bird.
The Hawaiians saw Procyon as part of an asterism "Ke ka o Makali'i" ("the canoe bailer of Makali'i") that helped them navigate at sea. Called "Puana" ("blossom"), it formed this asterism with Capella, Sirius, Castor, and Pollux. In Tahitian lore, Procyon was one of the pillars propping up the sky, known as "Anâ-tahu'a-vahine-o-toa-te-manava" ("star-the-priestess-of-brave-heart"), the pillar for elocution. The Maori knew the star as "Puangahori".
Procyon appears on the flag of Brazil, symbolising the state of Amazonas.
The Kalapalo people of Mato Grosso state in Brazil called Procyon and Canopus "Kofongo" ("Duck"), with Castor and Pollux representing his hands. The asterism's appearance signified the coming of the rainy season and an increase in manioc, a food staple used at feasts to feed guests.
Known as "Sikuliarsiujuittuq" to the Inuit, Procyon was quite significant in their astronomy and mythology. Its eponymous name means "the one who never goes onto the newly formed sea-ice", and refers to a man who stole food from his village's hunters because he was too obese to hunt on ice. He was killed by the other hunters who convinced him to go on the sea ice. Procyon received this designation because it typically appears red (though sometimes slightly greenish) as it rises during the Arctic winter; this red color was associated with Sikuliarsiujuittuq's bloody end.
Were the Sun to be observed from this star system, it would appear as a magnitude 2.55 star in the constellation Aquila, at the coordinates diametrically opposite Procyon's, right ascension , declination . It would be as bright as β Scorpii is in our sky. Canis Minor would, of course, be missing its brightest star.
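The magnitude 2.55 figure follows from the distance-modulus relation m = M + 5 log₁₀(d) − 5. A quick check, assuming the Sun's absolute visual magnitude of about 4.83 and Procyon's distance of roughly 3.5 parsecs (both reference values, not stated in the text above):

```python
import math

# Distance modulus: m = M + 5*log10(d_pc) - 5.
M_sun = 4.83  # Sun's absolute visual magnitude (assumed reference value)
d_pc = 3.51   # Procyon's approximate distance in parsecs (assumed)

m_apparent = M_sun + 5 * math.log10(d_pc) - 5
print(f"apparent magnitude of the Sun from Procyon ≈ {m_apparent:.2f}")
```

The result, about 2.56, matches the quoted magnitude of 2.55 to within rounding.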
Procyon's closest neighboring star is Luyten's Star, about away, and the latter would appear as a visual magnitude 2.7 star in the night sky of a hypothetical planet orbiting Procyon. | https://en.wikipedia.org/wiki?curid=25006 |
Prisoner of war
A prisoner of war (POW) is a non-combatant—whether a military member, an irregular military fighter, or a civilian—who is held captive by a belligerent power during or immediately after an armed conflict. The earliest recorded usage of the phrase "prisoner of war" dates back to 1610.
Belligerents hold prisoners of war in custody for a range of legitimate and illegitimate reasons, such as isolating them from enemy combatants still in the field (releasing and repatriating them in an orderly manner after hostilities), demonstrating military victory, punishing them, prosecuting them for war crimes, exploiting them for their labour, recruiting or even conscripting them as their own combatants, collecting military and political intelligence from them, or indoctrinating them in new political or religious beliefs.
For most of human history, depending on the culture of the victors, enemy combatants on the losing side in a battle who had surrendered and been taken as prisoners of war could expect to be either slaughtered or enslaved. Early Roman gladiators could be prisoners of war, categorised according to their ethnic roots as Samnites, Thracians, and Gauls ("Galli"). Homer's "Iliad" describes Greek and Trojan soldiers offering rewards of wealth to opposing forces who have defeated them on the battlefield in exchange for mercy, but their offers are not always accepted; see Lycaon for example.
Typically, victors made little distinction between enemy combatants and enemy civilians, although they were more likely to spare women and children. Sometimes the purpose of a battle, if not of a war, was to capture women, a practice known as "raptio"; the Rape of the Sabines involved, according to tradition, a large mass-abduction by the founders of Rome. Typically women had no rights, and were held legally as chattels.
In the fourth century AD, Bishop Acacius of Amida, touched by the plight of Persian prisoners captured in a recent war with the Roman Empire, who were held in his town under appalling conditions and destined for a life of slavery, took the initiative in ransoming them by selling his church's precious gold and silver vessels and letting them return to their country. For this he was eventually canonized.
During Childeric's siege and blockade of Paris in 464, the nun Geneviève (later canonised as the city's patron saint) pleaded with the Frankish king for the welfare of prisoners of war and met with a favourable response. Later, Clovis I liberated captives after Genevieve urged him to do so.
Many French prisoners of war were killed during the Battle of Agincourt in 1415. This was done in retaliation for the French killing of the boys and other non-combatants handling the baggage and equipment of the army, and because the French were attacking again and Henry was afraid that they would break through and free the prisoners to fight again.
In the later Middle Ages, a number of religious wars aimed not only to defeat but to eliminate their enemies. In Christian Europe, the extermination of heretics was considered desirable. Examples include the 13th-century Albigensian Crusade and the Northern Crusades. When asked by a Crusader how to distinguish between the Catholics and Cathars once they had taken the city of Béziers, the Papal Legate Arnaud Amalric famously replied, ""Kill them all, God will know His own"".
Likewise, the inhabitants of conquered cities were frequently massacred during the Crusades against the Muslims in the 11th and 12th centuries. Noblemen could hope to be ransomed; their families would have to send their captors large sums of wealth commensurate with the social status of the captive.
In feudal Japan, there was no custom of ransoming prisoners of war, who were for the most part summarily executed.
The expanding Mongol Empire was famous for distinguishing between cities or towns that surrendered, where the population were spared but required to support the conquering Mongol army, and those that resisted, where their city was ransacked and destroyed, and all the population killed. In Termez, on the Oxus: ""all the people, both men and women, were driven out onto the plain, and divided in accordance with their usual custom, then they were all slain"".
The Aztecs were constantly at war with neighbouring tribes and groups, the goal of this constant warfare being to collect live prisoners for sacrifice. For the re-consecration of the Great Pyramid of Tenochtitlan in 1487, "between 10,000 and 80,400 persons" were sacrificed.
During the early Muslim conquests, Muslims routinely captured large numbers of prisoners. Aside from those who converted, most were ransomed or enslaved. Christians who were captured during the Crusades were usually either killed or sold into slavery if they could not pay a ransom. During his lifetime, Muhammad made it the responsibility of the Islamic government to provide food and clothing, on a reasonable basis, to captives, regardless of their religion; however, if the prisoners were in the custody of a person, then the responsibility was on the individual. The freeing of prisoners was highly recommended as a charitable act. On certain occasions where Muhammad felt the enemy had broken a treaty with the Muslims, he ordered the mass execution of male prisoners, such as the Banu Qurayza. Females and children of this tribe were divided up as "ghanima" (spoils of war) by Muhammad.
The 1648 Peace of Westphalia, which ended the Thirty Years' War, established the rule that prisoners of war should be released without ransom at the end of hostilities and that they should be allowed to return to their homelands.
There also evolved the right of "parole", French for "discourse", in which a captured officer surrendered his sword and gave his word as a gentleman in exchange for privileges. If he swore not to escape, he could gain better accommodations and the freedom of the prison. If he swore to cease hostilities against the nation who held him captive, he could be repatriated or exchanged but could not serve against his former captors in a military capacity.
Early historical narratives of captured colonial Europeans, including perspectives of literate women captured by the indigenous peoples of North America, exist in some number. The writings of Mary Rowlandson, captured in the brutal fighting of King Philip's War, are an example. Such narratives enjoyed some popularity, spawning a genre of the captivity narrative, and had lasting influence on the body of early American literature, most notably through the legacy of James Fenimore Cooper's "The Last of the Mohicans". Some Native Americans continued to capture Europeans and use them both as labourers and bargaining chips into the 19th century; see for example John R. Jewitt, an Englishman who wrote a memoir about his years as a captive of the Nootka people on the Pacific Northwest coast from 1802 to 1805.
The earliest known purposely built prisoner-of-war camp was established at Norman Cross, England, in 1797 to house the increasing number of prisoners from the French Revolutionary Wars and the Napoleonic Wars. The average prison population was about 5,500 men; the lowest number recorded was 3,300 in October 1804, and the highest recorded in any official document was 6,272 on 10 April 1810. Norman Cross was intended to be a model depot providing the most humane treatment of prisoners of war. The British government went to great lengths to provide food of a quality at least equal to that available to locals, and the senior officer from each quadrangle was permitted to inspect the food as it was delivered to the prison to ensure it was of sufficient quality. Despite the generous supply and quality of food, some prisoners died of starvation after gambling away their rations. Most of the men held in the prison were low-ranking soldiers and sailors, including midshipmen and junior officers, with a small number of privateers. About 100 senior officers and some civilians "of good social standing", mainly passengers on captured ships and the wives of some officers, were given "parole d'honneur" outside the prison, mainly in Peterborough although some further afield in Northampton, Plymouth, Melrose and Abergavenny. They were afforded the courtesy of their rank within English society.

During the Battle of Leipzig both sides used the city's cemetery as a lazaret and prisoner camp for around 6,000 POWs, who lived in the vaults and used the coffins for firewood. Food was scarce, and prisoners resorted to eating horses, cats, dogs or even human flesh. The bad conditions inside the graveyard contributed to a city-wide epidemic after the battle.
The extensive period of conflict during the American Revolutionary War and Napoleonic Wars (1793–1815), followed by the Anglo-American War of 1812, led to the emergence of a cartel system for the exchange of prisoners, even while the belligerents were at war. A cartel was usually arranged by the respective armed service for the exchange of like-ranked personnel. The aim was to achieve a reduction in the number of prisoners held, while at the same time alleviating shortages of skilled personnel in the home country.
At the start of the American Civil War a system of paroles operated: captives agreed not to fight until they were officially exchanged. Meanwhile, they were held in camps run by their own army, where they were paid but not allowed to perform any military duties. The system of exchanges collapsed in 1863 when the Confederacy refused to exchange black prisoners. In the late summer of 1864, a year after the Dix–Hill Cartel was suspended, Confederate officials approached Union General Benjamin Butler, Union Commissioner of Exchange, about resuming the cartel and including the black prisoners. Butler contacted Grant for guidance on the issue, and Grant responded to Butler on 18 August 1864 with his now famous statement rejecting the offer, arguing in essence that the Union could afford to leave its men in captivity while the Confederacy could not. In all, about 56,000 of the 409,000 POWs died in prisons during the American Civil War, accounting for nearly 10% of the conflict's fatalities. Of the 45,000 Union prisoners of war confined in Camp Sumter, located near Andersonville, Georgia, 13,000 (28%) died. At Camp Douglas in Chicago, Illinois, 10% of its Confederate prisoners died during one cold winter month, and Elmira Prison in New York state, with a death rate of 25% (2,963 deaths), nearly equalled that of Andersonville.
During the 19th century, there were increased efforts to improve the treatment and processing of prisoners. As a result of these emerging conventions, a number of international conferences were held, starting with the Brussels Conference of 1874, with nations agreeing that it was necessary to prevent inhumane treatment of prisoners and the use of weapons causing unnecessary harm. Although no agreements were immediately ratified by the participating nations, work was continued that resulted in new conventions being adopted and becoming recognized as international law that specified that prisoners of war be treated humanely and diplomatically.
Chapter II of the Annex to the 1907 Hague Convention "IV – The Laws and Customs of War on Land" covered the treatment of prisoners of war in detail. These provisions were further expanded in the 1929 Geneva Convention on the Prisoners of War and were largely revised in the Third Geneva Convention in 1949.
Article 4 of the Third Geneva Convention protects captured military personnel, some guerrilla fighters, and certain civilians. It applies from the moment a prisoner is captured until he or she is released or repatriated. One of the main provisions of the convention makes it illegal to torture prisoners and states that a prisoner can only be required to give their name, date of birth, rank and service number (if applicable).
The ICRC has a special role to play, with regards to international humanitarian law, in restoring and maintaining family contact in times of war, in particular concerning the right of prisoners of war and internees to send and receive letters and cards (Geneva Convention (GC) III, art.71 and GC IV, art.107).
However, nations vary in their dedication to following these laws, and historically the treatment of POWs has varied greatly. During World War II, Imperial Japan and Nazi Germany (towards Soviet POWs and Western Allied commandos) were notorious for atrocities against prisoners of war. The German military used the Soviet Union's refusal to sign the Geneva Convention as a reason for not providing the necessities of life to Soviet POWs; and the Soviets similarly killed Axis prisoners or used them as slave labour. The Germans also routinely executed Western Allied commandos captured behind German lines per the Commando Order. North Korean and North and South Vietnamese forces routinely killed or mistreated prisoners taken during those conflicts.
To be entitled to prisoner-of-war status, captured persons must be lawful combatants entitled to combatant's privilege—which gives them immunity from punishment for crimes constituting lawful acts of war such as killing enemy combatants. To qualify under the Third Geneva Convention, a combatant must be part of a chain of command, wear a "fixed distinctive marking, visible from a distance", bear arms openly, and have conducted military operations according to the laws and customs of war. (The Convention recognizes a few other groups as well, such as "[i]nhabitants of a non-occupied territory, who on the approach of the enemy spontaneously take up arms to resist the invading forces, without having had time to form themselves into regular armed units".)
Thus, uniforms and badges are important in determining prisoner-of-war status under the Third Geneva Convention. Under Additional Protocol I, the requirement of a distinctive marking is no longer included. "Francs-tireurs", militias, insurgents, terrorists, saboteurs, mercenaries, and spies generally do not qualify because they do not fulfill the criteria of Additional Protocol I; therefore, they fall under the category of unlawful combatants, or, more properly, they are not combatants at all. Captured soldiers who do not receive prisoner-of-war status are still protected like civilians under the Fourth Geneva Convention.
The criteria are applied primarily to "international" armed conflicts. The application of prisoner of war status in non-international armed conflicts like civil wars is guided by Additional Protocol II, but insurgents are often treated as traitors, terrorists or criminals by government forces and are sometimes executed on the spot or tortured. In the American Civil War, however, both sides treated captured troops as POWs, presumably out of reciprocity, although the Union regarded Confederate personnel as separatist rebels. Guerrillas and other irregular combatants generally cannot expect to receive the benefits of both civilian and military status simultaneously.
Under the Third Geneva Convention, prisoners of war (POWs) must be:
In addition, if wounded or sick on the battlefield, the prisoner will receive help from the International Committee of the Red Cross.
When a country is responsible for breaches of prisoner of war rights, those accountable will be punished accordingly. Examples of this are the Nuremberg and Tokyo trials: German and Japanese military commanders were prosecuted for preparing and initiating a war of aggression, murder, ill treatment, and deportation of individuals, and genocide during World War II. Most were executed or sentenced to life in prison for their crimes.
The United States Military Code of Conduct was promulgated in 1955 under President Dwight D. Eisenhower to serve as a moral code for United States service members who have been taken prisoner. It was created primarily in response to the breakdown of leadership and organization, specifically among U.S. forces who were POWs during the Korean War.
When a military member is taken prisoner, the Code of Conduct reminds them that the chain of command is still in effect (the highest ranking service member eligible for command, regardless of service branch, is in command), and requires them to support their leadership. The Code of Conduct also requires service members to resist giving information to the enemy (beyond identifying themselves, that is, "name, rank, serial number"), receiving special favors or parole, or otherwise providing their enemy captors aid and comfort.
Since the Vietnam War, the official U.S. military term for enemy POWs is EPW (Enemy Prisoner of War). This name change was introduced in order to distinguish between enemy and U.S. captives.
In 2000, the U.S. military replaced the designation "Prisoner of War" for captured American personnel with "Missing-Captured". A January 2008 directive explains the reasoning: since "Prisoner of War" is the internationally recognized legal status for such people, there is no need for any individual country to adopt it as well. This change remains relatively unknown even among experts in the field, and "Prisoner of War" remains widely used in the Pentagon, which has a "POW/Missing Personnel Office" and awards the Prisoner of War Medal.
During World War I, about eight million men surrendered and were held in POW camps until the war ended. All nations pledged to follow the Hague rules on fair treatment of prisoners of war, and in general the POWs had a much higher survival rate than their peers who were not captured. Individual surrenders were uncommon; usually a large unit surrendered all its men. At Tannenberg 92,000 Russians surrendered during the battle. When the besieged garrison of Kaunas surrendered in 1915, 20,000 Russians became prisoners. As a proportion of those captured, wounded or killed, prisoners made up over half of Russia's losses; in all, about 3.3 million Russian men became prisoners.
The German Empire held 2.5 million prisoners; Russia held 2.9 million, and Britain and France held about 720,000, mostly gained in the period just before the Armistice in 1918. The US held 48,000. The most dangerous moment for POWs was the act of surrender, when helpless soldiers were sometimes mistakenly shot down. Once prisoners reached a POW camp conditions were better (and often much better than in World War II), thanks in part to the efforts of the International Red Cross and inspections by neutral nations.
There was however much harsh treatment of POWs in Germany, as recorded by the American ambassador to Germany (prior to America's entry into the war), James W. Gerard, who published his findings in "My Four Years in Germany". Even worse conditions are reported in the book "Escape of a Princess Pat" by the Canadian George Pearson. It was particularly bad in Russia, where starvation was common for prisoners and civilians alike; a quarter of the over 2 million POWs held there died. Nearly 375,000 of the 500,000 Austro-Hungarian prisoners of war taken by Russians perished in Siberia from smallpox and typhus. In Germany, food was short, but only 5% died.
The Ottoman Empire often treated prisoners of war poorly. Some 11,800 British soldiers, most of them Indians, became prisoners after the five-month Siege of Kut, in Mesopotamia, in April 1916. Many were weak and starved when they surrendered and 4,250 died in captivity.
During the Sinai and Palestine campaign 217 Australian and unknown numbers of British, New Zealand and Indian soldiers were captured by Ottoman Empire forces. About 50% of the Australian prisoners were light horsemen including 48 missing believed captured on 1 May 1918 in the Jordan Valley. Australian Flying Corps pilots and observers were captured in the Sinai Peninsula, Palestine and the Levant. One third of all Australian prisoners were captured on Gallipoli including the crew of the submarine AE2 which made a passage through the Dardanelles in 1915. Forced marches and crowded railway journeys preceded years in camps where disease, poor diet and inadequate medical facilities prevailed. About 25% of other ranks died, many from malnutrition, while only one officer died.
The most curious case came in Russia, where the Czechoslovak Legion, formed from Czechoslovak prisoners taken from the Austro-Hungarian army, was released in 1917; its members armed themselves and briefly became a military and diplomatic force during the Russian Civil War.
At the end of the war in 1918 there were believed to be 140,000 British prisoners of war in Germany, including thousands of internees held in neutral Switzerland. The first British prisoners were released and reached Calais on 15 November. Plans were made for them to be sent via Dunkirk to Dover and a large reception camp was established at Dover capable of housing 40,000 men, which could later be used for demobilisation.
On 13 December 1918, the armistice was extended and the Allies reported that by 9 December 264,000 prisoners had been repatriated. A very large number of these had been released "en masse" and sent across Allied lines without any food or shelter. This created difficulties for the receiving Allies and many released prisoners died from exhaustion. The released POWs were met by cavalry troops and sent back through the lines in lorries to reception centres where they were refitted with boots and clothing and dispatched to the ports in trains.
Upon arrival at the receiving camp the POWs were registered and "boarded" before being dispatched to their own homes. All commissioned officers had to write a report on the circumstances of their capture and to ensure that they had done all they could to avoid capture. Each returning officer and man was given a message from King George V, written in his own hand and reproduced on a lithograph. It read as follows:
While the Allied prisoners were sent home at the end of the war, the same treatment was not granted to Central Powers prisoners of the Allies and Russia, many of whom had to perform forced labour, e.g. in France, until 1920. They were released only after many approaches by the ICRC to the Allied Supreme Council.
Historian Niall Ferguson, in addition to figures from Keith Lowe, tabulated the total death rate for POWs in World War II as follows:
The Empire of Japan, which had signed but never ratified the 1929 Geneva Convention on Prisoners of War, did not treat prisoners of war in accordance with international agreements, including provisions of the Hague Conventions, either during the Second Sino-Japanese War or during the Pacific War, because the Japanese viewed surrender as dishonorable. Moreover, according to a directive ratified on 5 August 1937 by Hirohito, the constraints of the Hague Conventions were explicitly removed on Chinese prisoners.
Prisoners of war from China, the United States, Australia, Britain, Canada, India, the Netherlands, New Zealand, and the Philippines held by Japanese imperial armed forces were subject to murder, beatings, summary punishment, brutal treatment, forced labour, medical experimentation, starvation rations, poor medical treatment and cannibalism. The most notorious use of forced labour was in the construction of the Burma–Thailand Death Railway. After 20 March 1943, the Imperial Navy was under orders to execute all prisoners taken at sea.
After the Armistice of Cassibile, Italian soldiers and civilians in East Asia were taken as prisoners by the Japanese armed forces and subject to the same conditions as other POWs.
According to the findings of the Tokyo Tribunal, the death rate of Western prisoners was 27.1%, seven times that of POWs under the Germans and Italians. The death rate of Chinese was much higher. Thus, while 37,583 prisoners from the United Kingdom, Commonwealth, and Dominions, 28,500 from the Netherlands, and 14,473 from the United States were released after the surrender of Japan, the number for the Chinese was only 56. The 27,465 United States Army and United States Army Air Forces POWs in the Pacific Theater had a 40.4% death rate. The War Ministry in Tokyo issued an order at the end of the war to kill all surviving POWs.
No direct access to the POWs was provided to the International Red Cross. Escapes among Caucasian prisoners were almost impossible because of the difficulty of men of Caucasian descent hiding in Asiatic societies.
Allied POW camps and ship-transports were sometimes accidental targets of Allied attacks. The number of deaths which occurred when Japanese "hell ships"—unmarked transport ships in which POWs were transported in harsh conditions—were attacked by US Navy submarines was particularly high. Gavan Daws has calculated that "of all POWs who died in the Pacific War, one in three was killed on the water by friendly fire". Daws states that 10,800 of the 50,000 POWs shipped by the Japanese were killed at sea, while Donald L. Miller states that "approximately 21,000 Allied POWs died at sea, about 19,000 of them killed by friendly fire."
Life in the POW camps was recorded at great risk to themselves by artists such as Jack Bridger Chalker, Philip Meninsky, Ashley George Old, and Ronald Searle. Human hair was often used for brushes, plant juices and blood for paint, and toilet paper as the "canvas". Some of their works were used as evidence in the trials of Japanese war criminals.
Female prisoners (detainees) at Changi prisoner of war camp in Singapore recorded their defiance in seemingly harmless prison quilt embroidery.
Research into the conditions of the camps has been conducted by The Liverpool School of Tropical Medicine.
After the French armies surrendered in summer 1940, Germany seized two million French prisoners of war and sent them to camps in Germany. About one third were released on various terms. Of the remainder, the officers and non-commissioned officers were kept in camps and did not work. The privates were sent out to work. About half of them worked for German agriculture, where food supplies were adequate and controls were lenient. The others worked in factories or mines, where conditions were much harsher.
Germany and Italy generally treated prisoners from the British Commonwealth, France, the US, and other western Allies in accordance with the Geneva Convention, which had been signed by these countries. Consequently, western Allied officers were not usually made to work, and lower-ranking personnel were either compensated for their labour or not required to work at all. The main complaints of western Allied prisoners of war in German POW camps—especially during the last two years of the war—concerned shortages of food.
Only a small proportion of western Allied POWs who were Jews—or whom the Nazis believed to be Jewish—were killed as part of the Holocaust or were subjected to other antisemitic policies. For example, Major Yitzhak Ben-Aharon, a Palestinian Jew who had enlisted in the British Army, and who was captured by the Germans in Greece in 1941, experienced four years of captivity under entirely normal conditions for POWs.
However, a small number of Allied personnel were sent to concentration camps, for a variety of reasons including being Jewish. As the US historian Joseph Robert White put it: "An important exception ... is the sub-camp for U.S. POWs at Berga an der Elster, officially called "Arbeitskommando 625" [also known as "Stalag IX-B"]. Berga was the deadliest work detachment for American captives in Germany. 73 men who participated, or 21 percent of the detachment, perished in two months. 80 of the 350 POWs were Jews." Another well-known example was a group of 168 Australian, British, Canadian, New Zealand and US aviators who were held for two months at Buchenwald concentration camp; two of the POWs died at Buchenwald. Two possible reasons have been suggested for this incident: German authorities wanted to make an example of "Terrorflieger" ("terrorist aviators") or these aircrews were classified as spies, because they had been disguised as civilians or enemy soldiers when they were apprehended.
Information on conditions in the stalags is contradictory depending on the source. Some American POWs claimed the Germans were victims of circumstance and did the best they could, while others accused their captors of brutalities and forced labour. In any case, the prison camps were miserable places where food rations were meager and conditions squalid. One American admitted "The only difference between the stalags and concentration camps was that we weren't gassed or shot in the former. I do not recall a single act of compassion or mercy on the part of the Germans." Typical meals consisted of a bread slice and watery potato soup which, however, was still more substantial than what Soviet POWs or concentration camp inmates received. Another prisoner stated that "The German plan was to keep us alive, yet weakened enough that we wouldn't attempt escape."
As Soviet ground forces approached some POW camps in early 1945, German guards forced western Allied POWs to walk long distances towards central Germany, often in extreme winter weather conditions. It is estimated that, out of 257,000 POWs, about 80,000 were subject to such marches and up to 3,500 of them died as a result.
In September 1943, after the Armistice, Italian officers and soldiers who in many places were waiting for clear orders from their superiors were arrested by Germans and Italian fascists and taken to internment camps in Germany or Eastern Europe, where they were held for the duration of World War II. The International Red Cross could do nothing for them, as they were not regarded as POWs but instead held the status of "military internees". Treatment of the prisoners was generally poor. The author Giovannino Guareschi was among those interned and wrote about this time in his life; the book was translated and published as "My Secret Diary". He wrote about the hunger of semi-starvation, the casual murder of individual prisoners by guards, and how, when they were released from a German camp, they found a deserted German town filled with foodstuffs that they, along with other released prisoners, ate. It is estimated that of the 700,000 Italians taken prisoner by the Germans, around 40,000 died in detention and more than 13,000 lost their lives during transportation from the Greek islands to the mainland.
Germany did not apply the same standard of treatment to non-western prisoners, especially many Polish and Soviet POWs who suffered harsh conditions and died in large numbers while in captivity.
Between 1941 and 1945 the Axis powers took about 5.7 million Soviet prisoners. About one million of them were released during the war, in that their status changed but they remained under German authority. A little over 500,000 either escaped or were liberated by the Red Army. Some 930,000 more were found alive in camps after the war. The remaining 3.3 million prisoners (57.5% of the total captured) died during their captivity. Between the launching of Operation Barbarossa in the summer of 1941 and the following spring, 2.8 million of the 3.2 million Soviet prisoners taken died while in German hands. According to Russian military historian General Grigoriy Krivosheyev, the Axis powers took 4.6 million Soviet prisoners, of whom 1.8 million were found alive in camps after the war and 318,770 were released by the Axis during the war and were then drafted into the Soviet armed forces again. By comparison, 8,348 Western Allied prisoners died in German camps during 1939–45 (3.5% of the 232,000 total).
Some Soviet POWs and forced labourers whom the Germans had transported to Nazi Germany were, on their return to the USSR, treated as traitors and sent to gulag prison-camps.
According to some sources, the Soviets captured 3.5 million Axis servicemen (excluding Japanese), of whom more than a million died. One specific example is that of the German POWs after the Battle of Stalingrad, where the Soviets captured 91,000 German troops in total (completely exhausted, starving and sick), of whom only 5,000 survived captivity.
German soldiers were kept as forced labour for many years after the war. The last German POWs, such as Erich Hartmann, the highest-scoring fighter ace in the history of aerial warfare, who had been declared guilty of war crimes without due process, were not released by the Soviets until 1955, three years after Stalin's death.
As a result of the Soviet invasion of Poland in 1939, hundreds of thousands of Polish soldiers became prisoners of war in the Soviet Union. Thousands of them were executed; over 20,000 Polish military personnel and civilians perished in the Katyn massacre. Of Anders' 80,000 evacuees from the Soviet Union gathered in the United Kingdom, only 310 volunteered to return to Poland in 1947.
Out of the 230,000 Polish prisoners of war taken by the Soviet army, only 82,000 survived.
After the Soviet–Japanese War, 560,000 to 760,000 Japanese prisoners of war were captured by the Soviet Union. They were captured in Manchuria, Korea, South Sakhalin and the Kuril Islands, then sent to work as forced labor in the Soviet Union and Mongolia. Of them, it is estimated that between 60,000 and 347,000 died in captivity.
There were stories during the Cold War to the effect that 23,000 Americans who had been held in German POW camps were seized by the Soviets and never repatriated. This myth had been perpetuated after the release of people like John H. Noble. Careful scholarly studies have demonstrated this is a myth based on a misinterpretation of a telegram that was talking about Soviet prisoners held in Italy.
During the war, the armies of Western Allied nations such as Australia, Canada, the UK and the US were ordered to treat Axis prisoners strictly in accordance with the Geneva Convention. Some breaches of the Convention took place, however. According to Stephen E. Ambrose, of the roughly 1,000 US combat veterans that he had interviewed, only one admitted to shooting a prisoner, saying that he "felt remorse, but would do it again". However, one-third told him they had seen US troops kill German prisoners.
In Britain, German prisoners, particularly higher-ranked officers, were kept in buildings where listening devices were installed. A considerable amount of military intelligence was gained from overhearing what they thought were private casual conversations. Much of the listening was done by German refugees, in many cases Jews. Knowledge of the program was not released by the British government for half a century.
Towards the end of the war in Europe, as large numbers of Axis soldiers surrendered, the US created the designation of Disarmed Enemy Forces (DEF) so as not to treat prisoners as POWs. Many of these soldiers were kept in open fields in makeshift camps in the Rhine valley ("Rheinwiesenlager"). Controversy has arisen about how Eisenhower managed these prisoners (see "Other Losses").
After the surrender of Germany in May 1945, the POW status of the German prisoners was in many cases maintained, and they were for several years used as forced labour in countries such as the UK and France. Many died when forced to clear minefields in Norway, France etc.; "by September 1945 it was estimated by the French authorities that two thousand prisoners were being maimed and killed each month in accidents".
In 1946, the UK had more than 400,000 German prisoners, many of whom had been transferred from POW camps in the US and Canada. Many of these were used as forced labour, as a form of "reparations", for over three years after the German surrender. A public debate ensued in the UK, where words such as "forced labour", "slaves", and "slave labour" were increasingly used in the media and in the House of Commons. In 1947 the Ministry of Agriculture argued against the repatriation of working German prisoners, since by then they made up 25 percent of the land workforce, and the Ministry wanted to use them in 1948 as well.
The "London Cage", an MI19 prisoner of war facility in the UK used for interrogating prisoners before they were sent to prison camps during and immediately after World War II, was subject to allegations of torture.
After the German surrender, the International Red Cross was prohibited from providing aid such as food or visiting prisoner camps in Germany. However, after making approaches to the Allies in the autumn of 1945 it was allowed to investigate the camps in the British and French occupation zones of Germany, as well as to provide relief to the prisoners held there. On 4 February 1946, the Red Cross was permitted to visit and assist prisoners also in the US occupation zone of Germany, although only with very small quantities of food. "During their visits, the delegates observed that German prisoners of war were often detained in appalling conditions. They drew the attention of the authorities to this fact, and gradually succeeded in getting some improvements made".
The Allies also shipped POWs between them; for example, 6,000 German officers were transferred from Western Allied camps to the Sachsenhausen concentration camp, which was by then under Soviet administration. The US also shipped 740,000 German POWs as forced labourers to France, from where newspaper reports told of very bad treatment. Judge Robert H. Jackson, Chief US prosecutor in the Nuremberg trials, in October 1945 told US President Harry S Truman that the Allies themselves:
have done or are doing some of the very things we are prosecuting the Germans for. The French are so violating the Geneva Convention in the treatment of prisoners of war that our command is taking back prisoners sent to them. We are prosecuting plunder and our Allies are practicing it.
Hungarians became POWs of the Western Allies. Some of these were, like Germans, used as forced labour in France after the cessation of hostilities.
After the war these POWs were handed over to the Soviets and transported to the USSR for forced labour, a practice still known in Hungary today as malenkij robot—"little work". András Toma, a Hungarian soldier taken prisoner by the Red Army in 1944, was discovered in a Russian psychiatric hospital in 2000. He was probably the last prisoner of war from World War II to be repatriated.
Although thousands of Japanese were taken prisoner, most fought until they were killed or committed suicide. Of the 22,000 Japanese soldiers present at the beginning of the Battle of Iwo Jima, over 20,000 were killed and only 216 were taken prisoner. Of the 30,000 Japanese troops that defended Saipan, fewer than 1,000 remained alive at battle's end. Japanese prisoners sent to camps fared well; however, some Japanese were killed when trying to surrender or were massacred just after they had surrendered (see Allied war crimes during World War II in the Pacific). In some instances, Japanese prisoners were tortured by a variety of methods. A method of torture used by the Chinese National Revolutionary Army (NRA) included suspending the prisoner by the neck in a wooden cage until he died. In very rare cases, some were beheaded by sword, and a severed head was once used as a football by NRA soldiers.
After the war, many Japanese were kept on as Japanese Surrendered Personnel until mid-1947 and used as forced labour doing menial tasks, while 35,000 were kept on in arms within their wartime military organisation and under their own officers and used in combat alongside British troops seeking to suppress the independence movements in the Dutch East Indies and French Indochina.
In 1943, Italy overthrew Mussolini and became a co-belligerent with the Allies. This did not mean any change in status for Italian POWs however, since due to the labour shortages in the UK, Australia and the US, they were retained as POWs there.
On 11 February 1945, at the conclusion of the Yalta Conference, the United States and the United Kingdom signed a Repatriation Agreement with the USSR. The interpretation of this Agreement resulted in the forcible repatriation of all Soviets (Operation Keelhaul) regardless of their wishes. The forced repatriation operations took place in 1945–1947.
The United States handed over 740,000 German prisoners to France, a signatory of the Geneva Convention. The Soviet Union had not signed the Geneva Convention. According to Edward Peterson, the U.S. chose to hand over several hundred thousand German prisoners to the Soviet Union in May 1945 as a "gesture of friendship". U.S. forces also refused to accept the surrender of German troops attempting to surrender to them in Saxony and Bohemia, and handed them over to the Soviet Union instead. It is also known that 6,000 of the German officers who were sent from camps in the West to the Soviets were subsequently imprisoned in the Sachsenhausen concentration camp, which at the time was one of the NKVD special camps.
During the Korean War, the North Koreans developed a reputation for severely mistreating prisoners of war (see Crimes against POWs). Their POWs were housed in three types of camps, according to their potential usefulness to the North Korean army. Peace camps and reform camps were for POWs who were either sympathetic to the cause or who had valued skills that could be useful to the army; these enemy soldiers were indoctrinated and sometimes conscripted into the North Korean army. The regular prisoners of war were usually very poorly treated, while POWs in peace camps were reportedly treated with more consideration.
The Inter-Camp P.O.W. Olympics were held from 15 to 27 November 1952 in Pyuktong, North Korea. The Chinese hoped to gain worldwide publicity, and while some prisoners refused to participate, some 500 P.O.W.s of eleven nationalities took part. They represented all the prison camps in North Korea and competed in football, baseball, softball, basketball, volleyball, track and field, soccer, gymnastics, and boxing. For the P.O.W.s this was also an opportunity to meet friends from other camps. The prisoners had their own photographers, announcers, and even reporters, who after each day's competition published a newspaper, the "Olympic Roundup".
Of about 16,500 French soldiers who fought at the Battle of Dien Bien Phu in French Indochina, more than 3,000 were killed in battle, while almost all of the 11,721 men taken prisoner died at the hands of the Viet Minh, either on death marches to distant POW camps or in those camps during the last three months of the war.
The Vietcong and the North Vietnamese Army captured many United States service members as prisoners of war during the Vietnam War, who suffered from mistreatment and torture during the war. Some American prisoners were held in the prison called the Hanoi Hilton.
Communist Vietnamese held in custody by South Vietnamese and American forces were also tortured and badly treated. After the war, millions of South Vietnamese servicemen and government workers were sent to "re-education" camps where many perished.
As in previous conflicts, there has been speculation, without evidence, that a handful of American pilots captured by the North Koreans and the North Vietnamese were transferred to the Soviet Union and never repatriated.
Regardless of the regulations governing the treatment of prisoners, violations of their rights continue to be reported. Many POW massacres have been reported in recent times, including the 13 October massacre in Lebanon by Syrian forces and the June 1990 massacre in Sri Lanka.
Indian intervention in the Bangladesh Liberation War in 1971 led to the third Indo-Pakistani War, which ended in an Indian victory and left India holding over 90,000 Pakistani POWs.
In 1982, during the Falklands War, prisoners were well treated in general by both parties of the conflict, with military commanders dispatching 'enemy' prisoners back to their homelands in record time.
In 1991, during the Persian Gulf War, American, British, Italian, and Kuwaiti POWs (mostly crew members of downed aircraft and special forces) were tortured by the Iraqi secret police. An American military doctor, Major Rhonda Cornum, a 37-year-old flight surgeon captured when her Blackhawk UH-60 was shot down, was also subjected to sexual abuse.
During the 1990s Yugoslav Wars, Serb paramilitary forces supported by JNA forces killed POWs at Vukovar and Škarbrnja while Bosnian Serb forces killed POWs at Srebrenica.
In 2001, there were reports concerning two POWs that India had taken during the Sino-Indian War, Yang Chen and Shih Liang. The two were imprisoned as spies for three years before being interned in a mental asylum in Ranchi, where they spent the next 38 years under a special prisoner status. The last prisoners of Iran–Iraq War (1980–1988) were exchanged in 2003.
This article is a list of nations with the highest number of POWs since the start of World War II, listed in descending order. These are also the highest numbers in any war since the Convention Relative to the Treatment of Prisoners of War entered into force on 19 June 1931. The USSR had not signed the Geneva Convention.
Privacy
Privacy is the ability of an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively.
When something is private to a "person", it usually means that something is inherently special or sensitive to them. The domain of privacy partially overlaps with security, which can include the concepts of appropriate use, as well as protection of information. Privacy may also take the form of bodily integrity. The right not to be subjected to unsanctioned invasions of privacy by the government, corporations or individuals is part of many countries' privacy laws, and in some cases, constitutions.
In the business world, a person may volunteer personal details, including for advertising, in order to receive some sort of benefit. Public figures may be subject to rules on the public interest. Personal information which is voluntarily shared but subsequently stolen or misused can lead to identity theft.
The concept of universal individual privacy is a modern concept primarily associated with Western culture, British and North American in particular, and remained virtually unknown in some cultures until recent times. Most cultures, however, recognize the ability of individuals to withhold certain parts of their personal information from wider society, such as closing the door to one's home.
In 1890 the United States jurists Samuel D. Warren and Louis Brandeis wrote "The Right to Privacy", an article in which they argued for the "right to be let alone", using that phrase as a definition of privacy. There is extensive commentary over the meaning of being "let alone", and among other ways, it has been interpreted to mean the right of a person to choose seclusion from the attention of others if they wish to do so, and the right to be immune from scrutiny or being observed in private settings, such as one's own home. Although this early vague legal concept did not describe privacy in a way that made it easy to design broad legal protections of privacy, it strengthened the notion of privacy rights for individuals and began a legacy of discussion on those rights.
Limited access refers to a person's ability to participate in society without having other individuals and organizations collect information about them.
Various theorists have imagined privacy as a system for limiting access to one's personal information. Edwin Lawrence Godkin wrote in the late 19th century that "nothing is better worthy of legal protection than private life, or, in other words, the right of every man to keep his affairs to himself, and to decide for himself to what extent they shall be the subject of public observation and discussion." Adopting an approach similar to the one presented by Ruth Gavison nine years earlier, Sissela Bok said that privacy is "the condition of being protected from unwanted access by others—either physical access, personal information, or attention."
Control over one's personal information is the concept that "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Generally, a person who has consensually formed an interpersonal relationship with another person is not considered "protected" by privacy rights with respect to the person they are in the relationship with. Charles Fried said that "Privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves." Nevertheless, in the era of big data, control over information is under pressure.
Alan Westin defined four states—or experiences—of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others. Intimacy is a "close, relaxed, and frank relationship between two or more individuals" that results from the seclusion of a pair or small group of individuals. Anonymity is the "desire of individuals for times of 'public privacy.'" Lastly, reserve is the "creation of a psychological barrier against unwanted intrusion"; this creation of a psychological barrier requires others to respect an individual's need or desire to restrict communication of information concerning himself or herself.
In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative. Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual. (In this sense, "accessing" an individual includes accessing personal information about him or her.) Behavioral barriers communicate to others—verbally, through language, or non-verbally, through personal space, body language, or clothing—that an individual does not want them to access or experience him or her. Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.
Privacy is sometimes defined as an option to have secrecy. Richard Posner said that privacy is the right of people to "conceal information about themselves that others might use to their disadvantage".
In various legal contexts, when privacy is described as secrecy, the conclusion follows that if privacy is secrecy, then rights to privacy do not apply to any information which has already been publicly disclosed. When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.
Privacy may be understood as a necessary precondition for the development and preservation of personhood. Jeffrey Reiman defined privacy in terms of a recognition of one's ownership of his or her physical and mental reality and a moral right to his or her self-determination. Through the "social ritual" of privacy, or the social practice of respecting an individual's privacy barriers, the social group communicates to the developing child that he or she has exclusive moral rights to his or her body—in other words, he or she has moral ownership of his or her body. This entails control over both active (physical) and cognitive appropriation, the former being control over one's movements and actions and the latter being control over who can experience one's physical existence and when.
Alternatively, Stanley Benn defined privacy in terms of a recognition of oneself as a subject with agency—as an individual with the capacity to choose. Privacy is required to exercise choice. Overt observation makes the individual aware of himself or herself as an object with a "determinate character" and "limited probabilities." Covert observation, on the other hand, changes the conditions in which the individual is exercising choice without his or her knowledge and consent.
In addition, privacy may be viewed as a state that enables autonomy, a concept closely connected to that of personhood. According to Joseph Kufer, an autonomous self-concept entails a conception of oneself as a "purposeful, self-determining, responsible agent" and an awareness of one's capacity to control the boundary between self and other—that is, to control who can access and experience him or her and to what extent. Furthermore, others must acknowledge and respect the self's boundaries—in other words, they must respect the individual's privacy.
The studies of psychologists such as Jean Piaget and Victor Tausk show that, as children learn that they can control who can access and experience them and to what extent, they develop an autonomous self-concept. In addition, studies of adults in particular institutions, such as Erving Goffman's study of "total institutions" such as prisons and mental institutions, suggest that systemic and routinized deprivations or violations of privacy deteriorate one's sense of autonomy over time.
Privacy may be understood as a prerequisite for the development of a sense of self-identity. Privacy barriers, in particular, are instrumental in this process. According to Irwin Altman, such barriers "define and limit the boundaries of the self" and thus "serve to help define [the self]." This control primarily entails the ability to regulate contact with others. Control over the "permeability" of the self's boundaries enables one to control what constitutes the self and thus to define what is the self.
In addition, privacy may be seen as a state that fosters personal growth, a process integral to the development of self-identity. Hyman Gross suggested that, without privacy—solitude, anonymity, and temporary releases from social roles—individuals would be unable to freely express themselves and to engage in self-discovery and self-criticism. Such self-discovery and self-criticism contributes to one's understanding of oneself and shapes one's sense of identity.
In a way analogous to how the personhood theory imagines privacy as some essential part of being an individual, the intimacy theory imagines privacy to be an essential part of the way that humans form strengthened or intimate relationships with other humans. Because part of human relationships includes individuals volunteering to self-disclose most if not all personal information, this is one area in which privacy does not apply.
James Rachels advanced this notion by writing that privacy matters because "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people." Protecting intimacy is at the core of the concept of sexual privacy, which law professor Danielle Citron argues should be protected as a unique form of privacy.
Physical privacy could be defined as preventing "intrusions into one's physical space or solitude." An example of the legal basis for the right to physical privacy is the U.S. Fourth Amendment, which guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures".
Physical privacy may be a matter of cultural sensitivity, personal dignity, and/or shyness. There may also be concerns about safety, if for example one is wary of becoming the victim of crime or stalking.
Government agencies, corporations, groups/societies and other organizations may desire to keep their activities or secrets from being revealed to other organizations or individuals, adopting various security practices and controls in order to keep private information confidential. Organizations may seek legal protection for their secrets. For example, a government administration may be able to invoke executive privilege or declare certain information to be classified, or a corporation might attempt to protect valuable proprietary information as trade secrets.
Privacy has historical roots in philosophical discussions, the most well-known being Aristotle's distinction between two spheres of life: the public sphere of the "polis", associated with political life, and the private sphere of the "oikos", associated with domestic life. More systematic treatises of privacy in the United States did not appear until the 1890s, with the development of privacy law in America.
As technology has advanced, the way in which privacy is protected and violated has changed with it. In the case of some technologies, such as the printing press or the Internet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the article by Samuel Warren and Louis Brandeis, "The Right to Privacy", that was written largely in response to the increase in newspapers and photographs made possible by printing technologies.
New technologies can also create new ways to gather private information. For example, in the United States it was thought that heat sensors intended to be used to find marijuana-growing operations would be acceptable. However, in 2001 in "Kyllo v. United States" (533 U.S. 27) it was decided that the use of thermal imaging devices that can reveal previously unknown information without a warrant does indeed constitute a violation of privacy.
The Internet has brought new concerns about privacy in an age where computers can permanently store records of everything: "where every online photo, status update, Twitter post and blog entry by and about us can be stored forever", writes law professor and author Jeffrey Rosen.
This currently has an effect on employment. Microsoft reports that 75 percent of U.S. recruiters and human-resource professionals now do online research about candidates, often using information provided by search engines, social-networking sites, photo/video-sharing sites, personal web sites and blogs, and Twitter. They also report that 70 percent of U.S. recruiters have rejected candidates based on internet information. This has created a need among many to control their various online privacy settings as well as their online reputations, both of which have led to legal suits against various sites and employers.
The ability to do online inquiries about individuals has expanded dramatically over the last decade. Facebook, for example, was as of August 2015 the largest social-networking site, with nearly 1.49 billion members who upload over 4.75 billion pieces of content daily; over 83.09 million of its accounts were fake. Twitter had more than 316 million registered users, of which over 20 million were fake. The Library of Congress recently announced that it will be acquiring, and permanently storing, the entire archive of public Twitter posts since 2006, reports Rosen.
Importantly, directly observed behaviour, such as browsing logs, search queries, or the contents of a Facebook profile, can be automatically processed to infer secondary information about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.
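As a toy illustration of how such inference can work, the sketch below uses entirely synthetic data and a simplified naive-Bayes-style scorer (not any study's actual model): each observed "Like" gets a log-odds score for a binary trait, and a user's summed score yields a prediction.

```python
import math
from collections import Counter

# Synthetic training data: (set of liked pages, binary trait label).
# Page names and labels are invented for illustration only.
users = [
    ({"opera", "chess", "poetry"}, 1),
    ({"opera", "poetry"}, 1),
    ({"chess", "opera"}, 1),
    ({"trucks", "wrestling"}, 0),
    ({"wrestling", "trucks", "chess"}, 0),
    ({"trucks"}, 0),
]

# Count how often each item co-occurs with each label.
pos, neg = Counter(), Counter()
for likes, label in users:
    (pos if label else neg).update(likes)

def log_odds(item, alpha=1.0):
    """Smoothed log-odds that an item is associated with the trait."""
    return math.log((pos[item] + alpha) / (neg[item] + alpha))

def predict(likes):
    """Infer the trait by summing the log-odds of all observed items."""
    return 1 if sum(log_odds(i) for i in likes) > 0 else 0

print(predict({"opera", "chess"}))  # 1
print(predict({"trucks"}))          # 0
```

Real studies use far richer models and millions of observations, but the principle is the same: individually innocuous signals aggregate into a confident inference.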
According to some experts, many commonly used communication devices may be mapping every move of their users. Senator Al Franken has noted the seriousness of iPhones and iPads having the ability to record and store users' locations in unencrypted files, although Apple denied doing so.
Andrew Grove, co-founder and former CEO of Intel Corporation, offered his thoughts on internet privacy in an interview published in May 2000:
As with other concepts about privacy, there are various ways to discuss what kinds of processes or actions remove, challenge, lessen, or attack privacy. In 1960 legal scholar William Prosser created the following list of activities which can be remedied with privacy protection:
Building from this and other historical precedents, Daniel J. Solove presented another classification of actions which are harmful to privacy, including collection of information which is already somewhat public, processing of information, sharing information, and invading personal space to get private information.
In the context of harming privacy, information collection means gathering whatever information can be obtained by doing something to obtain it. Surveillance is an example of this, when someone decides to begin watching and recording someone or something, and interrogation is another example of this, when someone uses another person as a source of information.
It can happen that privacy is not harmed when information is available, but that the harm can come when that information is collected as a set then processed in a way that the collective reporting of pieces of information encroaches on privacy. Actions in this category which can lessen privacy include the following:
Information dissemination is an attack on privacy when information which was shared in confidence is shared or threatened to be shared in a way that harms the subject of the information.
There are various examples of this. Breach of confidentiality is when one entity promises to keep a person's information private, then breaks that promise. Disclosure is making information about a person more accessible in a way that harms the subject of the information, regardless of how the information was collected or the intent of making it available. Exposure is a special type of disclosure in which the information disclosed is emotional to the subject or taboo to share, such as revealing their private life experiences, their nudity, or perhaps private body functions. Increased accessibility means advertising the availability of information without actually distributing it, as in the case of doxxing. Blackmail is making a threat to share information, perhaps as part of an effort to coerce someone. Appropriation is an attack on the personhood of someone, and can include using the value of someone's reputation or likeness to advance interests which are not those of the person being appropriated. Distortion is the creation of misleading information or lies about a person.
Invasion of privacy, a subset of expectation of privacy, is a different concept from the collecting, aggregating, and disseminating information because those three are a misuse of available data, whereas invasion is an attack on the right of individuals to keep personal secrets. An invasion is an attack in which information, whether intended to be public or not, is captured in a way that insults the personal dignity and right to private space of the person whose data is taken.
An intrusion is any unwanted entry into a person's private personal space and solitude for any reason, regardless of whether data is taken during that breach of space. "Decisional interference" is when an entity somehow injects itself into the personal decision making process of another person, perhaps to influence that person's private decisions but in any case doing so in a way that disrupts the private personal thoughts that a person has.
Privacy draws on the theory of natural rights, and generally responds to new information and communication technologies. In North America, Samuel D. Warren and Louis D. Brandeis wrote that privacy is the "right to be let alone" (Warren & Brandeis, 1890), a formulation that focuses on protecting individuals. Their article was a response to recent technological developments such as photography, and to sensationalist journalism, also known as yellow journalism.
In recent years there have been only a few attempts to clearly and precisely define a "right to privacy." Some experts assert that the right to privacy "should not be defined as a separate legal right" at all; by their reasoning, existing laws relating to privacy in general should be sufficient. Others have therefore proposed a working definition for a "right to privacy":
"The right to privacy is our right to keep a domain around us, which includes all those things that are part of us, such as our body, home, property, thoughts, feelings, secrets and identity. The right to privacy gives us the ability to choose which parts in this domain can be accessed by others, and to control the extent, manner and timing of the use of those parts we choose to disclose."
David Flaherty believes networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally. Flaherty forwards an idea of privacy as information control, "[i]ndividuals want to be left alone and to exercise some control over how information about them is used".
Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labour market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud. For Lessig, privacy breaches online can be regulated through code and law. Lessig claims "the protection of privacy would be stronger if people conceived of the right as a property right", and that "individuals should be able to control information about themselves".
There have been attempts to establish privacy as one of the fundamental human rights, whose social value is an essential component in the functioning of democratic societies. Amitai Etzioni suggests a communitarian approach to privacy. This requires a shared moral culture for establishing social order. Etzioni believes that "[p]rivacy is merely one good among many others", and that technological effects depend on community accountability and oversight (ibid). He claims that privacy laws only increase government surveillance by weakening informal social controls. Furthermore, the government is no longer the only or even principal threat to people's privacy. Etzioni notes that corporate data miners, or "Privacy Merchants," stand to profit by selling massive dossiers of personal information, including purchasing decisions and Internet traffic, to the highest bidder. And while some might not find collection of private information objectionable when it is only used commercially by the private sector, the information these corporations amass and process is also available to the government, so that it is no longer possible to protect privacy by only curbing the State.
Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limit government power. Collective elements describe privacy as a collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".
Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and if this is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in the United Nations Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.
Most countries give citizens rights to privacy in their constitutions. Representative examples include the Constitution of Brazil, which says "the privacy, private life, honor and image of people are inviolable"; the Constitution of South Africa, which says that "everyone has a right to privacy"; and the Constitution of the Republic of Korea, which says "the privacy of no citizen shall be infringed." In most countries whose constitutions do not explicitly describe privacy rights, court decisions have interpreted their constitutions as intending to give privacy rights.
Many countries have broad privacy laws outside their constitutions, including Australia's Privacy Act 1988, Argentina's Law for the Protection of Personal Data of 2000, Canada's 2000 Personal Information Protection and Electronic Documents Act, and Japan's 2003 Personal Information Protection Law.
Beyond national privacy laws, there are international privacy agreements. The United Nations Universal Declaration of Human Rights says "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation." The Organisation for Economic Co-operation and Development published its Privacy Guidelines in 1980. The European Union's 1995 Data Protection Directive guides privacy protection in Europe. The 2004 Privacy Framework by the Asia-Pacific Economic Cooperation is a privacy protection agreement for the members of that organization.
In the 1960s people began to consider how changes in technology were bringing changes in the concept of privacy. Vance Packard’s "The Naked Society" was a popular book on privacy from that era and led discourse on privacy at that time.
Approaches to privacy can, broadly, be divided into two categories: free market or consumer protection.
One example of the free market approach is to be found in the voluntary OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. The principles reflected in the guidelines are analysed in an article putting them into perspective with concepts of the GDPR put into law later in the European Union.
In a consumer protection approach, in contrast, it is claimed that individuals may not have the time or knowledge to make informed choices, or may not have reasonable alternatives available. In support of this view, Jensen and Potts showed that most privacy policies are above the reading level of the average person.
The "Privacy Act 1988" is administered by the Office of the Australian Information Commissioner. Privacy law has been evolving in Australia for a number of years. The initial introduction of privacy law in 1988 extended to the public sector, specifically to Federal government departments, under the Information Privacy Principles. State government agencies can also be subject to state-based privacy legislation. This built upon the already existing privacy requirements that applied to telecommunications providers (under Part 13 of the "Telecommunications Act 1997"), and confidentiality requirements that already applied to banking, legal and patient/doctor relationships.
In 2008 the Australian Law Reform Commission (ALRC) conducted a review of Australian privacy law, resulting in the report "For Your Information". Many of its recommendations were taken up and implemented by the Australian Government via the Privacy Amendment (Enhancing Privacy Protection) Bill 2012.
Although there are comprehensive regulations for data protection, some studies show that despite the laws there is a lack of enforcement, in that no institution feels responsible for controlling the parties involved and enforcing their laws. The European Union is also championing the 'Right to be Forgotten' concept (which allows individuals to ask that links leading to information about themselves be removed from internet search engine results) for adoption by other countries.
With the introduction of the Aadhaar project, inhabitants of India feared that their privacy could be invaded, and the project was also met with mistrust regarding the safety of its social protection infrastructure. To address these fears, India's Supreme Court ruled that, from then on, privacy was to be regarded as a fundamental right.
In Italy the right to privacy is enshrined in Article 15 of the Constitution, which states:
In the United Kingdom, it is not possible to bring an action for invasion of privacy. An action may be brought under another tort (usually breach of confidence) and privacy must then be considered under EC law. In the UK, it is sometimes a defence that disclosure of private information was in the public interest. There is, however, the Information Commissioner's Office (ICO), an independent public body set up to promote access to official information and protect personal information. They do this by promoting good practice, ruling on eligible complaints, giving information to individuals and organisations, and taking action when the law is broken. The relevant UK laws include: Data Protection Act 1998; Freedom of Information Act 2000; Environmental Information Regulations 2004; Privacy and Electronic Communications Regulations 2003. The ICO has also provided a "Personal Information Toolkit" online which explains in more detail the various ways of protecting privacy online.
Although the US Constitution does not explicitly include the right to privacy, individual as well as locational privacy are implicitly granted by the Constitution under the 4th Amendment. The Supreme Court of the United States has found that other guarantees have "penumbras" that implicitly grant a right to privacy against government intrusion, for example in "Griswold v. Connecticut" (1965). In the United States, the right of freedom of speech granted in the First Amendment has limited the effects of lawsuits for breach of privacy. Privacy is regulated in the US by the Privacy Act of 1974, and various state laws. The Privacy Act of 1974 only applies to Federal agencies in the executive branch of the Federal government. Certain privacy rights have been established in the United States via legislation such as the Children's Online Privacy Protection Act (COPPA), the Gramm–Leach–Bliley Act (GLB), and the Health Insurance Portability and Accountability Act (HIPAA).
Unlike the EU and most EU member states, the US does not recognize a right to privacy for anyone other than US citizens.
There are many means to protect one's privacy on the internet. For example, e-mails can be encrypted (via S/MIME or PGP) and anonymizing proxies or anonymizing networks like I2P and Tor can be used to prevent the internet service providers from knowing which sites one visits and with whom one communicates. Covert collection of personally identifiable information has been identified as a primary concern by the U.S. Federal Trade Commission. Although some privacy advocates recommend the deletion of original and third-party HTTP cookies, Anthony Miyazaki, marketing professor at Florida International University and privacy scholar, warns that the "elimination of third-party cookie use by Web sites can be circumvented by cooperative strategies with third parties in which information is transferred after the Web site's use of original domain cookies." As of December 2010, the Federal Trade Commission is reviewing policy regarding this issue as it relates to behavioral advertising.
Another aspect of privacy on the Internet relates to online social networking. Several online social network sites (OSNs) are among the top 10 most visited websites globally. A review and evaluation of scholarly work on the current state of individuals' privacy in online social networking shows the following results: "first, adults seem to be more concerned about potential privacy threats than younger users; second, policy makers should be alarmed by a large part of users who underestimate risks of their information privacy on OSNs; third, in the case of using OSNs and its services, traditional one-dimensional privacy approaches fall short". This is exacerbated by research indicating that personal traits such as sexual orientation, race, religious and political views, personality, or intelligence can be inferred from a wide variety of digital footprints, such as samples of text, browsing logs, or Facebook Likes.
Increasingly, mobile devices facilitate location tracking. This creates user privacy problems: a user's location and preferences constitute personal information, and their improper use violates that user's privacy. A recent MIT study by de Montjoye et al. showed that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; even coarse or blurred datasets provide little anonymity.
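The "unicity" idea behind this result can be sketched with a small simulation. The traces, sizes, and parameters below are synthetic and chosen for illustration only, not the study's actual dataset or method: sample k points from one trace, then count how many traces in the whole dataset contain all k of them.

```python
import random

random.seed(0)

# Synthetic mobility dataset: each trace is a set of (place, hour) points.
N_USERS, N_PLACES, N_HOURS, TRACE_LEN = 2000, 50, 24, 40
traces = [frozenset((random.randrange(N_PLACES), random.randrange(N_HOURS))
                    for _ in range(TRACE_LEN)) for _ in range(N_USERS)]

def unicity(k, trials=500):
    """Fraction of sampled users whose k known points match only their own trace."""
    unique = 0
    for _ in range(trials):
        target = random.choice(traces)
        probe = set(random.sample(sorted(target), k))   # k known observations
        matches = sum(1 for t in traces if probe <= t)  # traces containing all of them
        unique += (matches == 1)                        # only the target matches
    return unique / trials

for k in (1, 2, 4):
    print(f"k={k}: {unicity(k):.0%} of traces uniquely identified")
```

Even in this crude model, uniqueness climbs steeply with k: one point matches many users, while four points almost always single out one trace, mirroring the qualitative finding of the study.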
Several methods to protect user privacy in location-based services have been proposed, including the use of anonymizing servers and blurring of information, among others. Methods to quantify privacy have also been proposed, to calculate the equilibrium between the benefit of providing accurate location information and the drawbacks of risking personal privacy.
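One commonly proposed form of blurring is spatial cloaking: snapping coordinates to a coarse grid before they leave the device, so a service sees only an approximate cell rather than an exact position. The cell size and function below are illustrative assumptions, not any real service's interface.

```python
def blur_location(lat, lon, cell_deg=0.01):
    """Snap coordinates down to a grid cell (~1.1 km of latitude per 0.01 deg)."""
    snap = lambda x: (x // cell_deg) * cell_deg  # floor to the cell boundary
    return round(snap(lat), 6), round(snap(lon), 6)

# An illustrative GPS fix: the service receives only the cell corner.
exact = (48.858370, 2.294481)
print(blur_location(*exact))  # e.g. (48.85, 2.29)
```

The privacy/utility trade-off discussed above is then just the choice of cell_deg: larger cells hide more but degrade location-based features more.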
In recent years, with the increasing importance of mobile devices and the introduction of the National Do Not Call Registry, telemarketers have turned their attention to mobile phones.
Additionally, Apple and Google are continually improving their privacy protections. With iOS 13, Apple introduced Sign in with Apple to limit the user data that third-party sign-in services can collect, and Google introduced the option to grant apps location access only while they are in use.
Privacy self-synchronization is the mode by which the stakeholders of an enterprise privacy program spontaneously contribute collaboratively to the program's maximum success. The stakeholders may be customers, employees, managers, executives, suppliers, partners or investors. When self-synchronization is reached, the model states that the personal interests of individuals in their privacy are in balance with the business interests of the enterprises that collect and use those individuals' personal information.
The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not. While this term was coined as early as 1998, it wasn't used in its current popular sense until the year 2000.
Susan B. Barnes similarly used the term "privacy paradox" to refer to the ambiguous boundary between private and public space on social media. When compared to adults, young people tend to disclose more information on social media; this does not mean, however, that they are not concerned about their privacy. Barnes described a case in her article: in a television interview about Facebook, a student voiced her concerns about disclosing personal information online, yet when the reporter asked to see her Facebook page, she had posted her home address, phone numbers, and pictures of her young son on the page.
The privacy paradox has been studied and described in different research settings. Although several studies have shown this inconsistency between privacy attitudes and behavior among online users, the reason for the paradox remains unclear. One main explanation is that users lack awareness of the risks and of the degree of protection available: users may underestimate the harm of disclosing information online. Other researchers argue the paradox stems from a lack of technology literacy and from the design of sites; for example, users may not know how to change their default settings even though they care about their privacy. Psychologists in particular have pointed out that the paradox occurs because users must trade off their privacy concerns against impression management.
Some researchers believe that decision making about privacy takes place on an irrational level, especially in mobile computing. Mobile applications are designed so that decision making is fast. Restricting one's profile on social networks is the easiest way to protect against privacy threats and security intrusions, but such protection measures are not easily accessible while downloading and installing apps. Even where mechanisms to protect privacy exist, most users lack the knowledge or experience to use them. Consumers of mobile applications also have very little knowledge of how their personal data are used, and they do not rely on the information provided by application vendors about the collection and use of personal data when deciding which application to download. Users claim that permissions matter when downloading an app, but research shows that they do not treat privacy- and security-related aspects as important when downloading and installing one: they weigh cost, functionality, design, ratings, reviews, and download counts more heavily than the requested permissions.
A study by Zafeiropoulou specifically examined location data, a form of personal information increasingly used by mobile applications. Their survey also found evidence supporting the existence of the privacy paradox for location data. Survey data on privacy risk perception in relation to privacy-enhancing technologies indicate that a high perception of privacy risk is an insufficient motivator for people to adopt privacy-protecting strategies, even when they know such strategies exist. The study also raises the question of what data is worth, as there is no equivalent of a stock market for personal information.
The willingness to incur a privacy risk is driven by a complex array of factors including risk attitudes, self-reported value for private information, and general attitudes to privacy (derived from surveys). Experiments aiming to determine the monetary value of several types of personal information indicate low evaluations of personal information. On the other hand, it appears that consumers are willing to pay a premium for privacy, albeit a small one. Users do not always act in accordance with their professed privacy concerns, and they are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small. One study suggests that people think their browser history is worth the equivalent of a cheap meal. Attitudes to privacy risk do not appear to depend on whether it is already under threat or not: people neither become discouraged from protecting their information, nor come to value it more, when it is under threat.
Concrete solutions to this paradoxical behavior still do not exist. Many efforts focus on the decision-making process, for example by restricting data-access permissions during application installation, but none of them close the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that for users to make more conscious decisions on privacy matters, the design needs to be more user oriented: risks related to data will be perceived more accurately when users hold psychological ownership of their data, considering it "mine" rather than "not mine".
There are many opinions on the privacy paradox; it has even been suggested that it should no longer be considered a paradox. It may be more of a privacy dilemma: people would like to do more to protect their privacy, but they also want to use services that would not exist without sharing their data. On this view, people understand that they pay with personal data, but believe they get a fair deal.
Selfies are popular today. A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and "a whopping 51 million with the hashtag #me". However, due to modern corporate and governmental surveillance, this may pose a risk to privacy. In a study with a sample of 3,763 social media users, researchers found that female users generally have greater privacy concerns about selfies than male users, and that greater concerns inversely predict selfie behavior and activity.
Proton–proton chain reaction
The proton–proton chain reaction is one of two known sets of nuclear fusion reactions by which stars convert hydrogen to helium. It dominates in stars with masses less than or equal to that of the Sun, whereas the CNO cycle, the other known reaction, is suggested by theoretical models to dominate in stars with masses greater than about 1.3 times that of the Sun.
In general, proton–proton fusion can occur only if the kinetic energy (i.e. temperature) of the protons is high enough to overcome their mutual electrostatic repulsion.
In the Sun, deuterium-producing events are rare. Diprotons are the much more common result of proton–proton reactions within the star, and diprotons almost immediately decay back into two protons. Since the conversion of hydrogen to helium is slow, the complete conversion of the hydrogen in the core of the Sun is calculated to take more than ten billion years.
Although called the "proton–proton chain reaction", it is not a chain reaction in the normal sense. In most nuclear reactions, a chain reaction designates a reaction that produces a product, such as neutrons given off during fission, that quickly induces another such reaction.
The proton–proton chain is, like a decay chain, a series of reactions: the product of one reaction is the starting material of the next. There are two such chains leading from hydrogen to helium in the Sun. One chain has five reactions, the other has six.
The theory that proton–proton reactions are the basic principle by which the Sun and other stars burn was advocated by Arthur Eddington in the 1920s. At the time, the temperature of the Sun was considered to be too low to overcome the Coulomb barrier. After the development of quantum mechanics, it was discovered that tunneling of the wavefunctions of the protons through the repulsive barrier allows for fusion at a lower temperature than the classical prediction.
In 1939, Hans Bethe attempted to calculate the rates of various reactions in stars. Starting with two protons combining to give deuterium and a positron, he found what we now call Branch II of the proton–proton chain reaction. But he did not consider the reaction of two helium-3 nuclei (Branch I), which we now know to be important. This was part of the body of work in stellar nucleosynthesis for which Bethe won the Nobel Prize in Physics in 1967.
The first step in all the branches is the fusion of two protons into deuterium. As the protons fuse, one of them undergoes beta plus decay, converting into a neutron by emitting a positron and an electron neutrino.
The positron will probably annihilate with an electron from the environment into two gamma rays. Including this annihilation and the energy of the neutrino, the whole reaction has a "Q" value (released energy) of 1.442 MeV. The relative amounts of energy going to the neutrino and to the other products is variable.
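The 1.442 MeV figure can be checked from the particle rest masses. The values below are standard reference masses in MeV/c² (assumed here, not stated in this article):

```python
# Rest masses in MeV/c^2 (standard reference values, assumed).
m_p = 938.272   # proton
m_d = 1875.613  # deuteron
m_e = 0.511     # electron / positron

# p + p -> d + e+ + nu: energy shared among the products.
q_fusion = 2 * m_p - m_d - m_e
# The positron then annihilates with an ambient electron: e+ + e- -> 2 gamma.
q_annihilation = 2 * m_e

q_total = q_fusion + q_annihilation
print(f"Q(fusion)       = {q_fusion:.3f} MeV")        # ~0.420 MeV
print(f"Q(annihilation) = {q_annihilation:.3f} MeV")  # 1.022 MeV
print(f"Q(total)        = {q_total:.3f} MeV")         # ~1.442 MeV
```

The fusion step alone releases only about 0.420 MeV; the quoted 1.442 MeV includes the 1.022 MeV from the positron's annihilation.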
This reaction is extremely slow due to it being initiated by the weak nuclear force. The average proton in the core of the Sun waits 9 billion years before it successfully fuses with another proton. It has not been possible to measure the cross-section of this reaction experimentally because of these long time scales.
After it is formed, the deuterium produced in the first stage can fuse with another proton to produce the light isotope of helium, helium-3: ²H + ¹H → ³He + γ.
This process, mediated by the strong nuclear force rather than the weak force, is extremely fast by comparison to the first step. It is estimated that, under the conditions in the Sun's core, each newly created deuterium nucleus exists for only about four seconds before it is converted to helium-3.
In the Sun, each helium-3 nucleus produced in these reactions exists for only about 400 years before it is converted into helium-4. Once the helium-3 has been produced, there are four possible paths to generate helium-4. In p–p I, helium-4 is produced by fusing two helium-3 nuclei; the p–p II and p–p III branches fuse helium-3 with pre-existing helium-4 to form beryllium-7, which undergoes further reactions to produce two helium-4 nuclei.
In the Sun, synthesis via branch p–p I occurs with a frequency of 83.30 percent, p–p II with 16.68 percent, and p–p III with 0.02 percent.
There is also the extremely rare p–p IV branch. Other even rarer reactions may occur. The rate of these reactions is very low due to very small cross-sections, or because the number of reacting particles is so low that any reactions that might happen are statistically insignificant. This is partly why no mass-5 or mass-8 elements are seen. While the reactions that would produce them, such as a proton + helium-4 producing lithium-5, or two helium-4 nuclei coming together to form beryllium-8, may "actually" happen, these elements are not detected because there are no stable (or even particle-bound) isotopes of atomic masses 5 or 8; the resulting products immediately decay into their initial reactants.
The overall reaction is:
releasing 26.73 MeV of energy, some of which is lost to the neutrinos.
The complete p–p I chain reaction releases a net energy of . Two percent of this energy is lost to the neutrinos that are produced.
The p–p I branch is dominant at temperatures of 10 to .
Below , the p–p chain does not produce much helium-4.
The p–p II branch is dominant at temperatures of 14 to .
Note that the energies in the second reaction above are the energies of the neutrinos that are produced by the reaction. 90 percent of the neutrinos produced in the conversion of beryllium-7 to lithium-7 carry an energy of , while the remaining 10 percent carry . The difference is whether the lithium-7 produced is in the ground state or an excited (metastable) state, respectively. The total energy released going from beryllium-7 to stable lithium-7 is about 0.862 MeV, almost all of which is lost to the neutrino if the decay goes directly to the stable lithium.
The last three stages of this chain contribute a total of 18.21 MeV.
The p–p III chain is dominant if the temperature exceeds .
The p–p III chain is not a major source of energy in the Sun (only 0.11 percent), but it was very important in the solar neutrino problem because it generates very high energy neutrinos (up to ).
This reaction is predicted theoretically, but it has never been observed due to its rarity (about in the Sun). In this reaction, helium-3 captures a proton directly to give helium-4, with an even higher possible neutrino energy (up to 18.8 MeV).
The mass-energy relationship gives 19.795 MeV for the energy released by this reaction, some of which is lost to the neutrino.
Comparing the mass of the final helium-4 atom with the masses of the four protons reveals that 0.7 percent of the mass of the original protons has been lost. This mass has been converted into energy, in the form of gamma rays and neutrinos released during each of the individual reactions. The total energy yield of one whole chain is .
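As a rough check on these figures, the mass defect and energy yield can be recomputed from standard atomic masses. The constants below are textbook values assumed for illustration, not taken from this article:

```python
# Verify the ~0.7% mass loss and total energy yield of the p-p chain,
# comparing four protons (as hydrogen-1 atoms) with one helium-4 atom.
H1_MASS_U = 1.007825   # hydrogen-1 atomic mass, unified atomic mass units (u)
HE4_MASS_U = 4.002602  # helium-4 atomic mass, u
U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV

mass_in = 4 * H1_MASS_U              # mass entering the chain
mass_defect = mass_in - HE4_MASS_U   # mass converted to energy
fraction_lost = mass_defect / mass_in
energy_mev = mass_defect * U_TO_MEV

print(f"fraction of mass lost: {fraction_lost:.4f}")  # 0.0071 (~0.7%)
print(f"energy released: {energy_mev:.2f} MeV")       # 26.73 MeV
```

Using atomic rather than bare-nucleus masses lets the electron masses cancel, and the result matches the 26.73 MeV quoted above.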
Energy released as gamma rays will interact with electrons and protons and heat the interior of the Sun. The kinetic energy of fusion products (e.g. of the two protons and the helium-4 from the p–p I reaction) also increases the temperature of the plasma in the Sun. This heating supports the Sun and prevents it from collapsing under its own weight.
Neutrinos do not interact significantly with matter and therefore do not help support the Sun against gravitational collapse. Their energy is lost: the neutrinos in the p–p I, p–p II, and p–p III chains carry away 2.0%, 4.0%, and 28.3% of the energy in those reactions, respectively.
Deuterium can also be produced by the rare pep (proton–electron–proton) reaction (electron capture):
In the Sun, the frequency ratio of the pep reaction versus the p–p reaction is 1:400. However, the neutrinos released by the pep reaction are far more energetic: while neutrinos produced in the first step of the p–p reaction range in energy up to , the pep reaction produces sharp-energy-line neutrinos of . Detection of solar neutrinos from this reaction was reported by the Borexino collaboration in 2012.
Both the pep and p–p reactions can be seen as two different Feynman representations of the same basic interaction, where the electron passes to the right side of the reaction as a positron. This is represented in the figure of proton–proton and electron-capture chain reactions in a star, available at the NDM'06 web site. | https://en.wikipedia.org/wiki?curid=25010 |
Plankton
Plankton are the diverse collection of organisms that live in large bodies of water and are unable to swim against a current. The individual organisms constituting plankton are called plankters. They provide a crucial source of food to many small and large aquatic organisms, such as bivalves, fish and whales.
Planktonic organisms include bacteria, archaea, algae, protozoa and drifting or floating animals that inhabit—for example—the pelagic zone of oceans, seas, or bodies of fresh water. Essentially, plankton are defined by their ecological niche rather than any phylogenetic or taxonomic classification.
Though many planktonic species are microscopic in size, "plankton" includes organisms over a wide range of sizes, including large organisms such as jellyfish.
Technically the term does not include organisms on the surface of the water, which are called "pleuston"—or those that swim actively in the water, which are called "nekton".
The name "plankton" is derived from the Greek adjective πλαγκτός (), meaning "errant", and by extension, "wanderer" or "drifter", and was coined by Victor Hensen in 1887. While some forms are capable of independent movement and can swim hundreds of meters vertically in a single day (a behavior called diel vertical migration), their horizontal position is primarily determined by the surrounding water movement, and plankton typically flow with ocean currents. This is in contrast to nekton organisms, such as fish, squid and marine mammals, which can swim against the ambient flow and control their position in the environment.
Within the plankton, holoplankton spend their entire life cycle as plankton (e.g. most algae, copepods, salps, and some jellyfish). By contrast, meroplankton are only planktic for part of their lives (usually the larval stage), and then graduate to either a nektic (swimming) or benthic (sea floor) existence. Examples of meroplankton include the larvae of sea urchins, starfish, crustaceans, marine worms, and most fish.
The amount and distribution of plankton depend on available nutrients, the state of the water, and the abundance of other plankton.
The study of plankton is termed planktology and a planktonic individual is referred to as a plankter. The adjective "planktonic" is widely used in both the scientific and popular literature, and is a generally accepted term. However, from the standpoint of prescriptive grammar, the less-commonly used "planktic" is more strictly the correct adjective. When deriving English words from their Greek or Latin roots, the gender-specific ending (in this case, "-on" which indicates the word is neuter) is normally dropped, using only the root of the word in the derivation.
Plankton are primarily divided into broad functional (or trophic level) groups:
Recognition of the importance of mixotrophy as an ecological strategy is increasing, as well as the wider role this may play in marine biogeochemistry. Studies have shown that mixotrophs are much more important for marine ecology than previously assumed, and comprise more than half of all microscopic plankton. Their presence acts as a buffer that prevents the collapse of ecosystems during times with little to no light.
Plankton are also often described in terms of size. Usually the following divisions are used:
However, some of these terms may be used with very different boundaries, especially on the larger end. The existence and importance of nano- and even smaller plankton was only discovered during the 1980s, but they are thought to make up the largest proportion of all plankton in number and diversity.
The microplankton and smaller groups are microorganisms and operate at low Reynolds numbers, where the viscosity of water is much more important than its mass or inertia.
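To see why viscosity dominates at this scale, the Reynolds number Re = ρvL/μ can be estimated. The organism sizes and speeds below are illustrative assumptions, not figures from this article:

```python
# Reynolds number Re = rho * v * L / mu: ratio of inertial to viscous forces.
RHO_WATER = 1000.0  # water density, kg/m^3 (approximate)
MU_WATER = 1.0e-3   # dynamic viscosity of water, Pa*s (approximate, ~20 C)

def reynolds(speed_m_s: float, length_m: float) -> float:
    return RHO_WATER * speed_m_s * length_m / MU_WATER

# A ~50-micrometre microplankter moving at ~0.1 mm/s:
print(reynolds(1e-4, 50e-6))  # ~0.005: far below 1, so viscosity dominates
# A ~1 m fish swimming at 1 m/s, for comparison:
print(reynolds(1.0, 1.0))     # ~1e6: inertia dominates
```

At Re far below 1, an organism that stops stroking halts almost instantly; its own mass contributes essentially nothing to its motion.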
Plankton inhabit oceans, seas, lakes, and ponds. Local abundance varies horizontally, vertically and seasonally. The primary cause of this variability is the availability of light. All plankton ecosystems are driven by the input of solar energy (but see chemosynthesis), confining primary production to surface waters, and to geographical regions and seasons having abundant light.
A secondary variable is nutrient availability. Although large areas of the tropical and sub-tropical oceans have abundant light, they experience relatively low primary production because they offer limited nutrients such as nitrate, phosphate and silicate. This results from large-scale ocean circulation and water column stratification. In such regions, primary production usually occurs at greater depth, although at a reduced level (because of reduced light).
Despite significant macronutrient concentrations, some ocean regions are unproductive (so-called HNLC regions). The micronutrient iron is deficient in these regions, and adding it can lead to the formation of phytoplankton blooms. Iron primarily reaches the ocean through the deposition of dust on the sea surface. Paradoxically, oceanic areas adjacent to unproductive, arid land thus typically have abundant phytoplankton (e.g., the eastern Atlantic Ocean, where trade winds bring dust from the Sahara Desert in north Africa).
While plankton are most abundant in surface waters, they live throughout the water column. At depths where no primary production occurs, zooplankton and bacterioplankton instead consume organic material sinking from more productive surface waters above. This flux of sinking material, so-called marine snow, can be especially high following the termination of spring blooms.
The local distribution of plankton can be affected by wind-driven Langmuir circulation and the biological effects of this physical process.
Aside from representing the bottom few levels of a food chain that supports commercially important fisheries, plankton ecosystems play a role in the biogeochemical cycles of many important chemical elements, including the ocean's carbon cycle.
Primarily by grazing on phytoplankton, zooplankton provide carbon to the planktic foodweb, either respiring it to provide metabolic energy, or upon death as biomass or detritus. Organic material tends to be denser than seawater, so it sinks into open ocean ecosystems away from the coastlines, transporting carbon along with it. This process, called the "biological pump", is one reason that oceans constitute the largest carbon sink on Earth. However, this process has been shown to be influenced by rising temperatures. In 2019, a study indicated that at current rates of seawater acidification, Antarctic phytoplankton could become smaller and less effective at storing carbon before the end of the century.
It might be possible to increase the ocean's uptake of carbon dioxide () generated through human activities by increasing plankton production through "seeding", primarily with the micronutrient iron. However, this technique may not be practical at a large scale. Ocean oxygen depletion and resultant methane production (caused by the excess production remineralising at depth) is one potential drawback.
Phytoplankton absorb energy from the Sun and nutrients from the water to produce their own nourishment or energy. In the process of photosynthesis, phytoplankton release molecular oxygen (O2) into the water as a waste byproduct. It is estimated that about 50% of the world's oxygen is produced via phytoplankton photosynthesis. The rest is produced via photosynthesis on land by plants. Furthermore, phytoplankton photosynthesis has controlled the atmospheric balance of carbon dioxide and oxygen since the early Precambrian Eon.
The growth of phytoplankton populations is dependent on light levels and nutrient availability. The chief factor limiting growth varies from region to region in the world's oceans. On a broad scale, growth of phytoplankton in the oligotrophic tropical and subtropical gyres is generally limited by nutrient supply, while light often limits phytoplankton growth in subarctic gyres. Environmental variability at multiple scales influences the nutrient and light available for phytoplankton, and as these organisms form the base of the marine food web, this variability in phytoplankton growth influences higher trophic levels. For example, at interannual scales phytoplankton levels temporarily plummet during El Niño periods, influencing populations of zooplankton, fishes, sea birds, and marine mammals.
The effects of anthropogenic warming on the global population of phytoplankton is an area of active research. Changes in the vertical stratification of the water column, the rate of temperature-dependent biological reactions, and the atmospheric supply of nutrients are expected to have important impacts on future phytoplankton productivity. Additionally, changes in the mortality of phytoplankton due to rates of zooplankton grazing may be significant.
Freshly hatched fish larvae are also plankton for a few days, as long as it takes before they can swim against currents.
Zooplankton are the initial prey item for almost all fish larvae as they switch from their yolk sacs to external feeding. Fish rely on the density and distribution of zooplankton to match that of new larvae, which can otherwise starve. Natural factors (e.g., current variations) and man-made factors (e.g. river dams, ocean acidification, rising temperatures) can strongly affect zooplankton, which can in turn strongly affect larval survival, and therefore breeding success.
The importance of both phytoplankton and zooplankton is also well-recognized in extensive and semi-intensive pond fish farming. Plankton population based pond management strategies for fish rearing have been practised by traditional fish farmers for decades, illustrating the importance of plankton even in man-made environments. | https://en.wikipedia.org/wiki?curid=25011 |
Pi Day
Pi Day is an annual celebration of the mathematical constant π (pi). Pi Day is observed on March 14 (3/14 in the "month/day" format) since 3, 1, and 4 are the first three significant digits of π. In 2009, the United States House of Representatives supported the designation of Pi Day. In November 2019, UNESCO's 40th General Conference designated Pi Day as the International Day of Mathematics.
Pi Approximation Day is observed on July 22 (22/7 in the "day/month" format), since the fraction 22/7 is a common approximation of π, which is accurate to two decimal places and dates from Archimedes.
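The accuracy of the 22/7 approximation can be checked directly; a minimal sketch:

```python
import math

approx = 22 / 7  # Archimedes' upper bound for pi, 3.142857...

# 22/7 and pi agree when rounded to two decimal places...
assert round(approx, 2) == round(math.pi, 2) == 3.14
# ...but differ at the third decimal place (3.143 vs 3.142).
assert round(approx, 3) != round(math.pi, 3)
```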
Two Pi Day, also known as Tau Day for the mathematical constant Tau, is observed on June 28 (6/28 in the "month/day" format).
In 1988, the earliest known official or large-scale celebration of Pi Day was organized by Larry Shaw at the San Francisco Exploratorium, where Shaw worked as a physicist, with staff and public marching around one of its circular spaces, then consuming fruit pies. The Exploratorium continues to hold Pi Day celebrations.
On March 12, 2009, the U.S. House of Representatives passed a non-binding resolution (), recognizing March 14, 2009 as National Pi Day. For Pi Day 2010, Google presented a Google Doodle celebrating the holiday, with the word Google laid over images of circles and pi symbols; and for the 30th anniversary in 2018, it was a Dominique Ansel pie with the circumference divided by its diameter.
The entire month of March 2014 (3/14) was observed by some as "Pi Month". In the year 2015, March 14 was celebrated as "Super Pi Day". It had special significance, as the date is written as 3/14/15 in month/day/year format. At 9:26:53, the date and time together represented the first 10 digits of π.
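A quick sketch confirming that the date and time 3/14/15 9:26:53 concatenate to the first ten digits of π:

```python
import math

# First ten significant digits of pi (truncated, not rounded).
digits = str(math.pi).replace(".", "")[:10]
print(digits)  # 3141592653

# month 3, day 14, year 15, then 9:26:53
assert digits == "3" + "14" + "15" + "9" + "26" + "53"
```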
Pi Day has been observed in many ways, including eating pie, throwing pies and discussing the significance of the number π, due to a pun based on the words "pi" and "pie" being homophones in English, and the coincidental circular shape of many pies. Also, some schools hold competitions as to which student can recall pi to the highest number of decimal places. In 2020, some events were canceled or modified due to COVID-19 concerns.
Massachusetts Institute of Technology has often mailed its application decision letters to prospective students for delivery on Pi Day. Starting in 2012, MIT has announced it will post those decisions (privately) online on Pi Day at exactly 6:28 pm, which they have called "Tau Time", to honor the rival numbers pi and tau equally. In 2015, the regular decisions were put online at 9:26 am, following that year's "pi minute", and in 2020, regular decisions were set to be released at 1:59 pm, so that the date and time (3/14, 1:59) form the first six digits of pi.
June 28 is "Two Pi Day", also known as "Tau Day". 2π, also known by the Greek letter tau (τ), appears frequently in mathematical formulae. Some have argued that τ is the more fundamental constant, and that Tau Day should be celebrated instead. Celebrations of this date jokingly suggest eating "twice the pie".
Princeton, New Jersey, hosts numerous events in a combined celebration of Pi Day and Albert Einstein's birthday, which is also March 14. Einstein lived in Princeton for more than twenty years while working at the Institute for Advanced Study. In addition to pie eating and recitation contests, there is an annual Einstein look-alike contest. | https://en.wikipedia.org/wiki?curid=25013 |
Pauli effect
The Pauli effect or Pauli's Device Corollary is the supposed tendency of technical equipment to encounter critical failure in the presence of certain people. The term was coined after mysterious anecdotal stories involving Austrian theoretical physicist Wolfgang Pauli, describing numerous instances in which demonstrations involving equipment suffered technical problems only when he was present.
The Pauli effect is not related to the Pauli exclusion principle, which is a bona fide physical phenomenon named after Pauli. However, the Pauli effect was humorously tagged as a second Pauli exclusion principle, according to which "a functioning device and Wolfgang Pauli may not occupy the same room". Pauli himself was convinced that the effect named after him was real. He corresponded with Hans Bender and Carl Jung and saw the effect as an example of the concept of synchronicity.
Since the 20th century, the work of physics research has been divided between theorists and experimentalists (see scientific method). Only a few physicists, such as Enrico Fermi, have been successful in both roles. Lacking an aptitude or interest in experimental work, many theorists have earned a reputation for accidentally breaking experimental equipment. Pauli was exceptional in this regard: it was postulated that he was such a good theorist that any experiments would be compromised by virtue of his presence in the vicinity. For fear of the Pauli effect, experimental physicist Otto Stern banned Pauli from his laboratory located in Hamburg despite their friendship. Pauli was convinced that the effect named after him was real. He corresponded with Carl Jung and Marie-Louise von Franz about the concept of synchronicity and did so as well with Hans Bender, lecturer at Freiburg university Institut für Grenzgebiete der Psychologie und Psychohygiene, the only parapsychology chair in Germany.
Jung and Pauli saw some parallels between physics and depth psychology. Pauli was among the honored guests at the foundation festivities of the C.G. Jung Institute in Zürich in 1948. A famous instance of the Pauli effect occurred at the ceremony: as he entered, a china flower vase fell to the floor for no obvious reason. The incident prompted Pauli to write his article "Background-Physics", in which he tries to find complementary relationships between physics and depth psychology.
An incident occurred in the physics laboratory at the University of Göttingen. An expensive measuring device, for no apparent reason, suddenly stopped working, although Pauli was in fact "absent". James Franck, the director of the institute, reported the incident to his colleague Pauli in Zürich with the humorous remark that at least this time Pauli was innocent. However, it turned out that Pauli had been on a railway journey to Copenhagen and had switched trains in the Göttingen rail station at about the time of the failure. The incident is reported in George Gamow's book "Thirty Years That Shook Physics", where it is also claimed the more talented the theoretical physicist, the stronger the effect.
R. Peierls describes an occasion when, at a reception, the effect was to be parodied by deliberately crashing a chandelier upon Pauli's entrance. The chandelier was suspended on a rope to be released, but it stuck instead, thus becoming a real example of the Pauli effect.
In February 1950, while he was at Princeton University, the cyclotron there caught fire, and he asked himself whether this mischief was an instance of the Pauli effect named after him.
Philip K. Dick makes reference to "Pauli's synchronicity" in his 1963 science fiction novel "The Game-Players of Titan" in reference to pre-cognitive psionic abilities being interfered with by other psionic abilities such as psychokinesis: "an acausal connective event."
In the series "Yu-Gi-Oh! Sevens", Tatsuhisa Kamijō is a self-proclaimed 'demon-embodied' human who can randomly cause electronic devices such as phones and drones to self-destruct with his hands. The reason is not explained, but the main character Yuga attributes it to the Pauli effect. | https://en.wikipedia.org/wiki?curid=25018 |
Pat Mills
Pat Eamon Mills (born 1949) is a British comics writer and editor who, along with John Wagner, revitalised British boys' comics in the 1970s, and has remained a leading light in British comics ever since. He has been called "the godfather of British comics".
His comics are notable for their violence and anti-authoritarianism. He is best known for creating "2000 AD" and playing a major part in the development of "Judge Dredd".
Mills started his career as a sub-editor for D. C. Thomson & Co. Ltd, where he met Wagner. In 1971 both left to go freelance, and were soon writing scripts for IPC's girls' and humour comics. After D.C. Thomson launched "Warlord", a successful war-themed weekly, Mills was asked in 1975 to develop a rival title for IPC. Based in the girls' comics department to avoid the attention of the staff of the boys' department, Mills, along with Wagner and Gerry Finley-Day, worked in secret to create "Battle Picture Weekly". "Battle"'s stories were more violent and its characters more working class than IPC's traditional fare, and it was an immediate hit. Having made the comic ready for launch, Mills resigned as editor. He would later write the celebrated First World War series "Charley's War", drawn by Joe Colquhoun, for the title.
After launching "Battle", Mills began developing a new boys' title, "Action", launched in 1976. "Action"'s mix of violence and anti-authoritarianism proved controversial and the title lasted less than a year before being withdrawn in the face of media protests. It was briefly revived in neutered form before being merged into "Battle".
His next creation was the science fiction-themed weekly "2000 AD", launched in 1977. As with "Battle" and "Action" he developed most of the early series before handing them over to other writers. He took over the development of "Judge Dredd" when creator John Wagner temporarily walked out, and wrote many of the early stories, establishing the character and his world, before Wagner returned.
In 1978 IPC launched "Starlord", a short-lived companion title for "2000 AD". Mills contributed "Ro-Busters", a series about a robot disaster squad, which moved to "2000 AD" when "Starlord" was cancelled. "Ro-Busters" was the beginning of a mini-universe of interrelated stories Mills was to create for "2000 AD", including "ABC Warriors" and "Nemesis the Warlock". Artist Kevin O'Neill was involved in the creation of all three. "Nemesis" in particular, featuring a morally ambiguous alien hero fighting a despotic human empire, allowed Mills to work out his feelings towards religion and imperialism. Another strand of his "2000 AD" work was "Sláine", a barbarian fantasy based on Celtic mythology and neo-paganism, which he co-created with his then wife Angela Kincaid (with whom he also created the children's series of books, "The Butterfly Children").
Mills also had a hand in IPC's line of comics aimed at girls, such as "Chiller" (a horror comic), "Misty" (supernatural stories) and "Jinty" (science fiction).
He has had little success in American comics, with the exception of "Metalzoic" and "Marshal Law", published by DC and Epic comics respectively in the late 1980s, both drawn by O'Neill.
In 1986 he edited the short-lived comic "Diceman", which featured characters from "2000 AD". He wrote nearly every story.
In 1988 he was involved in the launch of "Crisis", a politically aware "2000 AD" spin-off aimed at older readers. For it he wrote "Third World War", drawn initially by Carlos Ezquerra, a polemical critique of global capitalism and the ways it exploits the developing world. The title lasted until 1991 and launched the careers of talents such as Garth Ennis, John Smith and Sean Phillips.
In 1991 Mills launched "Toxic!", an independent colour newsstand weekly comic with a violent, anarchic tone, perhaps as a reaction against the politically worthy "Crisis", and a creator-owned ideal. Many of the stories were created by Mills and co-writer Tony Skinner, including "Accident Man", an assassin who makes his hits look like accidents. "Toxic!" lasted less than a year, but gave a start to talents such as Duke Mighten and Martin Emond.
In 1995, he broke into the French market, one of his life's goals, with "Sha", created with French artist Olivier Ledroit.
He continues to write "Sláine", "Bill Savage", "Black Siddha" and "ABC Warriors" for "2000 AD", and also the Franco-Belgian comic "Requiem Vampire Knight", with art by Olivier Ledroit, and its spin-off "Claudia Chevalier Vampire", with art by Franck Tacito.
Two new series, "Greysuit", a super-powered government agent drawn by John Higgins, and "Defoe", a 17th-century zombie hunter drawn by Leigh Gallagher, began in "2000 AD" prog 1540.
Mills has formed Repeat Offenders with artist Clint Langley and Jeremy Davis "to develop graphic novel concepts with big-screen potential" and the first project is a graphic novel called "American Reaper", serialised in the "Judge Dredd Megazine" (2011–2015). It has been optioned by Trudie Styler's Xingu Films and Mills has written the screenplay.
He has also written two "Doctor Who" audio plays, "Dead London" (2008) and "The Scapegoat" (2009) for Big Finish Productions, featuring the Eighth Doctor and Lucie Miller. The first audio play was released as the first part of the second season of the Eighth Doctor Adventures and the second as part of the third season. In 2010 Mills adapted a story that had been started by him and Wagner for Doctor Who in the 1980s and was produced by Big Finish as "The Song of Megaptera".
In 2017 he wrote, with Kevin O'Neill, and published two novels, "Serial Killer" and "Goodnight, John-Boy", part of a planned series of four books. Also in that year, he published his memoirs, "Be Pure! Be Vigilant! Behave! 2000 AD and Judge Dredd: The Secret History" in print and as an e-book. Mills also narrated the audiobook version himself. (The title is the catchphrase of the villain in his series "Nemesis the Warlock".)
In 2018 the film "Accident Man" was released, based on his comic strip for "Toxic!"
In 2019 Mills announced that he would publish a new all-ages science fiction anthology comic called "Spacewarp", to be released in 2020, and that the artists would retain the copyright on their work.
As well as his influential role in creating and contributing to numerous British comics, Mills has produced work in both America and Europe. | https://en.wikipedia.org/wiki?curid=25020 |
Pearl Index
The Pearl Index, also called the Pearl rate, is the most common technique used in clinical trials for reporting the effectiveness of birth control methods.
Pearl Index = (number of pregnancies ÷ woman-months of exposure) × 1200
Three kinds of information are needed to calculate a Pearl Index for a particular study:
There are two calculation methods for determining the Pearl Index:
In the first method, the relative number of pregnancies in the study is divided by the number of months of exposure, and then multiplied by 1200.
In the second method, the number of pregnancies in the study is divided by the number of menstrual cycles experienced by women in the study, and then multiplied by 1300. 1300 instead of 1200 is used on the basis that the length of the average menstrual cycle is 28 days, or 13 cycles per year.
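The two calculations above can be sketched as follows; the study numbers in the examples are hypothetical:

```python
def pearl_index_months(pregnancies: int, woman_months: float) -> float:
    """Method 1: unintended pregnancies per 100 woman-years of exposure."""
    return pregnancies / woman_months * 1200

def pearl_index_cycles(pregnancies: int, cycles: float) -> float:
    """Method 2: based on menstrual cycles (13 cycles of ~28 days per year)."""
    return pregnancies / cycles * 1300

# 3 pregnancies among 100 women each followed for a year (1200 woman-months):
print(pearl_index_months(3, 1200))   # -> 3 per 100 woman-years
# If every woman conceived in the first month, the index would be 1200, not 100:
print(pearl_index_months(100, 100))  # -> 1200
```

The second example illustrates why the index is only a per-year pregnancy risk when the observed pregnancy rate is low.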
The Pearl Index is sometimes used as a statistical estimation of the number of unintended pregnancies in 100 woman-years of exposure (e.g. 100 women over one year of use, or 10 women over 10 years). It is also sometimes used to compare birth control methods, a lower Pearl index representing a lower chance of getting unintentionally pregnant.
Usually two Pearl Indexes are published from studies of birth control methods:
The index was introduced by Raymond Pearl in 1934. It has remained popular for over eighty years, in large part because of the simplicity of the calculation.
Like all measures of birth control effectiveness, the Pearl Index is a calculation based on the observations of a given sample population. Thus studies of different populations using the same contraceptive will yield different values for the index. The culture and demographics of the population being studied, and the instruction technique used to teach the method, have significant effects on its failure rate.
The Pearl Index has unique shortcomings, however. It assumes a constant failure rate over time. That is an incorrect assumption for two reasons: first, the most fertile couples will get pregnant first. Couples remaining later in the study are, on average, of lower fertility. Second, most birth control methods have better effectiveness in more experienced users. The longer a couple is in the study, the better they are at using the method. So the longer the study length, the lower the Pearl Index will be - and comparisons of Pearl Indexes from studies of different lengths cannot be accurate.
The Pearl Index also provides no information on factors other than accidental pregnancy which may influence effectiveness calculations, such as:
A common misperception is that the highest possible Pearl Index is 100 - i.e. 100% of women in the study conceive in the first year. However, if all the women in the study conceived in the first month, the study would yield a Pearl Index of 1200 or 1300. The Pearl Index is only accurate as a statistical estimation of per-year risk of pregnancy if the pregnancy rate in the study was very low.
In 1966, two birth control statisticians advocated abandonment of the Pearl Index: | https://en.wikipedia.org/wiki?curid=25021 |
Paul Auster
Paul Benjamin Auster (born February 3, 1947) is an American writer and film director. His notable works include "The New York Trilogy" (1987), "Moon Palace" (1989), "The Music of Chance" (1990), "The Book of Illusions" (2002), "The Brooklyn Follies" (2005), "Invisible" (2009), "Sunset Park" (2010), "Winter Journal" (2012), and "4 3 2 1" (2017). His books have been translated into more than forty languages.
Paul Auster was born in Newark, New Jersey, to Jewish middle-class parents of Polish descent, Queenie (née Bogat) and Samuel Auster. He grew up in South Orange, New Jersey and Newark and graduated from Columbia High School in Maplewood.
After graduating from Columbia University with B.A. and M.A. degrees in 1970, he moved to Paris, France where he earned a living translating French literature. Since returning to the U.S. in 1974, he has published poems, essays, and novels, as well as translations of French writers such as Stéphane Mallarmé and Joseph Joubert.
Following his acclaimed debut work, a memoir entitled "The Invention of Solitude", Auster gained renown for a series of three loosely connected stories published collectively as "The New York Trilogy". Although these books allude to the detective genre they are not conventional detective stories organized around a mystery and a series of clues. Rather, he uses the detective form to address existential issues and questions of identity, space, language, and literature, creating his own distinctively postmodern form (and critique of postmodernism) in the process. According to Auster, "...the "Trilogy" grows directly out of "The Invention of Solitude"."
The search for identity and personal meaning has permeated Auster's later publications, many of which concentrate heavily on the role of coincidence and random events ("The Music of Chance") or increasingly, the relationships between people and their peers and environment ("The Book of Illusions", "Moon Palace"). Auster's heroes often find themselves obliged to work as part of someone else's inscrutable and larger-than-life schemes. In 1995, Auster wrote and co-directed the films "Smoke" (which won him the Independent Spirit Award for Best First Screenplay) and "Blue in the Face". Auster's more recent works, from "Oracle Night" (2003) to "4 3 2 1" (2017), have also met with critical acclaim.
He was on the PEN American Center Board of Trustees from 2004 to 2009, and served as Vice President from 2005 to 2007.
In 2012, Auster was quoted as saying in an interview that he would not visit Turkey, in protest of its treatment of journalists. The Turkish Prime Minister Recep Tayyip Erdoğan replied: "As if we need you! Who cares if you come or not?" Auster responded: "According to the latest numbers gathered by International PEN, there are nearly one hundred writers imprisoned in Turkey, not to speak of independent publishers such as Ragıp Zarakolu, whose case is being closely watched by PEN Centers around the world".
Auster's most recent book, "A Life in Words," was published in October 2017 by Seven Stories Press. It brings together three years of conversations with the Danish scholar I.B. Siegumfeldt about each one of his works, both fiction and non-fiction. It is a primary source for understanding Auster's approach to his work.
Because Iran does not recognize international copyright laws, Auster is willing to give Iranian translators permission to publish Persian versions of his works in exchange for a small fee.
Much of the early scholarship about Auster's work saw links between it and the theories of such French writers as Jacques Lacan, Jacques Derrida, and others. Auster himself has denied these influences and has asserted in print that "I've read only one short essay by Lacan, the "Purloined Letter," in the "Yale French Studies" issue on poststructuralism—all the way back in 1966." Other scholars have seen influences in Auster's work of the American transcendentalists of the nineteenth century, as exemplified by Henry David Thoreau and Ralph Waldo Emerson. The transcendentalists believed that the symbolic order of civilization has separated us from the natural order of the world, and that by moving into nature, as Thoreau did, as he described in "Walden", it would be possible to return to this natural order.
Edgar Allan Poe, Samuel Beckett, and Nathaniel Hawthorne have also had a strong influence on Auster's writing. Auster has specifically referred to characters from Poe and Hawthorne in his novels, for example William Wilson in "City of Glass" or Hawthorne's Fanshawe in "The Locked Room", both from "The New York Trilogy".
Paul Auster's recurring subjects include:
"Over the past twenty-five years," opined Michael Dirda in "The New York Review of Books" in 2008, "Paul Auster has established one of the most distinctive niches in contemporary literature." Dirda has also extolled Auster's virtues in "The Washington Post":
Ever since "City of Glass", the first volume of his "New York Trilogy", Auster has perfected a limpid, confessional style, then used it to set disoriented heroes in a seemingly familiar world gradually suffused with mounting uneasiness, vague menace and possible hallucination. His plots – drawing on elements from suspense stories, existential récit, and autobiography – keep readers turning the pages, but sometimes end by leaving them uncertain about what they've just been through.
Writing about Auster's most recent novel, "4 3 2 1", "Booklist" critic Donna Seaman remarked:

Auster has been turning readers' heads for three decades, bending the conventions of storytelling, blurring the line between fiction and autobiography, infusing novels with literary and cinematic allusions, and calling attention to the art of storytelling itself, not with cool, intellectual remove, but rather with wonder, gratitude, daring, and sly humor. ... Auster's fiction is rife with cosmic riddles and rich in emotional complexity. He now presents his most capacious, demanding, eventful, suspenseful, erotic, structurally audacious, funny, and soulful novel to date. ... Auster is conducting a grand experiment, not only in storytelling, but also in the endless nature-versus-nurture debate, the perpetual dance between inheritance and free will, intention and chance, dreams and fate. This elaborate investigation into the big what-if is also a mesmerizing dramatization of the multitude of clashing selves we each harbor within. ... A paean to youth, desire, books, creativity, and unpredictability, it is a four-faceted bildungsroman and an ars poetica, in which Auster elucidates his devotion to literature and art. He writes, 'To combine the strange with the familiar: that was what Ferguson aspired to, to observe the world as closely as the most dedicated realist and yet to create a way of seeing the world through a different, slightly distorting lens.' Auster achieves this and much more in his virtuoso, magnanimous, and ravishing opus.
The English critic James Wood, however, offered Auster little praise:
Clichés, borrowed language, bourgeois bêtises are intricately bound up with modern and postmodern literature. For Flaubert, the cliché and the received idea are beasts to be toyed with and then slain. "Madame Bovary" actually italicizes examples of foolish or sentimental phrasing. Charles Bovary's conversation is likened to a pavement, over which many people have walked; twentieth-century literature, violently conscious of mass culture, extends this idea of the self as a kind of borrowed tissue, full of other people's germs. Among modern and postmodern writers, Beckett, Nabokov, Richard Yates, Thomas Bernhard, Muriel Spark, Don DeLillo, Martin Amis, and David Foster Wallace have all employed and impaled cliché in their work. Paul Auster is probably America's best-known postmodern novelist; his "New York Trilogy" must have been read by thousands who do not usually read avant-garde fiction. Auster clearly shares this engagement with mediation and borrowedness—hence, his cinematic plots and rather bogus dialogue—and yet he does nothing with cliché except use it. This is bewildering, on its face, but then Auster is a peculiar kind of postmodernist. Or is he a postmodernist at all? Eighty per cent of a typical Auster novel proceeds in a manner indistinguishable from American realism; the remaining twenty per cent does a kind of postmodern surgery on the eighty per cent, often casting doubt on the veracity of the plot. Nashe, in "The Music of Chance" (1990), sounds as if he had sprung from a Raymond Carver story (although Carver would have written more interesting prose) ... One reads Auster's novels very fast, because they are lucidly written, because the grammar of the prose is the grammar of the most familiar realism (the kind that is, in fact, comfortingly artificial), and because the plots, full of sneaky turns and surprises and violent irruptions, have what the Times once called "all the suspense and pace of a bestselling thriller." 
There are no semantic obstacles, lexical difficulties, or syntactical challenges. The books fairly hum along. The reason Auster is not a realist writer, of course, is that his larger narrative games are anti-realist or surrealist.
Wood also bemoaned Auster's 'b-movie dialogue', 'absurdity', 'shallow skepticism', 'fake realism' and 'balsa-wood backstories'. Wood highlighted what he saw as the issues in Auster's fiction in a parody:
Roger Phaedo had not spoken to anyone for ten years. He confined himself to his Brooklyn apartment, obsessively translating and retranslating the same short passage from Rousseau's "Confessions." A decade earlier, a mobster named Charlie Dark had attacked Phaedo and his wife. Phaedo was beaten to within an inch of his life; Mary was set on fire, and survived just five days in the I.C.U. By day, Phaedo translated; at night, he worked on a novel about Charlie Dark, who was never convicted. Then Phaedo drank himself senseless with Scotch. He drank to drown his sorrows, to dull his senses, to forget himself. The phone rang, but he never answered it. Sometimes, Holly Steiner, an attractive woman across the hall, would silently enter his bedroom, and expertly rouse him from his stupor. At other times, he made use of the services of Aleesha, a local hooker. Aleesha's eyes were too hard, too cynical, and they bore the look of someone who had already seen too much. Despite that, Aleesha had an uncanny resemblance to Holly, as if she were Holly's double. And it was Aleesha who brought Roger Phaedo back from the darkness. One afternoon, wandering naked through Phaedo's apartment, she came upon two enormous manuscripts, neatly stacked. One was the Rousseau translation, each page covered with almost identical words; the other, the novel about Charlie Dark. She started leafing through the novel. "Charlie Dark!" she exclaimed. "I knew Charlie Dark! He was one tough cookie. That bastard was in the Paul Auster gang. I'd love to read this book, baby, but I'm always too lazy to read long books. Why don't you read it to me?" And that is how the ten-year silence was broken. Phaedo decided to please Aleesha. He sat down, and started reading the opening paragraph of his novel, the novel you have just read.
Auster was married to the writer Lydia Davis. They have one son together, Daniel Auster.
Auster and his second wife, writer Siri Hustvedt (the daughter of professor and scholar Lloyd Hustvedt), were married in 1981, and they live in Brooklyn. Together they have one daughter, Sophie Auster.
He has said his politics are "far to the left of the Democratic Party" but that he votes Democratic because he doubts a socialist candidate could win. He has described right-wing Republicans as "jihadists" and the election of Donald Trump as "the most appalling thing I've seen in politics in my life."
Plain text
In computing, plain text is a loose term for data (e.g. file contents) that represents only characters of readable material, but not its graphical representation or other objects (floating-point numbers, images, etc.). It may also include a limited number of characters that control simple arrangement of text, such as spaces, line breaks, or tabulation characters (although tab characters can "mean" many different things, so are hardly "plain"). Plain text is different from formatted text, where style information is included; from structured text, where structural parts of the document such as paragraphs, sections, and the like are identified; and from binary files in which some portions must be interpreted as binary objects (encoded integers, real numbers, images, etc.).
The term is sometimes used quite loosely, to mean files that contain "only" "readable" content (or just files with nothing that the speaker doesn't prefer). For example, that could exclude any indication of fonts or layout (such as markup, markdown, or even tabs); characters such as curly quotes, non-breaking spaces, soft hyphens, em dashes, and/or ligatures; or other things.
In principle, plain text can be in any encoding, but occasionally the term is taken to imply ASCII. As Unicode-based encodings such as UTF-8 and UTF-16 become more common, that usage may be shrinking.
Plain text is also sometimes used only to exclude "binary" files: those in which at least some parts of the file cannot be correctly interpreted via the character encoding in effect. For example, a file or string consisting of "hello" (in whatever encoding), followed by 4 bytes that express a binary integer that is "not" just a character, is a binary file, not plain text by even the loosest common usages. Put another way, translating a plain text file to a character encoding that uses entirely different numbers to represent characters does not change the meaning (so long as you know what encoding is in use), but for binary files such a conversion "does" change the meaning of at least some parts of the file.
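This distinction can be illustrated with a short Python sketch (the byte values here are arbitrary examples, not taken from any particular file format):

```python
# "hello" decodes cleanly as text, but appending 4 bytes of a raw binary
# integer can produce data that the encoding in effect cannot interpret.
text_part = b"hello"
binary_part = (0xDEADBEEF).to_bytes(4, "big")  # 4 bytes of a binary integer
blob = text_part + binary_part

def is_plain_text(data: bytes, encoding: str = "utf-8") -> bool:
    """Return True if every byte can be interpreted via the given encoding."""
    try:
        data.decode(encoding)
        return True
    except UnicodeDecodeError:
        return False

print(is_plain_text(text_part))        # True
print(is_plain_text(blob))             # False under UTF-8
print(is_plain_text(blob, "latin-1"))  # True -- every byte maps to *some*
                                       # character, but the "text" is gibberish
```

Note that under Latin-1 every byte value is a valid character, so the binary tail still "decodes"; this is why knowing the intended encoding, not just decodability, matters for calling something plain text.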
Files that contain markup or other meta-data are generally considered plain text, so long as the markup is also in directly human-readable form (as in HTML, XML, and so on). As Coombs, Renear, and DeRose argue, punctuation is itself markup, and no one considers punctuation to disqualify a file from being plain text.
The use of plain text rather than binary files enables files to survive much better "in the wild", in part by making them largely immune to computer architecture incompatibilities. For example, all the problems of endianness can be avoided (with encodings such as UCS-2, unlike UTF-8, endianness matters, but it applies uniformly to every character rather than to potentially unknown subsets of the file).
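The endianness point can be made concrete in Python: UTF-16 comes in little- and big-endian forms that reverse the bytes of every code unit, whereas UTF-8 defines a single byte sequence regardless of architecture.

```python
# The same two-character string under three encodings; 'h' is 0x68, 'i' is 0x69.
s = "hi"

print(s.encode("utf-16-le").hex())  # '68006900' -- little-endian 16-bit units
print(s.encode("utf-16-be").hex())  # '00680069' -- big-endian 16-bit units
print(s.encode("utf-8").hex())      # '6869'     -- one byte order, everywhere
```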
According to The Unicode Standard,
Thus, representations such as SGML, RTF, HTML, XML, wiki markup, and TeX, as well as nearly all programming language source code files, are considered plain text. The particular content is irrelevant to whether a file is plain text. For example, an SVG file can express drawings or even bitmapped graphics, but is still plain text.
According to The Unicode Standard, plain text has two main properties in regard to rich text:
The purpose of using plain text today is primarily independence from programs that require their very own special encoding or formatting or file format. Plain text files can be opened, read, and edited with ubiquitous text editors and utilities.
A command-line interface allows people to give commands in plain text and get a response, also typically in plain text.
Many other computer programs are also capable of processing or creating plain text, such as countless programs in DOS, Windows, classic Mac OS, and Unix and its kin; as well as web browsers (a few browsers such as Lynx and the Line Mode Browser produce only plain text for display) and other e-text readers.
Plain text files are almost universal in programming; a source code file containing instructions in a programming language is almost always a plain text file. Plain text is also commonly used for configuration files, which are read for saved settings at the startup of a program.
Plain text is used for much e-mail.
A comment, a ".txt" file, or a TXT Record generally contains only plain text (without formatting) intended for humans to read.
The best format for storing knowledge persistently is plain text, rather than some binary format.
Before the early 1960s, computers were mainly used for number-crunching rather than for text, and memory was extremely expensive. Computers often allocated only 6 bits for each character, permitting only 64 characters—assigning codes for A-Z, a-z, and 0-9 would leave only 2 codes: nowhere near enough. Most computers opted not to support lower-case letters. Thus, early text projects such as Roberto Busa's Index Thomisticus, the Brown Corpus, and others had to resort to conventions such as keying an asterisk preceding letters actually intended to be upper-case.
Fred Brooks of IBM argued strongly for going to 8-bit bytes, because someday people might want to process text; and won. Although IBM used EBCDIC, most text from then on came to be encoded in ASCII, using values from 0 to 31 for (non-printing) control characters, and values from 32 to 126 for graphic characters such as letters, digits, and punctuation (with 127 reserved for the DEL control character). Most machines stored characters in 8 bits rather than 7, ignoring the remaining bit or using it as a parity check.
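These ASCII ranges can be checked directly in Python (using `str.isprintable`, which treats the space as printable and DEL as a control character):

```python
# Codes 0-31 plus 127 (DEL) are control characters; 32-126 are graphic.
control = [c for c in range(128) if not chr(c).isprintable()]
graphic = [c for c in range(128) if chr(c).isprintable()]

print(control == list(range(32)) + [127])  # True
print(graphic == list(range(32, 127)))     # True
```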
The near-ubiquity of ASCII was a great help, but failed to address international and linguistic concerns. The dollar-sign ("$") was not so useful in England, and the accented characters used in Spanish, French, German, and many other languages were entirely unavailable in ASCII (not to mention characters used in Greek, Russian, and most Eastern languages). Many individuals, companies, and countries defined extra characters as needed—often reassigning control characters, or using values in the range from 128 to 255. Using values above 128 conflicts with using the 8th bit as a parity check, but the parity usage gradually died out.
These additional characters were encoded differently in different countries, making texts impossible to decode without figuring out the originator's rules. For instance, a browser might display "¬A" rather than the intended character if it tried to interpret one character set as another. The International Organisation for Standardisation (ISO) eventually developed several code pages under ISO 8859 to accommodate various languages. The first of these (ISO 8859-1) is also known as "Latin-1", and covers the needs of most European languages that use Latin-based characters (there was not quite enough room to cover them all). ISO 2022 then provided conventions for "switching" between different character sets in mid-file. Many other organisations developed variations on these, and for many years Windows and Macintosh computers used incompatible variations.
The text-encoding situation became more and more complex, leading to efforts by ISO and by the Unicode Consortium to develop a single, unified character encoding that could cover all known (or at least all currently known) languages. After some conflict, these efforts were unified. Unicode currently allows for 1,114,112 code values, and assigns codes covering nearly all modern text writing systems, as well as many historical ones and for many non-linguistic characters such as printer's dingbats, mathematical symbols, etc.
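The figure of 1,114,112 code values follows directly from Unicode's design: 17 planes of 65,536 (2^16) code points each, spanning U+0000 through U+10FFFF. A quick check in Python:

```python
import sys

planes = 17
per_plane = 2 ** 16

print(planes * per_plane)          # 1114112
print(hex(planes * per_plane))     # '0x110000' -- one past U+10FFFF
print(sys.maxunicode == 0x10FFFF)  # True in CPython 3
```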
Text is considered plain text regardless of its encoding. To properly understand or process it, the recipient must know (or be able to figure out) what encoding was used; however, they need not know anything about the computer architecture that was used, or about the binary structures defined by whatever program (if any) created the data.
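A short Python sketch of this encoding-independence: converting plain text between encodings changes the bytes, but not the meaning, so long as the encoding in use is known.

```python
# The same text under two encodings: Latin-1 uses one byte for é,
# UTF-8 uses two, yet both round-trip to the identical string.
s = "café"

latin1_bytes = s.encode("latin-1")  # b'caf\xe9'
utf8_bytes = s.encode("utf-8")      # b'caf\xc3\xa9'

print(latin1_bytes == utf8_bytes)                                    # False
print(latin1_bytes.decode("latin-1") == utf8_bytes.decode("utf-8"))  # True
```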
Perhaps the most common way of explicitly stating the specific encoding of plain text is with a MIME type.
For email and http, the default MIME type is "text/plain" -- plain text without markup.
Another MIME type often used in both email and http is "text/html; charset=UTF-8" -- plain text represented using UTF-8 character encoding with HTML markup.
Another common MIME type is "application/json" -- plain text represented using UTF-8 character encoding with JSON markup.
When a document is received without any explicit indication of the character encoding, some applications use charset detection to attempt to guess what encoding was used.
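A deliberately naive sketch of the idea behind charset detection: try candidate encodings in order and report the first that decodes cleanly. (Real detectors, such as the third-party chardet library, use statistical models of byte frequencies rather than this simple trial-and-error; the candidate list here is an arbitrary example.)

```python
def guess_encoding(data: bytes, candidates=("ascii", "utf-8", "latin-1")):
    """Return the first candidate encoding that decodes data without error."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

print(guess_encoding(b"hello"))                # 'ascii'
print(guess_encoding("café".encode("utf-8")))  # 'utf-8'
print(guess_encoding(b"caf\xe9"))              # 'latin-1' -- a lone 0xE9
                                               # byte is not valid UTF-8
```

Because Latin-1 accepts every byte value, it acts as a catch-all here; ordering the candidates from strictest to loosest is what makes the guess meaningful.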
ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters known as the "C0 set": codes originally intended not to represent printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. They include common characters like the newline and the tab character.
In 8-bit character sets such as Latin-1 and the other ISO 8859 sets, the first 32 characters of the "upper half" (128 to 159) are also control codes, known as the "C1 set". They are rarely used directly; when they turn up in documents which are ostensibly in an ISO 8859 encoding, their code positions generally refer instead to the characters at that position in a proprietary, system-specific encoding, such as Windows-1252 or Mac OS Roman, that use the codes to instead provide additional graphic characters.
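The C1 remapping can be demonstrated with a single byte: 0x93 is a C1 control code in ISO 8859-1 but a curly quotation mark in Windows-1252.

```python
# The same bytes under the two encodings (0x93/0x94 chosen as examples).
b = b"\x93quoted\x94"

print(repr(b.decode("latin-1")))       # C1 controls U+0093/U+0094 (invisible)
print(b.decode("windows-1252"))        # curly-quoted: “quoted”
```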
Unicode defines additional control characters, including bi-directional text direction override characters (used to explicitly mark right-to-left writing inside left-to-right writing and the other way around) and variation selectors to select alternate forms of CJK ideographs, emoji and other characters.
Presbyterian Church (USA)
The Presbyterian Church (USA), abbreviated PC(USA), is a mainline Protestant Christian denomination in the United States. A part of the Reformed tradition, it is the largest Presbyterian denomination in the US, and known for its relatively progressive stance on doctrine. The PC(USA) was established by the 1983 merger of the Presbyterian Church in the United States, whose churches were located in the Southern and border states, with the United Presbyterian Church in the United States of America, whose congregations could be found in every state. The similarly named Presbyterian Church in America is a separate denomination whose congregations can also trace their history to the various schisms and mergers of Presbyterian churches in the United States.
The denomination had 1,352,678 active members and 19,243 ordained ministers in 9,161 congregations at the end of 2018. This number does not include baptized members who are not confirmed, nor inactive members. For example, in 2005, the PC(USA) claimed 318,291 baptized, but not confirmed, members and nearly 500,000 inactive members in addition to active members. Its membership has been declining over the past several decades; the trend has significantly accelerated in recent years, partly due to breakaway congregations. Average denominational worship attendance dropped to 565,467 in 2017 from 748,774 in 2013. The PC(USA) is the largest Presbyterian denomination in the United States.
Presbyterians trace their history to the Protestant Reformation in the 16th century. The Presbyterian heritage, and much of its theology, began with the French theologian and lawyer John Calvin (1509–64), whose writings solidified much of the Reformed thinking that came before him in the form of the sermons and writings of Huldrych Zwingli. From Calvin's headquarters in Geneva, the Reformed movement spread to other parts of Europe. John Knox, a former Roman Catholic priest from Scotland who studied with Calvin in Geneva, took Calvin's teachings back to Scotland and led the Scottish Reformation of 1560. Because of this reform movement, the Church of Scotland embraced Reformed theology and presbyterian polity. The Ulster Scots brought their Presbyterian faith with them to Ireland, where they laid the foundation of what would become the Presbyterian Church in Ireland.
Immigrants from Scotland and Ireland brought Presbyterianism to America as early as 1640, and immigration would remain a large source of growth throughout the colonial era. Another source of growth were a number of New England Puritans who left the Congregational churches because they preferred presbyterian polity. In 1706, seven ministers led by Francis Makemie established the first American presbytery at Philadelphia, which was followed by the creation of the Synod of Philadelphia in 1717.
The First Great Awakening and the revivalism it generated had a major impact on American Presbyterians. Ministers such as William and Gilbert Tennent, a friend of George Whitefield, emphasized the necessity of a conscious conversion experience and pushed for higher moral standards among the clergy. Disagreements over revivalism, itinerant preaching, and educational requirements for clergy led to a division known as the Old Side–New Side Controversy that lasted from 1741 to 1758.
In the South, the Presbyterians were evangelical dissenters, mostly Scotch-Irish, who expanded into Virginia between 1740 and 1758. Spangler (2008) argues they were more energetic and held frequent services better attuned to the frontier conditions of the colony. Presbyterianism grew in frontier areas where the Anglicans had made little impression. Uneducated whites and blacks were attracted to the emotional worship of the denomination, its emphasis on biblical simplicity, and its psalm singing. Some local Presbyterian churches, such as Briery in Prince Edward County, owned slaves. The Briery church purchased five slaves in 1766 and raised money for church expenses by hiring them out to local planters.
After the United States achieved independence from Great Britain, Presbyterian leaders felt that a national Presbyterian denomination was needed, and the Presbyterian Church in the United States of America (PCUSA) was organized. The first general assembly was held in Philadelphia in 1789. John Witherspoon, president of Princeton University and the only minister to sign the Declaration of Independence, was the first moderator.
Not all American Presbyterians participated in the creation of the PCUSA General Assembly because the divisions then occurring in the Church of Scotland were replicated in America. In 1751, Scottish Covenanters began sending ministers to America, and the Seceders were doing the same by 1753. In 1858, the majority of Covenanters and Seceders merged to create the United Presbyterian Church of North America (UPCNA).
In the decades after independence, many Americans including Calvinists (Presbyterians and Congregationalists), Methodists, and Baptists were swept up in Protestant religious revivals that would later become known as the Second Great Awakening. Presbyterians also helped to shape voluntary societies that encouraged educational, missionary, evangelical, and reforming work. As its influence grew, many non-Presbyterians feared that the PCUSA's informal influence over American life might effectively make it an established church.
The Second Great Awakening divided the PCUSA over revivalism and fear that revivalism was leading to an embrace of Arminian theology. In 1810, frontier revivalists split from the PCUSA and organized the Cumberland Presbyterian Church. Throughout the 1820s, support and opposition to revivalism hardened into well-defined factions, the New School and Old School respectively. By 1838, the Old School–New School Controversy had divided the PCUSA. There were now two general assemblies each claiming to represent the PCUSA.
In 1858, the New School split along sectional lines when its Southern synods and presbyteries established the pro-slavery United Synod of the Presbyterian Church. Old School Presbyterians followed in 1861 after the start of hostilities in the American Civil War with the formation of the Presbyterian Church in the Confederate States of America. The Presbyterian Church in the CSA absorbed the smaller United Synod in 1864. After the war, this body was renamed the Presbyterian Church in the United States (PCUS) and was commonly nicknamed the "Southern Presbyterian Church" throughout its history. In 1869, the northern PCUSA's Old School and New School factions also reunited; the reunited body was known as the "Northern Presbyterian Church".
The early part of the 20th century saw continued growth in both major sections of the church. It also saw the growth of Fundamentalist Christianity (a movement of those who believed in the literal interpretation of the Bible as the fundamental source of the religion) as distinguished from Modernist Christianity (a movement holding the belief that Christianity needed to be re-interpreted in light of modern scientific theories such as evolution or the rise of degraded social conditions brought on by industrialization and urbanization).
Open controversy was sparked in 1922, when Harry Emerson Fosdick, a modernist and a Baptist pastoring a PCUSA congregation in New York City, preached a sermon entitled "Shall the Fundamentalists Win?" The crisis reached a head the following year when, in response to the New York Presbytery's decision to ordain a couple of men who could not affirm the virgin birth, the PCUSA's General Assembly reaffirmed the "five fundamentals": the deity of Christ, the Virgin Birth, the vicarious atonement, the inerrancy of Scripture, and Christ's miracles and resurrection. This move against modernism caused a backlash in the form of the "Auburn Affirmation" — a document embracing liberalism and modernism. The liberals began a series of ecclesiastical trials of their opponents, expelled them from the church and seized their church buildings. Under the leadership of J. Gresham Machen, a former Princeton Theological Seminary New Testament professor who had founded Westminster Theological Seminary in 1929, and who was a PCUSA minister, many of these conservatives would establish what became known as the Orthodox Presbyterian Church in 1936. Although the 1930s and 1940s and the ensuing neo-orthodox theological consensus mitigated much of the polemics during the mid-20th century, disputes erupted again beginning in the mid-1960s over the extent of involvement in the civil rights movement and the issue of ordination of women, and, especially since the 1990s, over the issue of ordination of homosexuals.
The Presbyterian Church in the United States of America was joined by the majority of the Cumberland Presbyterian Church, mostly congregations in the border and Southern states, in 1906. In 1920, it absorbed the Welsh Calvinist Methodist Church. The United Presbyterian Church of North America merged with the PCUSA in 1958 to form the United Presbyterian Church in the United States of America (UPCUSA).
Under Eugene Carson Blake, the UPCUSA's stated clerk, the denomination entered into a period of social activism and ecumenical endeavors, which culminated in the development of the Confession of 1967 which was the church's first new confession of faith in three centuries. The 170th General Assembly in 1958 authorized a committee to develop a brief contemporary statement of faith. The 177th General Assembly in 1965 considered and amended the draft confession and sent a revised version for general discussion within the church. The 178th General Assembly in 1966 accepted a revised draft and sent it to presbyteries throughout the church for final ratification. As the confession was ratified by more than 90% of all presbyteries, the 178th General Assembly finally adopted it in 1967. The UPCUSA also adopted a "Book of Confessions" in 1967, which would include the Confession of 1967, the Westminster Confession and Westminster Shorter Catechism, the Heidelberg Catechism, the Second Helvetic and Scots Confessions and the Barmen Declaration.
An attempt to reunite the United Presbyterian Church in the USA with the Presbyterian Church in the United States in the late 1950s failed when the latter church was unwilling to accept ecclesiastical centralization. In the meantime, a conservative group broke away from the Presbyterian Church in the United States in 1973, mainly over the issues of women's ordination and a perceived drift toward theological liberalism. This group formed the Presbyterian Church in America (PCA).
Attempts at union between the churches (UPCUSA and PCUS) were renewed in the 1970s, culminating in the merger of the two churches to form the Presbyterian Church (USA) on June 10, 1983. At the time of the merger, the churches had a combined membership of 3,121,238. Many of the efforts were spearheaded by the financial and outspoken activism of retired businessman Thomas Clinton who died two years before the merger. A new national headquarters was established in Louisville, Kentucky in 1988 replacing the headquarters of the UPCUSA in New York City and the PCUS located in Atlanta, Georgia.
The merger essentially consolidated moderate-to-liberal American Presbyterians into one body. Other US Presbyterian bodies (the Cumberland Presbyterians being a partial exception) place greater emphasis on doctrinal Calvinism, literalist hermeneutics, and conservative politics.
For the most part, PC(USA) Presbyterians, not unlike similar mainline traditions such as the Episcopal Church and the United Church of Christ, are fairly progressive on matters such as doctrine, environmental issues, sexual morality, and economic issues, though the denomination remains divided and conflicted on these issues. Like other mainline denominations, the PC(USA) has also seen a great deal of demographic aging, with fewer new members and declining membership since 1967.
In the 1990s, 2000s, and 2010s, the General Assembly of the PC(USA) adopted several social justice initiatives covering a range of topics, including stewardship of God's creation, world hunger, homelessness, and LGBT issues. As of 2011, the PC(USA) no longer excludes partnered gay and lesbian ministers from the ministry. Previously, the PC(USA) required its ministers to live "chastely in singleness or with fidelity in marriage." Currently, the PC(USA) permits teaching elders to perform same-gender marriages. On a congregational basis, individual sessions (congregational governing bodies) may choose to permit same-gender marriages.
These changes have led to several renewal movements and denominational splinters. Some conservative-minded groups in the PC(USA), such as the Confessing Movement and the Presbyterian Lay Committee (formed in the mid-1960s) have remained in the main body, rather than leaving to form new, break-away groups.
Several Presbyterian denominations have split from the PC(USA) or its predecessors over the years. For example, the Orthodox Presbyterian Church broke away from the Presbyterian Church in the United States of America (PCUSA) in 1936.
More recently formed Presbyterian denominations have attracted PC(USA) congregations disenchanted with the direction of the denomination but wishing to continue in a Reformed, Presbyterian tradition. The Presbyterian Church in America (PCA), which does not allow ordained female clergy, separated from the Presbyterian Church in the United States in 1973 and has subsequently become the second largest Presbyterian denomination in the United States. The Evangelical Presbyterian Church (EPC), which gives local presbyteries the option of allowing ordained female pastors, broke away from the United Presbyterian Church and incorporated in 1981. A PC(USA) renewal movement, the Fellowship of Presbyterians (now The Fellowship Community), held several national conferences serving disaffected Presbyterians. Its organizing efforts culminated in the founding of ECO: A Covenant Order of Evangelical Presbyterians, a new Presbyterian denomination that allows the ordination of women but is more conservative theologically than the PC(USA).
In 2013 the presbyteries ratified the General Assembly's 2012 vote to allow the ordination of openly gay persons to the ministry. In 2014 the General Assembly voted to amend the church's constitution to define marriage as the union of two persons instead of the union of a man and a woman, a change the presbyteries ratified in 2015. These decisions have led to the departure of several hundred congregations. The majority of churches leaving the Presbyterian Church (USA) have chosen to join the Evangelical Presbyterian Church or ECO; few have chosen to join the larger, more conservative Presbyterian Church in America, which does not permit female clergy.
Since 1983 the Presbyterian Youth Triennium has been held every three years at Purdue University in West Lafayette, Indiana, and is open to Presbyterian high school students throughout the world. The first Youth Triennium was held in 1980 at Indiana University. The conference is a joint effort of the Presbyterian Church (USA), the largest Presbyterian denomination in the nation; the Cumberland Presbyterian Church; and the Cumberland Presbyterian Church in America, the first African-American denomination to embrace Presbyterianism in the Reformed tradition.
The Constitution of the PC(USA) is composed of two portions: Part I, the "Book of Confessions", and Part II, the "Book of Order". The "Book of Confessions" outlines the beliefs of the PC(USA) by declaring the creeds by which the Church's leaders are instructed and led. Complementing that is the "Book of Order", which gives the rationale and description for the organization and function of the Church at all levels. The "Book of Order" is currently divided into four sections: 1) The Foundations of Presbyterian Polity, 2) The Form of Government, 3) The Directory for Worship, and 4) The Rules of Discipline.
The Presbyterian Church (USA) has a representative form of government, known as presbyterian polity, with four levels of government and administration, as outlined in the "Book of Order". The councils (governing bodies) are as follows:
At the congregational level, the governing body is called the "session", from the Latin word "sessio", meaning "a sitting". The session is made up of the pastors of the church and all elders elected and installed to active service. Following a pattern set in the first congregation of Christians in Jerusalem described in the Book of Acts in the New Testament, the church is governed by "presbyters" (a term and category that includes elders and Ministers of Word and Sacrament, historically also referred to as "ruling or canon elders" because they "measure" the spiritual life and work of a congregation and ministers as "teaching elders").
The elders are nominated by a nominating committee of the congregation; in addition, nominations from the floor are permissible. Elders are then elected by the congregation. All elders elected to serve on the congregation's session are required to undergo a period of study and preparation for this order of ministry, after which the session examines the elders-elect as to their personal faith; their knowledge of the doctrine, government, and discipline contained in the Constitution of the church; and the duties of the office of elder. If the examination is approved, the session appoints a day for the service of ordination and installation. Session meetings are normally moderated by a called and installed pastor, and minutes are recorded by a clerk, who is also an ordained presbyter. If the congregation does not have an installed pastor, the presbytery appoints a minister member or elected member of the presbytery as moderator with the concurrence of the local church session. The moderator presides over the session as first among equals and also serves as a "liturgical" bishop at the ordination and installation of elders and deacons within a particular congregation.
The session guides and directs the ministry of the local church, including almost all spiritual and fiduciary leadership. The congregation as a whole has only the responsibility to vote on: 1) the call of the pastor (subject to presbytery approval) and the terms of call (the church's provision for compensating and caring for the pastor); 2) the election of its own officers (elders & deacons); 3) buying, mortgaging, or selling real property. All other church matters such as the budget, personnel matters, and all programs for spiritual life and mission, are the responsibility of the session. In addition, the session serves as an ecclesiastical court to consider disciplinary charges brought against church officers or members.
The session also oversees the work of the deacons, a second body of leaders also tracing its origins to the Book of Acts. The deacons are a congregational-level group whose duty is "to minister to those who are in need, to the sick, to the friendless, and to any who may be in distress both within and beyond the community of faith." In some churches, the responsibilities of the deacons are taken care of by the session, so there is no board of deacons in that church. In some states, churches are legally incorporated and members or elders of the church serve as trustees of the corporation. However, "the power and duties of such trustees shall not infringe upon the powers and duties of the Session or of the board of deacons." The deacons are a ministry board but not a governing body.
A "presbytery" is formed by all the congregations and the Ministers of Word and Sacrament in a geographic area together with elders selected (proportional to congregation size) from each of the congregations. Four special presbyteries are "non-geographical" in that they overlay other English-speaking presbyteries, though they are geographically limited to the boundaries of a particular synod (see below); it may be more accurate to refer to them as "trans-geographical." Three PC(USA) synods have a non-geographical presbytery for Korean language Presbyterian congregations, and one synod has a non-geographical presbytery for Native American congregations, the Dakota Presbytery. There are currently 172 presbyteries for the nearly 10,000 congregations in the PC(USA).
Only the presbytery (not a congregation, session, synod, or General Assembly) has the responsibility and authority to ordain church members to the ordered ministry of Word and Sacrament (also referred to as Teaching Elders), to install ministers to (and/or remove them from) congregations as pastors, and to remove a minister from the ministry. A Presbyterian minister is a member of a presbytery. The General Assembly cannot ordain or remove a Teaching Elder, but the Office of the General Assembly does maintain and publish a national directory with the help of each presbytery's stated clerk. Bound versions are published biennially with the minutes of the General Assembly. A pastor cannot be a member of the congregation he or she serves as pastor, because his or her primary ecclesiastical accountability lies with the presbytery. Members of the congregation generally choose their own pastor with the assistance and support of the presbytery. The presbytery must approve the choice and officially install the pastor at the congregation, or approve the covenant for a temporary pastoral relationship. Additionally, the presbytery must approve if either the congregation or the pastor wishes to dissolve that pastoral relationship.
The presbytery has authority over many affairs of its local congregations. Only the presbytery can approve the establishment, dissolution, or merger of congregations. The presbytery also maintains a Permanent Judicial Commission, which acts as a court of appeal from sessions, and which exercises original jurisdiction in disciplinary cases against minister members of the presbytery.
A presbytery has two elected officers: a moderator and a stated clerk. The Moderator of the presbytery is elected annually and is either a minister member or an elder commissioner from one of the presbytery's congregations. The Moderator presides at all presbytery assemblies and is the chief overseer at the ordination and installation of ministers in that presbytery. The stated clerk is the chief ecclesial officer and serves as the presbytery's executive secretary and parliamentarian in accordance with the church Constitution and Robert's Rules of Order. While the moderator of a presbytery normally serves one year, the stated clerk normally serves a designated number of years and may be re-elected indefinitely by the presbytery. Additionally, an Executive Presbyter (sometimes designated as General Presbyter, Pastor to Presbytery, Transitional Presbyter) is often elected as a staff person to care for the administrative duties of the presbytery, often with the additional role of a pastor to the pastors. Presbyteries may be creative in the designation and assignment of duties for their staff. A presbytery is required to elect a Moderator and a Clerk, but the practice of hiring staff is optional. Presbyteries must meet at least twice a year, but they have the discretion to meet more often and most do.
"See "Map of Presbyteries and Synods"".
Presbyteries are organized within a geographical region to form a "synod". Each synod contains at least three presbyteries, and its elected voting membership is to include both elders and Ministers of Word and Sacrament in equal numbers. Synods have various duties depending on the needs of the presbyteries they serve. In general, their responsibilities (G-12.0102) might be summarized as: developing and implementing the mission of the church throughout the region, facilitating communication between presbyteries and the General Assembly, and mediating conflicts between the churches and presbyteries. Every synod elects a Permanent Judicial Commission, which has original jurisdiction in remedial cases brought against its constituent presbyteries, and which also serves as an ecclesiastical court of appeal for decisions rendered by its presbyteries' Permanent Judicial Commissions. Synods are required to meet at least biennially. Meetings are moderated by an elected synod Moderator with support of the synod's Stated Clerk. There are currently 16 synods in the PC(USA) and they vary widely in the scope and nature of their work. An ongoing current debate in the denomination is over the purpose, function, and need for synods.
See also the List of Presbyterian Church (USA) synods and presbyteries.
The "General Assembly" is the highest governing body of the PC(USA). Until the 216th assembly met in Richmond, Virginia in 2004, the General Assembly met annually; since 2004, the General Assembly has met biennially in even-numbered years. It consists of commissioners elected by presbyteries (not synods), and its voting membership is proportioned with parity between elders and Ministers of Word and Sacrament. There are many important responsibilities of the General Assembly. Among them, "The Book of Order" lists these four:
The General Assembly elects a moderator at each assembly, who moderates the remaining sessions of that assembly meeting and continues to serve until the next assembly convenes (two years later) to elect a new moderator or co-moderators. Denise Anderson and Jan Edmiston were elected as the first co-moderators at the 222nd General Assembly (2016). At the 223rd General Assembly in St. Louis, Missouri, Elder Vilmarie Cintrón-Olivieri and the Rev. Cindy Kohlmann were elected co-moderators. A complete listing of past moderators is maintained in a separate article.
A Stated Clerk is elected to a four-year term and is responsible for the Office of the General Assembly, which conducts the ecclesiastical work of the church. The Office of the General Assembly carries out most of the ecumenical functions and all of the constitutional functions at the Assembly. Gradye Parsons served as Stated Clerk from 2008 and was unanimously reelected in 2012. Parsons did not stand for re-election at the 222nd General Assembly meeting in 2016, and J. Herbert Nelson was elected Stated Clerk at that meeting in Portland. Nelson is the first African American to be elected to the office and is a third-generation Presbyterian pastor.
The Stated Clerk is also responsible for the records of the denomination, a function formalized in 1925 when the General Assembly created the "Department of Historical Research and Conservation" as part of the Office of the General Assembly. The current "Department of History" is also known as the Presbyterian Historical Society.
Six agencies carry out the work of the General Assembly. These are the Office of the General Assembly, the Presbyterian Publishing Corporation, the Presbyterian Investment and Loan Program, the Board of Pensions, the Presbyterian Foundation, and the Presbyterian Mission Agency (formerly known as the General Assembly Mission Council).
The General Assembly elects members of the Presbyterian Mission Agency Board (formerly the General Assembly Mission Council). There are 48 elected members of the Presbyterian Mission Agency Board (40 voting members; 17 non-voting delegates), who represent synods, presbyteries, and the church at-large. Members serve one six-year term, with the exception of the present Moderator of the General Assembly (one 2-year term), the past Moderator of the General Assembly (one 2-year term), the moderator of Presbyterian Women (one 3-year term), ecumenical advisory members (one 2-year term, eligible for two additional terms), and stewardship and audit committee at-large members (one 2-year term, eligible for two additional terms). Among the elected members' major responsibilities is the coordination of the work of the program areas in light of General Assembly mission directions, objectives, goals, and priorities. The PMAB meets three times a year. The General Assembly elects an Executive Director of the Presbyterian Mission Agency, who is the top administrator overseeing the mission work of the PC(USA). Ruling Elder Linda Bryant Valentine served as Executive Director (2006–2015), followed by Interim Executive Director Ruling Elder Tony De La Rosa; Teaching Elder Diane Givens Moffett was elected in 2018.
The General Assembly Permanent Judicial Commission (GAPJC) is the highest Church court of the denomination. It is composed of one member elected by the General Assembly from each of its 16 constituent synods. It has ultimate appellate jurisdiction over all Synod Permanent Judicial Commission cases involving issues of the Church Constitution, and original jurisdiction over a small range of cases. The General Assembly Permanent Judicial Commission issues Authoritative Interpretations of the Constitution of the Presbyterian Church (USA) through its decisions.
The denomination maintains affiliations with ten seminaries in the United States. These are:
Two other seminaries are related to the PC(USA) by covenant agreement: Auburn Theological Seminary in New York, New York, and Evangelical Seminary of Puerto Rico in San Juan, Puerto Rico.
There are numerous colleges and universities throughout the United States affiliated with PC(USA). For a complete list, see the article Association of Presbyterian Colleges and Universities. For more information, see the article PC(USA) seminaries.
While not affiliated with the PC(USA), the president of Fuller Theological Seminary, Mark Labberton, is an ordained minister of the PC(USA) and the seminary educates many candidates for ministry.
When the United Presbyterian Church in the USA merged with the Presbyterian Church in the United States in 1983, the new denomination had 3,131,228 members. Statistics show a steady decline since 1983. (The combined membership of the PCUS and the United Presbyterian Church peaked in 1965 at 4.25 million communicant members.)
The PC(USA) has had the sharpest membership decline among Protestant denominations in the United States, losing more than a million members over the 14 years from 2005 to 2019. As of 2019, the denomination has 1.3 million members and about 9,000 local congregations.
The average local Presbyterian Church has 148 members (the mean in 2018). About 37% of the total congregations report between 1 and 50 members. Another 23% report between 51 and 100 members. The average worship attendance of a local Presbyterian congregation is 77 (51.7% of members). The largest congregation in the PC(USA) is Peachtree Presbyterian Church in Atlanta, Georgia, with a reported membership of 8,989 (2009). It was reported that about 31% of the Presbyterian members are over 71 years old (2018).
Most PC(USA) members are white (92.9%). Other racial and ethnic members include African-Americans (3.1% of the total membership of the denomination), Asians (2.3%), Hispanics (1.2%), Native Americans (0.2%), and others (0.3%). Despite declines in the total membership of the PC(USA), the percentage of racial-ethnic minority members has stayed about the same since 1995. The ratio of female members (58%) to male members (42%) has also remained stable since the mid-1960s.
Presbyterians are among the wealthiest Christian denominations in the United States. Presbyterians also tend to be better educated, with a high proportion holding graduate (64%) and post-graduate (26%) degrees.
According to a 2014 study by the Pew Research Center, Presbyterians ranked as the fourth most financially successful religious group in the United States, with 32% of Presbyterians living in households with incomes of at least $100,000.
The session of the local congregation has a great deal of freedom in the style and ordering of worship within the guidelines set forth in the Directory for Worship section of the "Book of Order". Worship varies from congregation to congregation. The order may be very traditional and highly liturgical, or it may be very simple and informal. This variance is not unlike that seen in the "High Church" and "Low Church" styles of the Anglican Church. The "Book of Order" suggests a worship service ordered around five themes: "gathering around the Word, proclaiming the Word, responding to the Word, the sealing of the Word, and bearing and following the Word into the world." Prayer is central to the service and may be silent, spoken, sung, or read in unison (including The Lord's Prayer). Music plays a large role in most PC(USA) worship services and ranges from chant to traditional Protestant hymns, to classical sacred music, to more modern music, depending on the preference of the individual church and is offered prayerfully and not "for entertainment or artistic display." Scripture is read and usually preached upon. An offering is usually taken.
The Directory for Worship in the Book of Order provides the directions for what must be, or may be included in worship. During the 20th century, Presbyterians were offered optional use of liturgical books:
For more information, see Liturgical book of the Presbyterian Church (USA)
In regard to vestments, the Directory for Worship leaves that decision up to the ministers. Thus, on a given Sunday morning service, a congregation may see the minister leading worship in street clothes, Geneva gown, or an alb. Among the Paleo-orthodoxy and emerging church Presbyterians, clergy are moving away from the traditional black Geneva gown and reclaiming not only the more ancient Eucharist vestments of alb and chasuble, but also cassock and surplice (typically a full length Old English style surplice which resembles the Celtic alb, an ungirdled liturgical tunic of the old Gallican Rite).
The Service for the Lord's Day is the name given to the general format or ordering of worship in the Presbyterian Church as outlined in its Constitution's Book of Order. A great deal of liberty is given toward worship in the denomination, so while the underlying order and components of the Service for the Lord's Day are extremely common, the service varies from congregation to congregation and region to region.
A typical Presbyterian Church (USA) order of worship is that of Madison Avenue Presbyterian Church in New York City: http://www.mapc.com/worship/order-of-worship/
The creation of the Service for the Lord's Day was one of the most positive contributions of the Worshipbook of 1970. The Book of Common Worship of 1993 leaned heavily upon this service.
The Presbyterian Church (USA) has, in the past, been a leading United States denomination in mission work, and many hospitals, clinics, colleges and universities worldwide trace their origins to the pioneering work of Presbyterian missionaries who founded them more than a century ago.
Currently, the church supports about 215 missionaries abroad annually. Many churches sponsor missionaries abroad at the session level, and these are not included in official statistics.
A vital part of the world mission emphasis of the denomination is building and maintaining relationships with Presbyterian, Reformed and other churches around the world, even if this is not usually considered missions.
The PC(USA) is a leader in disaster assistance relief and also participates in or relates to work in other countries through ecumenical relationships, in what is usually considered not missions but diaconal work.
The General Assembly of the Presbyterian Church (USA) determines and approves ecumenical statements, agreements, and maintains correspondence with other Presbyterian and Reformed bodies, other Christians churches, alliances, councils, and consortia. Ecumenical statements and agreements are subject to the ratification of the presbyteries. The following are some of the major ecumenical agreements and partnerships.
The church is committed to "engage in bilateral and multilateral dialogues with other churches and traditions in order to remove barriers of misunderstanding and establish common affirmations." As of 2012 it is in dialogue with the Episcopal Church, the Moravian Church, the Korean Presbyterian Church in America, the Cumberland Presbyterian Church, the Cumberland Presbyterian Church in America, and the US Conference of Catholic Bishops. It also participates in international dialogues through the World Council of Churches and the World Communion of Reformed Churches. The most recent international dialogues include Pentecostal churches, the Seventh-day Adventist Church, the Orthodox Church in America, and others.
In 2011 the National Presbyterian Church in Mexico, in 2012 the Mizoram Presbyterian Church, and in 2015 the Independent Presbyterian Church of Brazil and the Evangelical Presbyterian and Reformed Church in Peru severed ties with the PC(USA) because of its teaching with regard to homosexuality.
The Presbyterian Church (USA) is in corresponding partnership with the National Council of Churches, the World Communion of Reformed Churches, Christian Churches Together, and the World Council of Churches. It is a member of Churches for Middle East Peace.
In 1997 the PCUSA and three other churches of Reformation heritage: the Evangelical Lutheran Church in America, the Reformed Church in America and the United Church of Christ, acted on an ecumenical proposal of historic importance, known as "A Formula of Agreement". The timing reflected a doctrinal consensus which had been developing over the past thirty-two years coupled with an increasing urgency for the church to proclaim a gospel of unity in contemporary society. In light of identified doctrinal consensus, desiring to bear visible witness to the unity of the Church, and hearing the call to engage together in God's mission, it was recommended:
The term "full communion" is understood here to specifically mean that the four churches:
The agreement assumed the doctrinal consensus articulated in A Common Calling: The Witness of Our Reformation Churches in North America Today, and is to be viewed in concert with that document. The purpose of A Formula of Agreement is to elucidate the complementarity of affirmation and admonition as the basic principle of entering into full communion and the implications of that action as described in A Common Calling.
The 209th General Assembly (1997) approved A Formula of Agreement and in 1998 the 210th General Assembly declared full communion among these Protestant bodies.
In June 2010, the World Alliance of Reformed Churches merged with the Reformed Ecumenical Council to form the World Communion of Reformed Churches. The result was a form of full communion similar to that outlined in the Formula of Agreement, including the orderly exchange of ministers.
The PC(USA) is one of nine denominations that joined together to form the Consultation on Church Union, which initially sought a merger of the denominations. In 1998 the Seventh Plenary of the Consultation on Church Union approved a document "Churches in Covenant Communion: The Church of Christ Uniting" as a plan for the formation of a covenant communion of churches. In 2002 the nine denominations inaugurated the new relationship and became known as Churches Uniting in Christ. The partnership is considered incomplete until the partnering communions reconcile their understanding of ordination and devise an orderly exchange of clergy.
Paragraph G-6.0106b of the Book of Order, which was adopted in 1996, prohibited the ordination of those who were not faithful in heterosexual marriage or chaste in singleness. This paragraph was included in the Book of Order from 1997 to 2011, and was commonly referred to by its pre-ratification designation, "Amendment B". Several attempts were made to remove this from the Book of Order, ultimately culminating in its removal in 2011. In 2011, the Presbyteries of the PC(USA) passed Amendment 10-A permitting congregations to ordain openly gay and lesbian elders and deacons, and allowing presbyteries to ordain ministers without reference to the fidelity/chastity provision, saying "governing bodies shall be guided by Scripture and the confessions in applying standards to individual candidates".
Many Presbyterian scholars, pastors, and theologians have been heavily involved in the debate over homosexuality over the years. The Presbyterian Church of India dissolved its cooperation with the Presbyterian Church (USA) in 2012, when the PC(USA) granted permission, nationally, to begin ordaining openly gay and lesbian clergy.
Since 1980, the More Light Churches Network has served many congregations and individuals within American Presbyterianism who promote the full participation of all people in the PC(USA) regardless of sexual orientation or gender identity. The Covenant Network of Presbyterians was formed in 1997 to support repeal of "Amendment B" and to encourage networking amongst like-minded clergy and congregations. Other organizations of Presbyterians, such as the Confessing Movement and the Alliance of Confessing Evangelicals, have organized on the other side of the issue to support the fidelity/chastity standard for ordination, which was removed in 2011.
The Presbyterian Church (USA) voted to allow same-gender marriages on June 19, 2014, during its 221st General Assembly, making it one of the largest Christian denominations in the world to allow same-sex unions. This vote lifted a previous ban and allows pastors to perform marriages in jurisdictions where they are legal. Additionally, the Assembly approved an amendment to the Book of Order changing the definition of marriage from "between a man and a woman" to "between two people, traditionally between a man and a woman".
The 2006 "Report of the Theological Task Force on Peace, Unity, and Purity of the Church", in theory, attempted to find common ground. Some felt that the adoption of this report provided for a clear local option mentioned, while the Stated Clerk of the General Assembly, Clifton Kirkpatrick went on record as saying, "Our standards have not changed. The rules of the Book of Order stay in force and all ordinations are still subject to review by higher governing bodies." The authors of the report stated that it is a compromise and return to the original Presbyterian culture of local controls. The recommendation for more control by local presbyteries and sessions is viewed by its opposition as a method for bypassing the constitutional restrictions currently in place concerning ordination and marriage, effectively making the constitutional "standard" entirely subjective.
In the General Assembly gathering of June 2006, Presbyterian voting Commissioners passed an "authoritative interpretation", recommended by the Theological Task Force, of the "Book of Order" (the church constitution). Some argued that this gave presbyteries the "local option" of ordaining or not ordaining anyone based on a particular presbytery's reading of the constitutional statute. Others argued that presbyteries have always had this responsibility and that this new ruling did not change but only clarified that responsibility. On June 20, 2006, the General Assembly voted 298 to 221 (or 57% to 43%) to approve such interpretation. In that same session on June 20, the General Assembly also voted 405 to 92 (with 4 abstentions) to uphold the constitutional standard for ordination requiring fidelity in marriage or chastity in singleness.
The General Assembly of 2008 took several actions related to homosexuality. The first action was to adopt a new translation of the Heidelberg Catechism, replacing the 1962 translation and removing the words "homosexual perversions" among other changes. This will require the approval of the 2010 and 2012 General Assemblies as well as the votes of the presbyteries after the 2010 Assembly. The second action was to approve a new Authoritative Interpretation of G-6.0108 of the "Book of Order" allowing the ordaining body to decide whether or not a departure from the standards of belief or practice is sufficient to preclude ordination. Some argue that this creates a "local option" on ordaining homosexual persons. The third action was to replace the text of "Amendment B" with new text: "Those who are called to ordained service in the church, by their assent to the constitutional questions for ordination and installation (W-4.4003), pledge themselves to live lives obedient to Jesus Christ the Head of the Church, striving to follow where he leads through the witness of the Scriptures, and to understand the Scriptures through the instruction of the Confessions. In so doing, they declare their fidelity to the standards of the Church. Each governing body charged with examination for ordination and/or installation (G-14.0240 and G-14.0450) establishes the candidate's sincere efforts to adhere to these standards." This would have removed the "fidelity and chastity" clause. This third action failed to obtain the required approval of a majority of the presbyteries by June 2009. Fourth, a resolution was adopted to affirm the definition of marriage from Scripture and the Confessions as being between a man and a woman.
In July 2010, by a vote of 373 to 323, the General Assembly voted to propose to the presbyteries for ratification a constitutional amendment to remove from the Book of Order section G-6.0106.b. which included this explicit requirement for ordination: "Among these standards is the requirement to live either in fidelity within the covenant of marriage between a man and a woman (W-4.9001), or chastity in singleness." This proposal required ratification by a majority of the 173 presbyteries within 12 months of the General Assembly's adjournment. A majority of presbytery votes was reached in May 2011. The constitutional amendment took effect July 10, 2011. This amendment shifted back to the ordaining body the responsibility for making decisions about whom they shall ordain and what they shall require of their candidates for ordination. It neither prevents nor imposes the use of the so-called "fidelity and chastity" requirement, but it removes that decision from the text of the constitution and places that judgment responsibility back upon the ordaining body where it had traditionally been prior to the insertion of the former G-6.0106.b. in 1997. Each ordaining body, the session for deacon or elder and the presbytery for minister, is now responsible to make its own interpretation of what scripture and the confessions require of ordained officers.
In June 2014, the General Assembly approved a change in the wording of its constitution, redefining marriage from a contract "between a woman and a man" to a contract "between two people, traditionally a man and a woman". The change allowed gay and lesbian weddings within the church and further permitted clergy to perform same-sex weddings. The revision gave clergy the choice of presiding over same-sex marriages, but no member of the clergy was compelled to perform them.
PC(USA)'s book of order includes a "trust clause", which grants ownership of church property to the presbytery. Under this trust clause, the presbytery may assert a claim to the property of the congregation in the event of a congregational split, dissolution (closing), or disassociation from the PC(USA). This clause does not prevent particular churches from leaving the denomination, but if they do, they may not be entitled to any physical assets of that congregation unless by agreement with the presbytery. Recently this provision has been vigorously tested in courts of law.
In June 2004, the General Assembly met in Richmond, Virginia, and adopted by a vote of 431–62 a resolution that called on the church's committee on Mission Responsibility through Investment (MRTI) "to initiate a process of phased, selective divestment in multinational corporations operating in Israel". The resolution also said "the occupation ... has proven to be at the root of evil acts committed against innocent people on both sides of the conflict". The church statement at the time noted that "divestment is one of the strategies that U.S. churches used in the 1970s and 80s in a successful campaign to end apartheid in South Africa".
A second resolution, calling for an end to the construction of a wall by the state of Israel, passed. The resolution opposed the construction of the Israeli West Bank barrier, regardless of its location, and opposed the United States government making monetary contributions to its construction. The General Assembly also adopted policies rejecting Christian Zionism and allowing the continued funding of conversionary activities aimed at Jews. Together, the resolutions caused tremendous dissent within the church and a sharp disconnect with the Jewish community. Leaders of several American Jewish groups communicated to the church their concerns about the use of economic leverage applied specifically to companies operating in Israel. Some critics of the divestment policy accused church leaders of anti-Semitism.
In June 2006, after the General Assembly in Birmingham, Alabama changed the policy, both pro-Israel and pro-Palestinian groups praised the resolution. Pro-Israel groups, who had written General Assembly commissioners to express their concerns about a corporate engagement/divestment strategy focused on Israel, praised the new resolution, saying that it reflected the church stepping back from a policy that singled out companies working in Israel. Pro-Palestinian groups said that the church maintained the opportunity to engage and potentially divest from companies that support the Israeli occupation, because such support would be considered inappropriate according to the customary MRTI process.
In August 2011, the American National Middle Eastern Presbyterian Caucus (NMEPC) endorsed the boycott, divestment, and sanctions (BDS) campaign against Israel.
In January 2014, the PC(USA) published "Zionism Unsettled", which was commended as "a valuable opportunity to explore the political ideology of Zionism". One critic claimed it was anti-Zionist and characterised the Israeli–Palestinian conflict as one fueled by a "pathology inherent in Zionism". The Simon Wiesenthal Center described the study guide as "a hit-piece outside all norms of interfaith dialogue. It is a compendium of distortions, ignorance and outright lies – that tragically has emanated too often from elites within this church". The PC(USA) subsequently withdrew the publication from sale on its website.
On June 20, 2014 the General Assembly in Detroit approved a measure (310–303) calling for divestment from stock in Caterpillar, Hewlett-Packard and Motorola Solutions in protest of Israeli policies on the West Bank. The vote was immediately and sharply criticized by the American Jewish Committee which accused the General Assembly of acting out of anti-Semitic motives. Proponents of the measure strongly denied the accusations. | https://en.wikipedia.org/wiki?curid=25031 |
PackBits
PackBits is a fast, simple lossless compression scheme for run-length encoding of data.
Apple introduced the PackBits format with the release of MacPaint on the Macintosh computer. This compression scheme is one of the types of compression that can be used in TIFF files. TGA files also use this RLE compression scheme, but treat the data stream as pixels instead of bytes.
A PackBits data stream consists of packets with a one-byte header followed by data. The header is a signed byte; the data can be signed, unsigned, or packed (such as MacPaint pixels).
In the following table, "n" is the value of the header byte as a signed integer:

 0 to 127    (1 + "n") literal bytes of data follow the header
 -1 to -127  the next byte of data is repeated (1 - "n") times in the decoded output
 -128        no operation; the next byte is treated as another header byte
Note that interpreting 0 as positive or negative makes no difference in the output. Runs of two bytes adjacent to non-runs are typically written as literal data. There is no way based on the PackBits data to determine the end of the data stream; that is to say, one must already know the size of the compressed or uncompressed data before reading a PackBits data stream to know where it ends.
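The packing rules work in reverse as well. The following JavaScript sketch encodes an array of byte values into a PackBits stream, treating runs of three or more identical bytes as repeat packets and folding everything else (including two-byte runs adjacent to non-runs) into literal packets. This is an illustrative implementation of the scheme described above, not Apple's reference code:

```javascript
// Minimal PackBits encoder: takes an array of byte values (0-255),
// returns the packed stream as an array of byte values.
function packBits (bytes) {
  var out = [], i = 0;
  while (i < bytes.length) {
    // measure the run of identical bytes starting at i (cap at 128)
    var run = 1;
    while (run < 128 && i + run < bytes.length && bytes[i + run] === bytes[i]) {
      run++;
    }
    if (run >= 3) {
      out.push((257 - run) & 0xFF); // header 1 - run, stored as an unsigned byte
      out.push(bytes[i]);
      i += run;
    } else {
      // collect literal bytes until a run of 3+ begins (or 128 bytes are taken)
      var start = i;
      while (i < bytes.length && i - start < 128) {
        if (i + 2 < bytes.length && bytes[i] === bytes[i + 1] && bytes[i] === bytes[i + 2]) {
          break;
        }
        i++;
      }
      out.push(i - start - 1);        // header n means n + 1 literal bytes follow
      out = out.concat(bytes.slice(start, i));
    }
  }
  return out;
}
```

Running this encoder on the 24-byte expansion of Apple's example below reproduces the packed stream FE AA 02 80 00 2A FD AA 03 80 00 2A 22 F7 AA exactly.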
Apple Computer (see the external link) provides this short example of packed data:
FE AA 02 80 00 2A FD AA 03 80 00 2A 22 F7 AA

This stream unpacks to the following 24 bytes:

AA AA AA 80 00 2A AA AA AA AA 80 00 2A 22 AA AA AA AA AA AA AA AA AA AA
The following code, written in Microsoft VBA, unpacks the data:

Sub UnpackBitsDemo()
    Dim bytes() As String
    Dim result As String
    Dim i As Long, j As Long, n As Long

    bytes = Split("FE AA 02 80 00 2A FD AA 03 80 00 2A 22 F7 AA", " ")
    i = 0
    Do While i <= UBound(bytes)
        n = CLng("&H" & bytes(i))
        If n > 127 Then n = n - 256         ' header is a signed byte
        If n >= 0 Then
            For j = 1 To n + 1              ' copy the next n + 1 bytes literally
                result = result & bytes(i + j) & " "
            Next j
            i = i + n + 2
        ElseIf n <> -128 Then
            For j = 1 To 1 - n              ' repeat the next byte 1 - n times
                result = result & bytes(i + 1) & " "
            Next j
            i = i + 2
        Else
            i = i + 1                       ' -128: no operation
        End If
    Loop
    Debug.Print Trim$(result)               ' prints the 24 unpacked bytes
End Sub
The same implementation in JS:
/**
 * PackBits decoder. Data is handled as strings of raw byte characters;
 * str2hex and hex2str convert to and from 'FE AA ...' notation.
 */
function str2hex (str) {
    return str.split('').map(function (c) {
        return ('0' + c.charCodeAt(0).toString(16).toUpperCase()).slice(-2);
    }).join(' ');
}
function hex2str (hex) {
    return hex.split(' ').map(function (h) {
        return String.fromCharCode(parseInt(h, 16));
    }).join('');
}
function unpackBits (data) {
    var out = '', i = 0, n;
    while (i < data.length) {
        n = data.charCodeAt(i);
        if (n > 127) { n -= 256; }  // header is a signed byte
        if (n >= 0) {               // 0..127: copy the next n + 1 bytes literally
            out += data.slice(i + 1, i + 2 + n);
            i += n + 2;
        } else if (n !== -128) {    // -1..-127: repeat the next byte 1 - n times
            out += new Array(2 - n).join(data.charAt(i + 1));
            i += 2;
        } else {
            i += 1;                 // -128: no operation
        }
    }
    return out;
}
var original = 'FE AA 02 80 00 2A FD AA 03 80 00 2A 22 F7 AA',
    data = unpackBits(hex2str(original));
// Output is: AA AA AA 80 00 2A AA AA AA AA 80 00 2A 22 AA AA AA AA AA AA AA AA AA AA
console.log(str2hex(data)); | https://en.wikipedia.org/wiki?curid=25034 |
Pub rock (Australia)
Pub rock is a style of Australian rock and roll popular throughout the 1970s and 1980s, and still influencing contemporary Australian music in the 2000s decade. The term came from the venues where most of these bands originally played — inner-city and suburban pubs. These often noisy, hot, small and crowded rooms were not always ideal music venues, and they favoured loud, simple songs based on drums and electric guitar riffs.
The Australian version of pub rock incorporates hard rock, blues rock, and/or progressive rock. In the "Encyclopedia of Australian Rock and Pop" (1999), Australian musicologist Ian McFarlane described how, in the early 1970s, Billy Thorpe & The Aztecs, Blackfeather, and Buffalo pioneered Australia's pub rock movement. Australian rock music journalist Ed Nimmervoll declared that "[t]he seeds for Australian heavy rock can be traced back to two important sources, Billy Thorpe's Seventies Aztecs and Sydney band Buffalo".
The emergence of the Australian version of the pub rock genre and the related pub circuit was the result of several interconnected factors. From the 1950s to the 1970s, mainly because of restrictive state liquor licensing laws, only a small proportion of live pop and rock music in Australia was performed on licensed premises (mostly private clubs or discotheques); the majority of concerts were held in non-licensed venues like community, church or municipal halls. These concerts and dances were 'all-ages' events—often with adult supervision—and alcohol was not served.
During the 1960s, however, Australian states began liberalising their licensing laws. Sunday Observance Acts were repealed, pub opening hours were extended, discriminatory regulations — such as the long-standing ban on women entering or drinking in public bars — were removed, and in the 1970s the age of legal majority was lowered from 21 to 18. Concurrently, the members of the so-called "Baby Boomer" generation — who were the main audience for pop and rock music — were reaching their late teens and early twenties, and were thus able to enter such licensed premises. Pub owners soon realised that providing live music (which was often free) would draw young people to pubs in large numbers, and regular rock performances soon became a fixture at many pubs.
In the early 1970s Billy Thorpe & The Aztecs, Blackfeather, and Buffalo pioneered Australia's pub rock movement. In March 1970 Billy Thorpe & The Aztecs consisted of Thorpe on lead vocals and guitar, Jimmy Thompson on drums, Paul Wheeler on bass guitar and Lobby Loyde (ex-Purple Hearts, Wild Cherries) on lead guitar. They released a cover version of Willie Dixon's "Good Mornin' Little School Girl". They had developed a heavier sound, and in July that year, after Warren 'Pig' Morgan (piano, backing vocals) had joined, the band recorded "The Hoax Is Over", which was released in January 1971. Thorpe described their sound: "[It was] like we were standing on a pair of Boeing 747 engines. It cracked the foundations and broke windows in neighbouring buildings".
By early 1971 Blackfeather consisted of Neale Johns on lead vocals, John Robinson on lead guitar (ex-Lonely Ones, Monday's Children, Dave Miller Set), Robert Fortesque on bass guitar and Alexander Kash on drums. Their debut album, "At the Mountains of Madness", appeared in April 1971. In May they had a hit with "Seasons of Change", which peaked at No. 15 on the "Go-Set" National Top 40 Singles Chart. Buffalo formed in August 1971 by Dave Tice on co-lead vocals (ex-Head) with Paul Balbi on drums, John Baxter on guitar, and Peter Wells on bass guitar. Their debut album, "Dead Forever...", appeared in June the following year. According to Australian rock music journalist, Ed Nimmervoll, "The seeds for Australian heavy rock can be traced back to two important sources, Billy Thorpe's Seventies Aztecs and Sydney band Buffalo".
Many city and suburban pubs gained renown for their support of live music, and many prominent Australian bands — including AC/DC, Cold Chisel, The Angels and The Dingoes — developed their style at these venues in the early days of their careers. Australian musicologist, Ian McFarlane, described how AC/DC took "the raw energy of Aussie pub rock, extend its basic guidelines, serve it up to a teenybop "Countdown" audience and still reap the benefits of the live circuit by packing out the pubs". He found that Cold Chisel "fused a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook". He noted The Angels had "a profound effect on the Australian live music scene of the late 1970s/early 1980s. [They] helped redefine the Australian pub rock tradition ... [their] brand of no-frills, hard-driving boogie rock attracted pub goers in unprecedented numbers". The Dingoes provided a "spirited combination of R&B, country and red-hot rock'n'roll was imbued with a delightful sense of time and place" according to McFarlane.
Notable pub rock venues include the Largs Pier Hotel and the Governor Hindmarsh Hotel in Adelaide; the Royal Antler Hotel in Narrabeen and the Civic Hotel in Sydney's city centre; the Star Hotel in Newcastle, New South Wales; the Station Hotel in Prahran, Melbourne, which was one of the premier pub-rock venues in Australia for more than two decades; and Poyntons Carlton Club Hotel in Carlton, Melbourne's first Sunday-night live pub rock venue.
As the pub rock phenomenon expanded, hundreds of hotels in capital cities and major towns began providing regular live music, and a thriving circuit evolved, enabling bands to tour up and down the eastern and southern coast of Australia from North Queensland to South Australia.
It could be argued that the very venues many of the bands played in (pubs) had a major influence on the evolution of their music and sound. The venues were more often than not small and the crowds — alcohol-fuelled — were there for the experience rather than to see a "name band". Thus, an emphasis on simple, rhythm-based songs grew. With the sound in many of the rooms far from ideal for live music, an emphasis on a very loud snare and kick-drum and driving bass-guitar grew. Guitarists tended to rely on simple, repetitive riffs, rather than more complex solos or counter-melodies. This might explain why, even in studios and larger arenas and stadiums, many of the bands who originated in pubs relied on an exaggerated drum sound and fairly simple musical arrangements.
A band like Hunters & Collectors, for example, saw their sound harden from its arty origins (which included a brass section, experimental percussion and complex arrangements) to a more straightforward rock sound with emphasis on drums, bass and simple guitar riffs; a sound better suited to the beer barns they were to play in over their extensive touring career.
Though Australia has a relatively small population, the proportionally high number of venues that bands could play in, mainly along the Eastern coast, meant that a band could tour extensively, often playing every night for long periods. This would allow bands such as AC/DC, Cold Chisel, INXS, Midnight Oil, Rose Tattoo and others to take their live skills into large venues in the US and Europe with ease. | https://en.wikipedia.org/wiki?curid=25036 |
Phonation
The term phonation has slightly different meanings depending on the subfield of phonetics. Among some phoneticians, "phonation" is the process by which the vocal folds produce certain sounds through quasi-periodic vibration. This is the definition used among those who study laryngeal anatomy and physiology and speech production in general. Phoneticians in other subfields, such as linguistic phonetics, call this process "voicing", and use the term "phonation" to refer to any oscillatory state of any part of the larynx that modifies the airstream, of which voicing is just one example. Voiceless and supra-glottal phonations are included under this definition.
The phonatory process, or voicing, occurs when air is expelled from the lungs through the glottis, creating a pressure drop across the larynx. When this drop becomes sufficiently large, the vocal folds start to oscillate. The minimum pressure drop required to achieve phonation is called the phonation threshold pressure (PTP), and for humans with normal vocal folds, it is approximately 2–3 cm H2O. The motion of the vocal folds during oscillation is mostly lateral, though there is also some superior component. However, there is almost no motion along the length of the vocal folds. The oscillation of the vocal folds serves to modulate the pressure and flow of the air through the larynx, and this modulated airflow is the main component of the sound of most voiced phones.
The sound that the larynx produces is a harmonic series. In other words, it consists of a fundamental tone (called the fundamental frequency, the main acoustic cue for the percept pitch) accompanied by harmonic overtones, which are multiples of the fundamental frequency. According to the source–filter theory, the resulting sound excites the resonance chamber that is the vocal tract to produce the individual speech sounds.
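As a small numeric illustration of the harmonic series just described, the overtone frequencies are simply integer multiples of the fundamental. The 120 Hz value below is an assumed, typical adult male fundamental chosen for the example, not a figure from this article:

```javascript
// Overtones of a voiced source lie at integer multiples of the fundamental
// frequency f0; this returns the first `count` partials (including f0 itself).
function harmonics (f0, count) {
  var out = [];
  for (var k = 1; k <= count; k++) {
    out.push(k * f0);
  }
  return out;
}

// First five partials of an assumed 120 Hz source: 120, 240, 360, 480, 600 Hz
console.log(harmonics(120, 5).join(' '));
```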
The vocal folds will not oscillate if they are not sufficiently close to one another, are not under sufficient tension or under too much tension, or if the pressure drop across the larynx is not sufficiently large. In linguistics, a phone is called voiceless if there is no phonation during its occurrence. In speech, voiceless phones are associated with vocal folds that are elongated, highly tensed, and placed laterally (abducted) when compared to vocal folds during phonation.
Fundamental frequency, the main acoustic cue for the percept "pitch", can be varied through a variety of means. Large scale changes are accomplished by increasing the tension in the vocal folds through contraction of the cricothyroid muscle. Smaller changes in tension can be effected by contraction of the thyroarytenoid muscle or changes in the relative position of the thyroid and cricoid cartilages, as may occur when the larynx is lowered or raised, either volitionally or through movement of the tongue to which the larynx is attached via the hyoid bone. In addition to tension changes, fundamental frequency is also affected by the pressure drop across the larynx, which is mostly affected by the pressure in the lungs, and will also vary with the distance between the vocal folds. Variation in fundamental frequency is used linguistically to produce intonation and tone.
There are currently two main theories as to how vibration of the vocal folds is initiated: the myoelastic theory and the aerodynamic theory. These two theories are not in contention with one another and it is quite possible that both theories are true and operating simultaneously to initiate and maintain vibration. A third theory, the neurochronaxic theory, was in considerable vogue in the 1950s, but has since been largely discredited.
The myoelastic theory states that when the vocal cords are brought together and breath pressure is applied to them, the cords remain closed until the pressure beneath them, the subglottic pressure, is sufficient to push them apart, allowing air to escape and reducing the pressure enough for the muscle tension recoil to pull the folds back together again. The pressure builds up once again until the cords are pushed apart, and the whole cycle keeps repeating itself. The rate at which the cords open and close, the number of cycles per second, determines the pitch of the phonation.
The aerodynamic theory is based on the Bernoulli energy law in fluids. The theory states that when a stream of breath is flowing through the glottis while the arytenoid cartilages are held together (by the action of the interarytenoid muscles), a push-pull effect is created on the vocal fold tissues that maintains self-sustained oscillation. The push occurs during glottal opening, when the glottis is convergent, and the pull occurs during glottal closing, when the glottis is divergent. Such an effect causes a transfer of energy from the airflow to the vocal fold tissues which overcomes losses by dissipation and sustains the oscillation. The amount of lung pressure needed to begin phonation is defined by Titze as the oscillation threshold pressure. During glottal closure, the air flow is cut off until breath pressure pushes the folds apart and the flow starts up again, causing the cycles to repeat.
The textbook entitled Myoelastic Aerodynamic Theory of Phonation by Ingo Titze credits Janwillem van den Berg as the originator of the theory and provides detailed mathematical development of the theory.
This theory states that the frequency of the vocal fold vibration is determined by the chronaxie of the recurrent nerve, and not by breath pressure or muscular tension. Advocates of this theory thought that every single vibration of the vocal folds was due to an impulse from the recurrent laryngeal nerves and that the acoustic center in the brain regulated the speed of vocal fold vibration. Speech and voice scientists have long since abandoned this theory, as the muscles have been shown to be unable to contract fast enough to accomplish the vibration. In addition, persons with paralyzed vocal folds can produce phonation, which would not be possible according to this theory. Phonation occurring in excised larynges would also not be possible according to this theory.
In linguistic phonetic treatments of phonation, such as those of Peter Ladefoged, phonation was considered to be a matter of points on a continuum of tension and closure of the vocal cords. More intricate mechanisms were occasionally described, but they were difficult to investigate, and until recently the state of the glottis and phonation were considered to be nearly synonymous.
If the vocal cords are completely relaxed, with the arytenoid cartilages apart for maximum airflow, the cords do not vibrate. This is voiceless phonation, and is extremely common with obstruents. If the arytenoids are pressed together for glottal closure, the vocal cords block the airstream, producing stop sounds such as the glottal stop. In between there is a sweet spot of maximum vibration. Also, the existence of an optimal glottal shape for ease of phonation has been shown, at which the lung pressure required to initiate the vocal cord vibration is minimum. This is modal voice, and is the normal state for vowels and sonorants in all the world's languages. However, the aperture of the arytenoid cartilages, and therefore the tension in the vocal cords, is one of degree between the end points of open and closed, and there are several intermediate situations utilized by various languages to make contrasting sounds.
For example, Gujarati has vowels with a partially lax phonation called breathy voice or murmured voice (transcribed in IPA with a subscript umlaut ), while Burmese has vowels with a partially tense phonation called creaky voice or laryngealized voice (transcribed in IPA with a subscript tilde ). The Jalapa dialect of Mazatec is unusual in contrasting both with modal voice in a three-way distinction. (Note that Mazatec is a tonal language, so the glottis is making several tonal distinctions simultaneously with the phonation distinctions.)
Javanese does not have modal voice in its stops, but contrasts two other points along the phonation scale, with more moderate departures from modal voice, called slack voice and stiff voice. The "muddy" consonants in Shanghainese are slack voice; they contrast with tenuis and aspirated consonants.
Although each language may be somewhat different, it is convenient to classify these degrees of phonation into discrete categories. A series of seven alveolar stops, with phonations ranging from an open/lax to a closed/tense glottis, are:
The IPA diacritics "under-ring" and "subscript wedge", commonly called "voiceless" and "voiced", are sometimes added to the symbol for a voiced sound to indicate more lax/open (slack) and tense/closed (stiff) states of the glottis, respectively. (Ironically, adding the 'voicing' diacritic to the symbol for a voiced consonant indicates "less" modal voicing, not more, because a modally voiced sound is already fully voiced, at its sweet spot, and any further tension in the vocal cords dampens their vibration.)
Alsatian, like several Germanic languages, has a typologically unusual phonation in its stops. The consonants transcribed (ambiguously called "lenis") are partially voiced: The vocal cords are positioned as for voicing, but do not actually vibrate. That is, they are technically voiceless, but without the open glottis usually associated with voiceless stops. They contrast with both modally voiced and modally voiceless in French borrowings, as well as aspirated word initially.
If the arytenoid cartilages are parted to admit turbulent airflow, the result is whisper phonation if the vocal folds are adducted, and whispery voice phonation (murmur) if the vocal folds vibrate modally. Whisper phonation is heard in many productions of French "oui!", and the "voiceless" vowels of many North American languages are actually whispered.
It has long been noted that in many languages, both phonologically and historically, the glottal consonants do not behave like other consonants. Phonetically, they have no manner or place of articulation other than the state of the glottis: "glottal closure" for [ʔ], "breathy voice" for [ɦ], and "open airstream" for [h]. Some phoneticians have described these sounds as neither glottal nor consonantal, but instead as instances of pure phonation, at least in many European languages. However, in Semitic languages they do appear to be true glottal consonants.
In the last few decades it has become apparent that phonation may involve the entire larynx, with as many as six valves and muscles working either independently or together. From the glottis upward, these articulations are:
Until the development of fiber-optic laryngoscopy, the full involvement of the larynx during speech production was not observable, and the interactions among the six laryngeal articulators is still poorly understood. However, at least two supra-glottal phonations appear to be widespread in the world's languages. These are harsh voice ('ventricular' or 'pressed' voice), which involves overall constriction of the larynx, and faucalized voice ('hollow' or 'yawny' voice), which involves overall expansion of the larynx.
The Bor dialect of Dinka has contrastive modal, breathy, faucalized, and harsh voice in its vowels, as well as three tones. The "ad hoc" diacritics employed in the literature are a subscript double quotation mark for faucalized voice, , and underlining for harsh voice, . Examples are,
Other languages with these contrasts are Bai (modal, breathy, and harsh voice), Kabiye (faucalized and harsh voice, previously seen as ±ATR), Somali (breathy and harsh voice).
Elements of laryngeal articulation or phonation may occur widely in the world's languages as phonetic detail even when not phonemically contrastive. For example, simultaneous glottal, ventricular, and arytenoid activity (for something other than epiglottal consonants) has been observed in Tibetan, Korean, Nuuchahnulth, Nlaka'pamux, Thai, Sui, Amis, Pame, Arabic, Tigrinya, Cantonese, and Yi.
In languages such as French, all obstruents occur in pairs, one modally voiced and one voiceless: [b] [d] [g] [v] [z] [ʒ] → [p] [t] [k] [f] [s] [ʃ].
In English, every voiced fricative corresponds to a voiceless one. For the pairs of English stops, however, the distinction is better specified as voice onset time rather than simply voice: In initial position, /b d g/ are only partially voiced (voicing begins during the hold of the consonant), and /p t k/ are aspirated (voicing begins only well after its release). Certain English morphemes have voiced and voiceless allomorphs, such as: the plural, verbal, and possessive endings spelled "-s" (voiced in "kids" but voiceless in "kits" ), and the past-tense ending spelled "-ed" (voiced in "buzzed" but voiceless in "fished" ).
A few European languages, such as Finnish, have no phonemically voiced obstruents but pairs of long and short consonants instead. Outside Europe, the lack of voicing distinctions is common; indeed, in Australian languages it is nearly universal. In languages without the distinction between voiceless and voiced obstruents, they are realized as voiced in voiced environments, such as between vowels, and voiceless elsewhere.
In phonology, a register is a combination of tone and vowel phonation into a single phonological parameter. For example, among its vowels, Burmese combines modal voice with low tone, breathy voice with falling tone, creaky voice with high tone, and glottal closure with high tone. These four registers contrast with each other, but no other combination of phonation (modal, breath, creak, closed) and tone (high, low, falling) is found.
Among vocal pedagogues and speech pathologists, a vocal register also refers to a particular phonation limited to a particular range of pitch, which possesses a characteristic sound quality. The term "register" may be used for several distinct aspects of the human voice:
Four combinations of these elements are identified in speech pathology: the vocal fry register, the modal register, the falsetto register, and the whistle register. | https://en.wikipedia.org/wiki?curid=25037 |
Principal ideal domain
In mathematics, a principal ideal domain, or PID, is an integral domain in which every ideal is principal, i.e., can be generated by a single element. More generally, a principal ideal ring is a nonzero commutative ring whose ideals are principal, although some authors (e.g., Bourbaki) refer to PIDs as principal rings. The distinction is that a principal ideal ring may have zero divisors whereas a principal ideal domain cannot.
Principal ideal domains are thus mathematical objects that behave somewhat like the integers, with respect to divisibility: any element of a PID has a unique decomposition into prime elements (so an analogue of the fundamental theorem of arithmetic holds); any two elements of a PID have a greatest common divisor (although it may not be possible to find it using the Euclidean algorithm). If "x" and "y" are elements of a PID without common divisors, then every element of the PID can be written in the form "ax" + "by".
Principal ideal domains are noetherian, they are integrally closed, they are unique factorization domains and Dedekind domains. All Euclidean domains and all fields are principal ideal domains.
Principal ideal domains appear in the following chain of class inclusions: commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields.
Examples include: any field; Z, the ring of integers; "K"["x"], the ring of polynomials in one variable with coefficients in a field "K"; Z["i"], the ring of Gaussian integers; and Z["ω"], the ring of Eisenstein integers.
Examples of integral domains that are not PIDs: Z["x"], the ring of polynomials in one variable with integer coefficients, where the ideal (2, "x") is not principal; "K"["x", "y"], the ring of polynomials in two variables, where the ideal ("x", "y") is not principal; and Z[√−5], which is not even a unique factorization domain.
The key result is the structure theorem: If "R" is a principal ideal domain, and "M" is a finitely generated "R"-module, then "M" is a direct sum of cyclic modules, i.e., modules with one generator. The cyclic modules are isomorphic to "R"/"xR" for some "x" in "R" (notice that "x" may be equal to 0, in which case "R"/"xR" is "R").
If "M" is a free module over a principal ideal domain "R", then every submodule of "M" is again free. This does not hold for modules over arbitrary rings, as the example of the ideal (2, "x") ⊆ Z["x"], viewed as a module over Z["x"], shows: it is a submodule of a free module but is not itself free.
In a principal ideal domain, any two elements "a" and "b" have a greatest common divisor, which may be obtained as a generator of the ideal ("a", "b").
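In the ring of integers Z, the most familiar PID, the extended Euclidean algorithm makes this concrete: it computes gcd("a", "b") together with Bézout coefficients expressing the gcd as a linear combination of "a" and "b", exhibiting it as a generator of the ideal ("a", "b"). The following JavaScript sketch is illustrative only and applies to Z, not to an arbitrary PID, where a gcd exists but need not be computable this way:

```javascript
// Extended Euclidean algorithm in Z: returns [g, x, y] with
// g = gcd(a, b) = a*x + b*y, so the ideal (a, b) equals the principal ideal (g).
function extendedGcd (a, b) {
  var old_r = a, r = b;
  var old_x = 1, x = 0;
  var old_y = 0, y = 1;
  while (r !== 0) {
    var q = Math.floor(old_r / r), t;
    t = r; r = old_r - q * r; old_r = t;   // remainder sequence
    t = x; x = old_x - q * x; old_x = t;   // coefficient of a
    t = y; y = old_y - q * y; old_y = t;   // coefficient of b
  }
  return [old_r, old_x, old_y];
}

// 6 = 12*(-1) + 18*1, so (12, 18) = (6) in Z
console.log(extendedGcd(12, 18));
```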
All Euclidean domains are principal ideal domains, but the converse is not true.
An example of a principal ideal domain that is not a Euclidean domain is the ring Z[(1 + √−19)/2]. In this domain no "q" and "r" exist, with 0 ≤ |"r"| < 4, so that (1 + √−19) = (4)"q" + "r", despite 1 + √−19 and 4 having a greatest common divisor of 2.
Every principal ideal domain is a unique factorization domain (UFD). The converse does not hold since for any UFD "K", the ring "K"["x", "y"] of polynomials in 2 variables is a UFD but is not a PID. (To prove this look at the ideal generated by "x" and "y". It is not the whole ring since it contains no polynomials of degree 0, but it cannot be generated by any single element.)
The previous three statements give the definition of a Dedekind domain, and hence every principal ideal domain is a Dedekind domain.
Let "A" be an integral domain. Then the following are equivalent.
1. "A" is a PID.
2. Every prime ideal of "A" is principal.
3. "A" is a Dedekind domain that is a UFD.
4. Every finitely generated ideal of "A" is principal (i.e., "A" is a Bézout domain) and "A" satisfies the ascending chain condition on principal ideals.
5. "A" admits a Dedekind–Hasse norm.
A field norm is a Dedekind-Hasse norm; thus, (5) shows that a Euclidean domain is a PID. (4) compares to:
An integral domain is a Bézout domain if and only if any two elements in it have a gcd "that is a linear combination of the two." A Bézout domain is thus a GCD domain, and (4) gives yet another proof that a PID is a UFD. | https://en.wikipedia.org/wiki?curid=25039 |
Pioneer program
The Pioneer programs were two series of United States space probes for lunar and planetary exploration. The first program, which ran from 1958 to 1960, unsuccessfully attempted to send spacecraft to orbit the Moon, successfully sent one spacecraft to fly by the Moon, and successfully sent one spacecraft to investigate interplanetary space between the orbits of Earth and Venus. The second program, which ran from 1965 to 1992, sent four spacecraft to measure interplanetary space weather, two to explore Jupiter and Saturn, and two to explore Venus. The two outer planet probes, Pioneer 10 and Pioneer 11, became the first of five artificial objects to achieve the escape velocity that will allow them to leave the Solar System, and carried a golden plaque depicting a man and a woman and information about the origin and the creators of the probes, in case any extraterrestrials find them someday.
Credit for naming the first probe has been attributed to Stephen A. Saliga, who had been assigned to the Air Force Orientation Group, Wright-Patterson AFB, as chief designer of Air Force exhibits. At a briefing, the spacecraft was described to him as a "lunar-orbiting vehicle, with an infrared scanning device." Saliga thought the title too long and lacking a theme for an exhibit design. He suggested "Pioneer" as the name of the probe, since "the Army had already launched and orbited the Explorer satellite, and their Public Information Office was identifying the Army, as, 'Pioneers in Space,'" and, by adopting the name, the Air Force would "make a 'quantum jump' as to who, really, [were] the 'Pioneers' in space.'"
The earliest missions were attempts to achieve Earth's escape velocity, simply to show it was feasible and to study the Moon. They included the first launch by NASA, which had been formed from the old NACA. These missions were carried out by the Air Force Ballistic Missile Division, the Army, and NASA.
Five years after the early Able space probe missions ended, NASA Ames Research Center used the Pioneer name for a new series of missions, initially aimed at the inner Solar System, before the flyby missions to Jupiter and Saturn. While successful, the missions returned much poorer images than the Voyager program probes would five years later. In 1978, the end of the program saw a return to the inner Solar System, with the Pioneer Venus Orbiter and Multiprobe, this time using orbital insertion rather than flyby missions.
The new missions were numbered beginning with Pioneer 6 (alternate names in parentheses).
The spacecraft in Pioneer missions 6, 7, 8, and 9 comprised a new interplanetary space weather network:
Pioneer 6 and Pioneer 9 are in solar orbits 0.8 AU from the Sun; their orbital periods are therefore slightly shorter than Earth's. Pioneer 7 and Pioneer 8 are in solar orbits 1.1 AU from the Sun; their orbital periods are therefore slightly longer than Earth's. Since the probes' orbital periods differ from Earth's, they periodically face a side of the Sun that cannot be seen from Earth, and can sense parts of the Sun several days before the Sun's rotation reveals them to Earth-based observatories.
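The relation between orbital distance and period stated above follows from Kepler's third law: for a heliocentric orbit, the period in years is the semi-major axis in AU raised to the power 3/2. A back-of-the-envelope check (added here as an illustration, not part of the original text):

```python
# Kepler's third law, T = a**1.5 (T in years, a in AU), applied to the
# quoted Pioneer orbits.

def orbital_period_years(a_au: float) -> float:
    """Period of a heliocentric orbit with semi-major axis a_au (in AU)."""
    return a_au ** 1.5

inner = orbital_period_years(0.8)  # Pioneer 6 and 9
outer = orbital_period_years(1.1)  # Pioneer 7 and 8

# Shorter and longer than Earth's one-year period, as the text states.
assert inner < 1.0 < outer
print(f"0.8 AU: {inner:.3f} yr, 1.1 AU: {outer:.3f} yr")
```

The 0.8 AU probes complete an orbit in roughly 0.7 years and the 1.1 AU probes in roughly 1.15 years, which is why they slowly drift ahead of and behind Earth and eventually view the Sun's far side.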
Lockheed P-38 Lightning
The Lockheed P-38 Lightning is a World War II–era American piston-engined fighter aircraft. Developed for the United States Army Air Corps, the P-38 had distinctive twin booms and a central nacelle containing the cockpit and armament. Allied propaganda claimed it had been nicknamed the fork-tailed devil ("der Gabelschwanz Teufel") by the Luftwaffe and "two planes, one pilot" by the Japanese. Along with its use as a general fighter, the P-38 was utilized in various aerial combat roles including as a highly effective fighter-bomber, a night fighter, and as a long-range escort fighter when equipped with drop tanks. The P-38 was also used as a bomber-pathfinder, guiding streams of medium and heavy bombers, or even other P-38s equipped with bombs, to their targets. Used in the aerial reconnaissance role, the P-38 accounted for 90 percent of the aerial film captured over Europe.
The P-38 was used most successfully in the Pacific Theater of Operations and the China-Burma-India Theater of Operations as the aircraft of America's top aces, Richard Bong (40 victories), Thomas McGuire (38 victories) and Charles H. MacDonald (27 victories). In the South West Pacific theater, the P-38 was the primary long-range fighter of the United States Army Air Forces until the introduction of large numbers of P-51D Mustangs toward the end of the war.
Unusual for a fighter of this time, the exhaust was muffled by the turbo-superchargers, making the P-38's operation relatively quiet. The two turbo-superchargers also provided the P-38 with good high-altitude performance, making it one of the earliest Allied fighters capable of performing at such altitudes. It was extremely forgiving and could be mishandled in many ways, but the rate of roll in the early versions was too low for it to excel as a dogfighter. The P-38 was the only American fighter aircraft in large-scale production throughout American involvement in the war, from Pearl Harbor to Victory over Japan Day. At the end of the war, orders for 1,887 more were cancelled.
Lockheed designed the P-38 in response to a February 1937 specification from the United States Army Air Corps (USAAC). Circular Proposal X-608 was a set of aircraft performance goals authored by First Lieutenants Benjamin S. Kelsey and Gordon P. Saville for a twin-engine, high-altitude "interceptor" having "the tactical mission of interception and attack of hostile aircraft at high altitude." Forty years later, Kelsey explained that he and Saville drew up the specification using the word "interceptor" as a way to bypass the inflexible Army Air Corps requirement for pursuit aircraft to carry no more than of armament including ammunition, and to bypass the USAAC restriction of single-seat aircraft to one engine. Kelsey was looking for a minimum of of armament. Kelsey and Saville aimed to get a more capable fighter, better at dog-fighting and at high-altitude combat. Specifications called for a maximum airspeed of at least at altitude, and a climb to within six minutes, the toughest set of specifications USAAC had ever presented. The unbuilt Vultee XP1015 was designed to the same requirement, but was not advanced enough to merit further investigation. A similar single-engine proposal was issued at the same time, Circular Proposal X-609, in response to which the Bell P-39 Airacobra was designed. Both proposals required liquid-cooled Allison V-1710 engines with turbo-superchargers and gave extra points for tricycle landing gear.
The Lockheed design team, under the direction of Hall Hibbard and Clarence "Kelly" Johnson, considered a range of twin-engine configurations, including both engines in a central fuselage with push–pull propellers.
The eventual configuration was rare in terms of contemporary fighter aircraft design, with the preceding Fokker G.1, the contemporary Focke-Wulf Fw 189 Luftwaffe reconnaissance aircraft, and the later Northrop P-61 Black Widow night fighter having a similar planform, along with a few other unusual aircraft. The Lockheed team chose twin booms to accommodate the tail assembly, engines, and turbo-superchargers, with a central nacelle for the pilot and armament. The XP-38 gondola mockup was designed to mount two .50-caliber (12.7 mm) M2 Browning machine guns with 200 rounds per gun (rpg), two .30-caliber (7.62 mm) Brownings with 500 rpg, and a T1 Army Ordnance 23 mm (.90 in) autocannon with a rotary magazine as a substitute for the non-existent 25 mm Hotchkiss aircraft autocannon specified by Kelsey and Saville. In the YP-38s, a 37 mm (1.46 in) M9 autocannon with 15 rounds replaced the T1. The 15 rounds were in three five-round clips, an unsatisfactory arrangement according to Kelsey, and the M9 did not perform reliably in flight. Further armament experiments from March to June 1941 resulted in the P-38E combat configuration of four M2 Browning machine guns, and one Hispano 20 mm (.79 in) autocannon with 150 rounds.
Clustering all the armament in the nose was unusual in U.S. aircraft, which typically used wing-mounted guns with trajectories set up to crisscross at one or more points in a convergence zone. Nose-mounted guns did not suffer from having their useful ranges limited by pattern convergence, meaning that good pilots could shoot much farther. A Lightning could reliably hit targets at any range up to , whereas the wing guns of other fighters were optimized for a specific range. The rate of fire was about 650 rounds per minute for the 20×110 mm cannon round (130-gram shell) at a muzzle velocity of about , and for the .50-caliber machine guns (43-gram rounds), about 850 rpm. Combined rate of fire was over 4,000 rpm with roughly every sixth projectile a 20 mm shell. The duration of sustained firing for the 20 mm cannon was approximately 14 seconds, while the .50-caliber machine guns worked for 35 seconds if each magazine was fully loaded with 500 rounds, or for 21 seconds if 300 rounds were loaded to save weight for long-distance flying.
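The rate-of-fire and firing-duration figures above are simple magazine-size-over-rate arithmetic. A sketch checking them (an illustration added here, using only the numbers quoted in the text):

```python
# Arithmetic check of the quoted armament figures: 650 rpm for the 20 mm
# cannon (150 rounds), 850 rpm for each of the four .50-cal machine guns.

def firing_seconds(rounds: int, rpm: int) -> float:
    """Seconds of continuous fire for a magazine at the given rate."""
    return rounds / rpm * 60

cannon = firing_seconds(150, 650)    # ~14 s, as stated
mg_full = firing_seconds(500, 850)   # ~35 s with full 500-round magazines
mg_light = firing_seconds(300, 850)  # ~21 s with 300 rounds loaded

combined_rpm = 4 * 850 + 650         # four .50s plus the cannon: 4,050 rpm
cannon_share = combined_rpm / 650    # about every sixth projectile is 20 mm

assert round(cannon) == 14
assert round(mg_full) == 35
assert round(mg_light) == 21
assert combined_rpm > 4000
print(combined_rpm, round(cannon_share))
```

All three durations and the "over 4,000 rpm, roughly every sixth projectile" claims are consistent with the quoted per-gun rates and magazine capacities.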
The Lockheed design incorporated tricycle undercarriage and a bubble canopy, and featured two turbosupercharged 12-cylinder Allison V-1710 engines fitted with counter-rotating propellers to eliminate the effect of engine torque, with the turbochargers positioned behind the engines, the exhaust side of the units exposed along the dorsal surfaces of the booms. Counter-rotation was achieved by the use of "handed" engines: the crankshafts of the engines turned in opposite directions, a relatively easy task for the V-1710 modular-design aircraft powerplant.
The P-38 was the first American fighter to make extensive use of stainless steel and smooth, flush-riveted butt-jointed aluminum skin panels. It was also the first military airplane to fly faster than in level flight.
Lockheed won the competition on 23 June 1937 with its Model 22 and was contracted to build a prototype XP-38 for US$163,000, though Lockheed's own costs on the prototype would add up to US$761,000. Construction began in July 1938, and the XP-38 first flew on 27 January 1939 at the hands of Ben Kelsey.
Kelsey then proposed a speed dash to Wright Field on 11 February 1939 to relocate the aircraft for further testing. General Henry "Hap" Arnold, commander of the USAAC, approved of the record attempt and recommended a cross-country flight to New York. The flight set a speed record by flying from California to New York in seven hours and two minutes, not counting two refueling stops, but the aircraft was downed by carburetor icing short of the Mitchel Field runway in Hempstead, New York, and was wrecked. However, on the basis of the record flight, the Air Corps ordered 13 YP-38s on 27 April 1939 for US$134,284 each. (The "Y" in "YP" was the USAAC's designation for a prototype, while the "X" in "XP" was for experimental.) Lockheed's Chief test pilot Tony LeVier angrily characterized the accident as an unnecessary publicity stunt, but according to Kelsey, the loss of the prototype, rather than hampering the program, sped the process by cutting short the initial test series. The success of the aircraft design contributed to Kelsey's promotion to captain in May 1939.
Manufacture of YP-38s fell behind schedule, at least partly because of the need for mass-production suitability making them substantially different in construction from the prototype. Another factor was the sudden required expansion of Lockheed's facility in Burbank, taking it from a specialized civilian firm dealing with small orders to a large government defense contractor making Venturas, Harpoons, Lodestars, Hudsons, and designing the Constellation for TWA. The first YP-38 was not completed until September 1940, with its maiden flight on 17 September. The 13th and final YP-38 was delivered to the Air Corps in June 1941; 12 aircraft were retained for flight testing and one for destructive stress testing. The YPs were substantially redesigned and differed greatly in detail from the hand-built XP-38. They were lighter and included changes in engine fit. The propeller rotation was reversed, with the blades spinning outward (away from the cockpit) at the top of their arc, rather than inward as before. This improved the aircraft's stability as a gunnery platform.
Test flights revealed problems initially believed to be tail flutter. During high-speed flight approaching Mach 0.68, especially during dives, the aircraft's tail would begin to shake violently and the nose would tuck under (see Mach tuck), steepening the dive. Once caught in this dive, the fighter would enter a high-speed compressibility stall and the controls would lock up, leaving the pilot no option but to bail out (if possible) or remain with the aircraft until it got down to denser air, where he might have a chance to pull out. During a test flight in May 1941, USAAC Major Signa Gilkey managed to stay with a YP-38 in a compressibility lockup, riding it out until he recovered gradually using elevator trim. Lockheed engineers were very concerned at this limitation but first had to concentrate on filling the current order of aircraft. In late June 1941, the Army Air Corps was renamed the U.S. Army Air Forces (USAAF), and a total of 65 Lightnings were finished for the service by September 1941 with more on the way for the USAAF, the Royal Air Force (RAF), and the Free French Air Force operating from England.
By November 1941, many of the initial assembly-line challenges had been met, which freed up time for the engineering team to tackle the problem of frozen controls in a dive. Lockheed had a few ideas for tests that would help them find an answer. The first solution tried was the fitting of spring-loaded servo tabs on the elevator trailing edge designed to aid the pilot when control yoke forces rose over , as would be expected in a high-speed dive. At that point, the tabs would begin to multiply the effort of the pilot's actions. The expert test pilot, Ralph Virden, was given a specific high-altitude test sequence to follow and was told to restrict his speed and fast maneuvering in denser air at low altitudes, since the new mechanism could exert tremendous leverage under those conditions. A note was taped to the instrument panel of the test craft underscoring this instruction. On 4 November 1941, Virden climbed into YP-38 #1 and completed the test sequence successfully, but 15 minutes later was seen in a steep dive followed by a high-G pullout. The tail unit of the aircraft failed at about during the high-speed dive recovery; Virden was killed in the subsequent crash. The Lockheed design office was justifiably upset, but their design engineers could only conclude that servo tabs were "not" the solution for loss of control in a dive. Lockheed still had to find the problem; the Army Air Forces personnel were sure it was flutter and ordered Lockheed to look more closely at the tail.
In 1941 flutter was a familiar engineering problem related to a too-flexible tail, but the P-38's empennage was completely skinned in aluminum rather than fabric and was quite rigid. At no time did the P-38 suffer from true flutter. To prove a point, one elevator and its vertical stabilizers were skinned with metal 63% thicker than standard, but the increase in rigidity made no difference in vibration. Army Lieutenant Colonel Kenneth B. Wolfe (head of Army Production Engineering) asked Lockheed to try external mass balances above and below the elevator, though the P-38 already had large mass balances elegantly placed within each vertical stabilizer. Various configurations of external mass balances were equipped, and dangerously steep test flights were flown to document their performance. Explaining to Wolfe in Report No. 2414, Kelly Johnson wrote "the violence of the vibration was unchanged and the diving tendency was naturally the same for all conditions." The external mass balances did not help at all. Nonetheless, at Wolfe's insistence, the additional external balances were a feature of every P-38 built from then on.
Johnson said in his autobiography that he pleaded with NACA to do model tests in its wind tunnel. They already had experience of models thrashing around violently at speeds approaching those requested and did not want to risk damaging their tunnel. Gen. Arnold, head of Army Air Forces, ordered them to run the tests, which were done up to Mach 0.74. The P-38's dive problem was revealed to be the center of pressure moving back toward the tail when in high-speed airflow. The solution was to change the geometry of the wing's lower surface when diving in order to keep lift within bounds of the top of the wing. In February 1943, quick-acting dive flaps were tried and proven by Lockheed test pilots. The dive flaps were installed outboard of the engine nacelles, and in action they extended downward 35° in 1.5 seconds. The flaps did not act as a speed brake; they affected the pressure distribution in a way that retained the wing's lift.
Late in 1943, a few hundred dive flap field modification kits were assembled to give North African, European and Pacific P-38s a chance to withstand compressibility and expand their combat tactics. Unfortunately, these crucial flaps did not always reach their destination. In March 1944, 200 dive flap kits intended for European Theater of Operations (ETO) P-38Js were destroyed in a mistaken identification incident in which an RAF fighter shot down the Douglas C-54 Skymaster (mistaken for an Fw 200) taking the shipment to England. Back in Burbank, P-38Js coming off the assembly line in spring 1944 were towed out to the ramp and modified in the open air. The flaps were finally incorporated into the production line in June 1944 on the last 210 P-38Js. Despite testing having proved the dive flaps effective in improving tactical maneuvers, a 14-month delay in production limited their implementation, with only the final half of all Lightnings built having the dive flaps installed as an assembly-line sequence.
Johnson later recalled:
Buffeting was another early aerodynamic problem. It was difficult to distinguish from compressibility as both were reported by test pilots as "tail shake". Buffeting came about from airflow disturbances ahead of the tail; the airplane would shake at high speed. Leading edge wing slots were tried, as were combinations of filleting between the wing, cockpit and engine nacelles. Wind tunnel test number 15 solved the buffeting completely, and its fillet solution was fitted to every subsequent P-38 airframe. Fillet kits were sent out to every squadron flying Lightnings. The problem was traced to a 40% increase in air speed at the wing-fuselage junction, where the thickness/chord ratio was highest. An airspeed of at could push airflow at the wing-fuselage junction close to the speed of sound. Filleting solved the buffeting problem for the P-38E and later models.
Another issue with the P-38 arose from its unique design feature of outwardly rotating (at the "tops" of the propeller arcs) counter-rotating propellers. Losing one of two engines in any twin-engine non-centerline thrust aircraft on takeoff creates sudden drag, yawing the nose toward the dead engine and rolling the wingtip down on the side of the dead engine. Normal training in flying twin-engine aircraft when losing an engine on takeoff is to push the remaining engine to full throttle to maintain airspeed; if a pilot did that in the P-38, regardless of which engine had failed, the resulting engine torque and p-factor force produced a sudden uncontrollable yawing roll, and the aircraft would flip over and hit the ground. Eventually, procedures were taught to allow a pilot to deal with the situation by reducing power on the running engine, feathering the prop on the failed engine, and then increasing power gradually until the aircraft was in stable flight. Single-engine takeoffs were possible, though not with a full fuel and ammunition load.
The engines were unusually quiet because the exhausts were muffled by the General Electric turbo-superchargers on the twin Allison V12s. There were early problems with cockpit temperature regulation; pilots were often too hot in the tropical sun as the canopy could not be fully opened without severe buffeting and were often too cold in northern Europe and at high altitude, as the distance of the engines from the cockpit prevented easy heat transfer. Later variants received modifications (such as electrically heated flight suits) to solve these problems.
On 20 September 1939, before the YP-38s had been built and flight tested, the USAAC ordered 66 initial production P-38 Lightnings, 30 of which were delivered to the (re-named) USAAF in mid-1941, but not all these aircraft were armed. The unarmed aircraft were subsequently fitted with four .50 in (12.7 mm) machine guns (instead of the two .50 in/12.7 mm and two .30 in/7.62 mm of their predecessors) and a 37 mm (1.46 in) cannon. They also had armored glass, cockpit armor and fluorescent instrument lighting. One was completed with a pressurized cabin on an experimental basis and designated XP-38A. Due to reports the USAAF was receiving from Europe, the remaining 36 in the batch were upgraded with small improvements such as self-sealing fuel tanks and enhanced armor protection to make them combat-capable. The USAAF specified that these 36 aircraft were to be designated P-38D. As a result, there never were any P-38Bs or P-38Cs. The P-38D's main role was to work out bugs and give the USAAF experience with handling the type.
In March 1940, the French and the British, through the Anglo-French Purchasing Committee, ordered a total of 667 P-38s for US$100M, designated Model 322F for the French and Model 322B for the British. The aircraft would be a variant of the P-38E. The overseas Allies wished for complete commonality of Allison engines with the large numbers of Curtiss P-40 Tomahawks both nations had on order, and thus ordered the Model 322 twin right-handed engines instead of counter-rotating ones and without turbo-superchargers. Performance was supposed to be at . After the fall of France in June 1940, the British took over the entire order and gave the aircraft the service name "Lightning." By June 1941, the War Ministry had cause to reconsider their earlier aircraft specifications based on experience gathered in the Battle of Britain and The Blitz. British displeasure with the Lockheed order came to the fore in July, and on 5 August 1941 they modified the contract such that 143 aircraft would be delivered as previously ordered, to be known as "Lightning (Mark) I," and 524 would be upgraded to US-standard P-38E specifications with a top speed of at guaranteed, to be called "Lightning II" for British service. Later that summer an RAF test pilot reported back from Burbank with a poor assessment of the "tail flutter" situation, and the British cancelled all but three of the 143 Lightning Is. As a loss of approximately US$15M was involved, Lockheed reviewed their contracts and decided to hold the British to the original order. Negotiations grew bitter and stalled. Everything changed after the 7 December 1941 attack on Pearl Harbor after which the United States government seized some 40 of the Model 322s for West Coast defense; subsequently all British Lightnings were delivered to the USAAF starting in January 1942. 
The USAAF lent the RAF three of the aircraft, which were delivered by sea in March 1942 and were test flown no earlier than May at Cunliffe-Owen Aircraft Swaythling, the Aeroplane and Armament Experimental Establishment and the Royal Aircraft Establishment. The A&AEE example was unarmed, lacked turbochargers and restricted to ; though the undercarriage was praised and flight on one engine described as comfortable. These three were subsequently returned to the USAAF; one in December 1942 and the others in July 1943. Of the remaining 140 Lightning Is, 19 were not modified and were designated by the USAAF as RP-322-I ('R' for 'Restricted', because non-counter-rotating propellers were considered more dangerous on takeoff), while 121 were converted to non-turbo-supercharged counter-rotating V-1710F-2 engines and designated P-322-II. All 121 were used as advanced trainers; a few were still serving that role in 1945. A few RP-322s were later used as test modification platforms such as for smoke-laying canisters. The RP-322 was a fairly fast aircraft below and well-behaved as a trainer.
Many of the British order of 524 Lightning IIs were fitted with stronger F-10 Allison engines as they became available, and all were given wing pylons for fuel tanks or bombs. The upgraded aircraft were deployed to the Pacific as USAAF F-5A reconnaissance or P-38G fighter models, the latter used with great effect to shoot down Admiral Yamamoto in April 1943. Robert Petit's G model named "Miss Virginia" was on that mission, borrowed by Rex Barber who was later credited with the kill. Petit had already used "Miss Virginia" to defeat two Nakajima A6M2-N "Rufe" floatplanes in February and to heavily damage a Japanese submarine chaser in March, which he mistakenly claimed as a destroyer sunk. Murray "Jim" Shubin used a less powerful F model he named "Oriole" to down five confirmed and possibly six Zeros over Guadalcanal in June 1943 to become an ace in a day.
One result of the failed British/French order was to give the aircraft its name. Lockheed had originally dubbed the aircraft Atalanta from Greek mythology in the company tradition of naming planes after mythological and celestial figures, but the RAF name won out.
The strategic bombing proponents within the USAAF, called the Bomber Mafia by their ideological opponents, had established in the early 1930s a policy against research to create long-range fighters, which they thought would not be practical; this kind of research was not to compete for bomber resources. Aircraft manufacturers understood that they would not be rewarded if they installed subsystems on their fighters to enable them to carry drop tanks to provide more fuel for extended range. Lieutenant Kelsey, acting against this policy, risked his career in late 1941 when he convinced Lockheed to incorporate such subsystems in the P-38E model, without putting his request in writing. It is possible that Kelsey was responding to Colonel George William Goddard's observation that the US sorely needed a high-speed, long-range photo reconnaissance plane. Along with a change order specifying some P-38Es be produced without guns but with photo reconnaissance cameras, to be designated the F-4-1-LO, Lockheed began working out the problems of drop tank design and incorporation. After the attack on Pearl Harbor, eventually about 100 P-38Es were sent to a modification center near Dallas, Texas, or to the new Lockheed assembly plant B-6 (today the Burbank Airport), to be fitted with four K-17 aerial photography cameras. All of these aircraft were also modified to be able to carry drop tanks. P-38Fs were modified as well. Every Lightning from the P-38G onward was capable of being fitted with drop tanks straight off the assembly line.
In March 1942, General Arnold made an off-hand comment that the US could avoid the German U-boat menace by flying fighters to the UK (rather than packing them onto ships). President Roosevelt pressed the point, emphasizing his interest in the solution. Arnold was likely aware of the flying radius extension work being done on the P-38, which by this time had seen success with small drop tanks in the range of , the difference in capacity being the result of subcontractor production variation. Arnold ordered further tests with larger drop tanks in the range of ; the results were reported by Kelsey as providing the P-38 with a ferrying range. Because of available supply, the smaller drop tanks were used to fly Lightnings to the UK, the plan called Operation Bolero.
Led by two Boeing B-17 Flying Fortresses, the first seven P-38s, each carrying two small drop tanks, left Presque Isle Army Air Field on 23 June 1942 for RAF Heathfield in Scotland. Their first refueling stop was made in far northeast Canada at Goose Bay. The second stop was a rough airstrip in Greenland called Bluie West One, and the third refueling stop was in Iceland at Keflavik. Other P-38s followed this route with some lost in mishaps, usually due to poor weather, low visibility, radio difficulties and navigational errors. Nearly 200 of the P-38Fs (and a few modified Es) were successfully flown across the Atlantic in July–August 1942, making the P-38 the first USAAF fighter to reach Britain and the first fighter ever to be delivered across the Atlantic under its own power. Kelsey himself piloted one of the Lightnings, landing in Scotland on 25 July.
The first unit to receive P-38s was the 1st Fighter Group. After the attack on Pearl Harbor, the unit joined the 14th Pursuit Group in San Diego to provide West Coast defense.
The first Lightning to see active service was the F-4 version, a P-38E in which the guns were replaced by four K17 cameras. They joined the 8th Photographic Squadron in Australia on 4 April 1942. Three F-4s were operated by the Royal Australian Air Force in this theater for a short period beginning in September 1942.
On 29 May 1942, 25 P-38s began operating in the Aleutian Islands in Alaska. The fighter's long range made it well-suited to the campaign over the almost -long island chain, and it was flown there for the rest of the war. The Aleutians were one of the most rugged environments available for testing the new aircraft under combat conditions. More Lightnings were lost due to severe weather and other conditions than enemy action; there were cases where Lightning pilots, mesmerized by flying for hours over gray seas under gray skies, simply flew into the water. On 9 August 1942, two P-38Es of the 343rd Fighter Group, 11th Air Force, at the end of a long-range patrol, happened upon a pair of Japanese Kawanishi H6K "Mavis" flying boats and destroyed them, making them the first Japanese aircraft to be shot down by Lightnings.
After the Battle of Midway, the USAAF began redeploying fighter groups to Britain as part of Operation Bolero and Lightnings of the 1st Fighter Group were flown across the Atlantic via Iceland. On 14 August 1942, Second Lieutenant Elza Shahan of the 27th Fighter Squadron, and Second Lieutenant Joseph Shaffer of the 33rd Squadron operating out of Iceland shot down a Focke-Wulf Fw 200 "Condor" over the Atlantic. Shahan in his P-38F downed the "Condor"; Shaffer, flying either a P-40C or a P-39, had already set an engine on fire. This was the first Luftwaffe aircraft destroyed by the USAAF.
After 347 sorties with no enemy contact, the 1st and 14th Fighter Groups transferred from the UK to the 12th Air Force in North Africa as part of the force being built up for Operation Torch. The Lightning's long range allowed the pilots to fly their fighters over the Bay of Biscay, skirting neutral Spain and Portugal to refuel in Morocco. The P-38s were initially based at Tafaroui airfield in Algeria alongside P-40 Warhawks and the rest of the 12th Air Force. P-38s were first involved in North African combat operations on 11 November 1942. The first North African P-38 kill was on 22 November when Lieutenant Mark Shipman of the 14th downed an Italian airplane with twin engines. Shipman later made two more kills: a Messerschmitt Bf 109 fighter and a very large Me 323 "Gigant" transport.
Early results in the Mediterranean Theater of Operations were mixed. Some P-38 pilots scored multiple kills to become aces, while many others were shot down due to inexperience or tactical strictures. Overall, the P-38 suffered its highest losses in the Mediterranean Theater. The primary function of the P-38 in North Africa was to escort bombers, but the fighters also targeted transport aircraft, and later in the campaign they were sometimes tasked with ground attack missions. When tied to bomber escort duties, the P-38 squadrons were vulnerable to attack from above by German fighters who selected the most advantageous position and timing. The ineffectual early tactical doctrine of the American units required the P-38s to fly near the bombers at all times rather than to defend aggressively or to fly ahead and clear the airspace for the bombers, and many American pilots were downed because of this limitation. Losses mounted, and all available P-38s in the UK were flown to North Africa to restore squadron strength. After this painful experience, the American leadership changed tactics, and in February 1943 the P-38 was given free rein in its battles.
The first German success against the P-38 was on 28 November when Bf 109 pilots of "Jagdgeschwader" 53 claimed seven Lightnings for no loss of their own. Further one-sided German victories were noted on several occasions through January 1943. The first P-38 pilots to achieve ace status were Virgil Smith of the 14th FG and Jack Illfrey of the 1st FG, both credited with five wins by 26 December. Smith got a sixth enemy aircraft on 28 December but was killed two days later in a crash landing, likely after taking fire from "Oberfeldwebel" Herbert Rollwage of JG 53 who survived the war with at least 71 kills. This was Rollwage's first victory over a P-38, and his 35th claim at the time.
The two squadrons of the 14th Fighter Group were reduced so badly in December that the 82nd FG was flown from the UK to North Africa to cover the shortage. The first kill by the 82nd was during a bomber escort mission on 7 January 1943 when William J. "Dixie" Sloan broke formation and turned toward six attacking Bf 109s to shoot one of them down. Known for his maverick style, Sloan racked up 12 victories by July 1943. After another heavy toll in January 1943, 14th FG had to be withdrawn from the front to reorganize, with surviving pilots sent home and the few remaining Lightnings transferred to the 82nd. The 14th was out of action for three months, returning in May.
On 5 April 1943, 26 P-38Fs of the 82nd claimed 31 enemy aircraft destroyed, helping to establish air superiority in the area and earning the type its German nickname "der Gabelschwanz-Teufel" – the Fork-Tailed Devil. The P-38 remained active in the Mediterranean for the rest of the war, continuing to deliver and receive damage in combat. On 25 August 1943, 13 P-38s were shot down in a single sortie by JG 53 Bf 109s. On 2 September, 10 P-38s were shot down for the loss of a single German pilot: 67-victory ace Franz Schieß, who had been the leading "Lightning" killer in the Luftwaffe with 17 destroyed.
The Mediterranean Theater saw the first aerial combat between German fighters and P-38s. German fighter pilot appraisal of the P-38 was mixed. Some observers dismissed the P-38 as an easy kill, while others praised it as a deadly enemy worthy of respect. Johannes Steinhoff, commander of JG 77 in North Africa, said that the unit's old Bf 109s were "perhaps, a little faster" than the P-38, but a dogfight with the twin-engined fighter was daunting because its turning radius was much smaller, and it could quickly get on the tail of the Bf 109. Franz Stigler, an ace with 28 kills, flew Bf 109s against the P-38 in North Africa. Stigler said the Lightning "could turn inside us with ease and they could go from level flight to climb almost instantaneously. We lost quite a few pilots who tried to make an attack and then pull up... One cardinal rule we never forgot was: avoid fighting the P-38 head on. That was suicide." Stigler said the best defense was to flick-roll the Bf 109 and dive, as the Lightning was slow in the first 10 degrees of roll and not as fast in a dive. Herbert Kaiser, eventually a 68-kill ace, shot down his first P-38 in January 1943. Kaiser said that the P-38 deserved respect as a formidable opponent: it was faster and more maneuverable than the Bf 109G-6 model he flew, especially since the G-6 was slowed by underwing cannon pods. Johann Pichler, another high-scoring ace, said that the P-38 in 1943 was much faster in a climb than the Bf 109. Kurt Bühligen, third-highest-scoring German pilot on the Western Front with 112 victories, recalled: "The P-38 fighter (and the B-24) were easy to burn. Once in Africa we were six and met eight P-38s and shot down seven. One sees a great distance in Africa and our observers and flak people called in sightings and we could get altitude first and they were low and slow."
"General der Jagdflieger" Adolf Galland was unimpressed with the P-38, declaring "it had similar shortcomings in combat to our Bf 110, our fighters were clearly superior to it." Heinz Bäer said that P-38s "were not difficult at all. They were easy to outmaneuver and were generally a sure kill".
On 12 June 1943, a P-38G, while flying a special mission between Gibraltar and Malta or, perhaps, just after strafing the radar station of Capo Pula, landed on the airfield of Capoterra (Cagliari), in Sardinia, following a navigation error caused by a compass failure. "Regia Aeronautica" chief test pilot "colonnello" (Colonel) Angelo Tondi flew the aircraft to Guidonia airfield, where the P-38G was evaluated. On 11 August 1943, Tondi took off to intercept a formation of about 50 bombers returning from the bombing of Terni (Umbria). Tondi attacked B-17G "Bonny Sue", s.n. 42-30307, which fell into the sea off Torvaianica, near Rome, while six airmen parachuted out. According to US sources, he also damaged three more bombers on that occasion. On 4 September, the 301st BG reported the loss of B-17 "The Lady Evelyn," s.n. 42-30344, downed by "an enemy P-38". War missions for the captured plane were limited, as Italian petrol was too corrosive for the Lockheed's tanks. Other Lightnings were eventually acquired by Italy for postwar service.
In one particular case, when faced by more agile fighters at low altitude in a constricted valley, Lightnings suffered heavy losses. On the morning of 10 June 1944, 96 P-38Js of the 1st and 82nd Fighter Groups took off from Italy for Ploiești, the third-most heavily defended target in Europe after Berlin and Vienna. Instead of bombing from high altitude as the Fifteenth Air Force had previously tried, USAAF planning had determined that a surprise dive-bombing attack, beginning at about with bomb release at or below , performed by 46 82nd Fighter Group P-38s, each carrying one bomb, would yield more accurate results. All of the 1st Fighter Group and a few aircraft of the 82nd were to fly cover, and all fighters were to strafe targets of opportunity on the return trip; a distance of some , including a circuitous outward route flown in an attempt to achieve surprise. Some 85 or 86 fighters arrived in Romania to find enemy airfields alerted, with a wide assortment of aircraft scrambling for safety. P-38s shot down several, including heavy fighters, transports and observation aircraft. At Ploiești, defense forces were fully alert, the target was concealed by a smoke screen, and anti-aircraft fire was very heavy. Seven Lightnings were lost to anti-aircraft fire at the target, and two more during strafing attacks on the return flight. German Bf 109 fighters from I./JG 53 and 2./JG 77 fought the Americans. Sixteen aircraft of the 71st Fighter Squadron were challenged by a large formation of Romanian single-seat IAR 81C fighters. The fight took place below in a narrow valley. Herbert Hatch saw two IAR 81Cs that he misidentified as Focke-Wulf Fw 190s hit the ground after taking fire from his guns, and his fellow pilots confirmed three more of his kills. However, the outnumbered 71st Fighter Squadron took more damage than it inflicted, losing nine aircraft. In all, the USAAF lost 22 aircraft on the mission.
The Americans claimed 23 aerial victories, though Romanian and German fighter units admitted losing only one aircraft each. Eleven enemy locomotives were strafed and left burning, and flak emplacements were destroyed, along with fuel trucks and other targets. Results of the bombing were not observed by the USAAF pilots because of the smoke. The dive-bombing mission profile was not repeated, though the 82nd Fighter Group was awarded the Presidential Unit Citation for its part.
Experiences over Germany had shown a need for long-range escort fighters to protect the Eighth Air Force's heavy bomber operations. The P-38Hs of the 55th Fighter Group were transferred to the Eighth in England in September 1943, and were joined by the 20th, 364th and 479th Fighter Groups soon after. P-38s and Spitfires escorted Fortress raids over Europe.
Because its distinctive shape was less prone to cases of mistaken identity and friendly fire, Lieutenant General Jimmy Doolittle, commander of the 8th Air Force, chose to pilot a P-38 during the invasion of Normandy so that he could watch the progress of the air offensive over France. At one point in the mission, Doolittle flick-rolled through a hole in the cloud cover, but his wingman, then-Major General Earle E. Partridge, was looking elsewhere and failed to notice Doolittle's quick maneuver, leaving Doolittle to continue his survey of the crucial battle alone. Of the P-38, Doolittle said that it was "the sweetest-flying plane in the sky".
A little-known role of the P-38 in the European theater was that of fighter-bomber during the invasion of Normandy and the Allied advance across France into Germany. Assigned to the IX Tactical Air Command, the 370th Fighter Group and its P-38s initially flew missions from England, dive-bombing radar installations, enemy armor, troop concentrations and flak towers. The 370th's group commander Howard F. Nichols and a squadron of his P-38 Lightnings attacked Field Marshal Günther von Kluge's headquarters in July 1944; Nichols himself skipped a bomb through the front door. The 370th later operated from Cardonville, France, flying ground attack missions against gun emplacements, troops, supply dumps and tanks near Saint-Lô in July and in the Falaise–Argentan area in August 1944. The 370th participated in ground attack missions across Europe until February 1945, when the unit converted to the P-51 Mustang.
After some disastrous raids in 1944 with B-17s escorted by P-38s and Republic P-47 Thunderbolts, Jimmy Doolittle, then head of the U.S. Eighth Air Force, went to the Royal Aircraft Establishment, Farnborough, asking for an evaluation of the various American fighters. Test pilot Captain Eric Brown, Fleet Air Arm, recalled:
We had found out that the Bf 109 and the FW 190 could fight up to a Mach of 0.75, three-quarters the speed of sound. We checked the Lightning and it couldn't fly in combat faster than 0.68. So it was useless. We told Doolittle that all it was good for was photo-reconnaissance and had to be withdrawn from escort duties. And the funny thing is that the Americans had great difficulty understanding this because the Lightning had the two top aces in the Far East.
After evaluation tests at Farnborough, the P-38 was kept in fighting service in Europe for a while longer. Although many failings were remedied with the introduction of the P-38J, by September 1944, all but one of the Lightning groups in the Eighth Air Force had converted to the P-51 Mustang. The Eighth Air Force continued to conduct reconnaissance missions using the F-5 variant.
The P-38 was used most extensively and successfully in the Pacific theater, where it proved more suited, combining exceptional range with the reliability of two engines for long missions over water. The P-38 was used in a variety of roles, especially escorting bombers at altitudes of . The P-38 was credited with destroying more Japanese aircraft than any other USAAF fighter. Freezing cockpit temperatures were not a problem at low altitude in the tropics. In fact the cockpit was often too hot since opening a window while in flight caused buffeting by setting up turbulence through the tailplane. Pilots taking low altitude assignments often flew stripped down to shorts, tennis shoes, and parachute. While the P-38 could not out-turn the A6M Zero and most other Japanese fighters when flying below , its superior speed coupled with a good rate of climb meant that it could use energy tactics, making multiple high-speed passes at its target. In addition, its tightly grouped guns were even more deadly to lightly armored Japanese warplanes than to German aircraft. The concentrated, parallel stream of bullets allowed aerial victory at much longer distances than fighters carrying wing guns. Dick Bong, the United States' highest-scoring World War II air ace (40 victories in P-38s), flew directly at his targets to ensure he hit them, in some cases flying through the debris of his target (and on one occasion colliding with an enemy aircraft which was claimed as a "probable" victory). The twin Allison engines performed admirably in the Pacific.
General George C. Kenney, commander of the USAAF 5th Air Force operating in New Guinea, could not get enough P-38s; they had become his favorite fighter in November 1942 when one squadron, the 39th Fighter Squadron of the 35th Fighter Group, joined his assorted P-39s and P-40s. The Lightnings established local air superiority with their first combat action on 27 December 1942. Kenney sent repeated requests to Arnold for more P-38s, and was rewarded with occasional shipments, but Europe was a higher priority in Washington. Despite their small force, Lightning pilots began to compete in racking up scores against Japanese aircraft.
On 2–4 March 1943, P-38s flew top cover for 5th Air Force and Australian bombers and attack aircraft during the Battle of the Bismarck Sea, in which eight Japanese troop transports and four escorting destroyers were sunk. Two P-38 aces from the 39th Fighter Squadron were killed on the second day of the battle: Bob Faurot and Hoyt "Curley" Eason (a veteran with five victories who had trained hundreds of pilots, including Dick Bong). In one notable engagement on 3 March 1943, P-38s escorted 13 B-17s as they bombed the Japanese convoy from a medium altitude of 7,000 feet, which dispersed the convoy formation and reduced its concentrated anti-aircraft firepower. A B-17 was shot down, and when Japanese Zero fighters machine-gunned some of its crew members as they descended in parachutes, three P-38s promptly engaged and shot down five of the Zeros.
The Lightning figured in one of the most significant operations in the Pacific theater: the interception, on 18 April 1943, of Admiral Isoroku Yamamoto, the architect of Japan's naval strategy in the Pacific including the attack on Pearl Harbor. When American codebreakers found out that he was flying to Bougainville Island to conduct a front-line inspection, 16 P-38G Lightnings were sent on a long-range fighter-intercept mission, flying from Guadalcanal at heights of above the ocean to avoid detection. The Lightnings met Yamamoto's two Mitsubishi G4M "Betty" fast bomber transports and six escorting Zeros just as they arrived at the island. The first Betty crashed in the jungle and the second ditched near the coast. Two Zeros were also claimed by the American fighters with the loss of one P-38. Japanese search parties found Yamamoto's body at the jungle crash site the next day.
The P-38's service record shows mixed results, which may reflect more on its employment than on flaws in the aircraft itself. The P-38's engine troubles at high altitude occurred only in the Eighth Air Force. One reason for this was the inadequate cooling system of the G and H models; the improved P-38J and L had tremendous success flying out of Italy into Germany at all altitudes. Until the -J-25 variant, P-38s were easily avoided by German fighters because they lacked dive flaps to counter compressibility in dives. German fighter pilots not wishing to fight would perform the first half of a Split S and continue into steep dives, because they knew the Lightnings would be reluctant to follow.
On the positive side, having two engines was a built-in insurance policy. Many pilots made it safely back to base after having an engine failure en route or in combat. On 3 March 1944, the first Allied fighters reached Berlin on a frustrated escort mission. Lieutenant Colonel Jack Jenkins of 55th Fighter Group led the group of P-38H pilots, arriving with only half his force after flak damage and engine trouble took their toll. On the way into Berlin, Jenkins reported one rough-running engine, causing him to wonder if he would ever make it back. The B-17s he was supposed to escort never showed up, having turned back at Hamburg. Jenkins and his wingman were able to drop tanks and outrun enemy fighters to return home with three good engines between them.
In the European Theater, P-38s made 130,000 sorties with a loss of 1.3% overall, comparing favorably with P-51s, which posted a 1.1% loss, considering that the P-38s were vastly outnumbered and suffered from poorly thought-out tactics. The majority of the P-38 sorties were made in the period prior to Allied air superiority in Europe, when pilots fought against a very determined and skilled enemy. Lieutenant Colonel Mark Hubbard, a vocal critic of the aircraft, rated it the third best Allied fighter in Europe. The Lightning's greatest virtues were long range, heavy payload, high speed, fast climb and concentrated firepower. The P-38 was a formidable fighter, interceptor and attack aircraft.
In the Pacific theater, the P-38 downed over 1,800 Japanese aircraft, with more than 100 pilots becoming aces by downing five or more enemy aircraft. American fuel supplies contributed to a better engine performance and maintenance record, and range was increased with leaner mixtures. In the second half of 1944, the P-38L pilots out of Dutch New Guinea were flying , fighting for fifteen minutes and returning to base. Such long legs were invaluable until the P-47N and P-51D entered service.
The end of the war left the USAAF with thousands of P-38s rendered obsolete by the jet age. The last P-38s in service with the United States Air Force were retired in 1949. A total of 100 late-model P-38L and F-5 Lightnings were acquired by Italy through an agreement dated April 1946. Delivered, after refurbishing, at the rate of one per month, they had all been transferred to the Aeronautica Militare by 1952. The Lightnings served in the 4° "Stormo" and other units including 3° "Stormo", flying reconnaissance over the Balkans, ground attack, naval cooperation and air superiority missions. Due to old engines, pilot error and a lack of operational experience, a large number of P-38s were lost in at least 30 accidents, many of them fatal. Despite this, many Italian pilots liked the P-38 for its excellent visibility on the ground and stability on takeoff. The Italian P-38s were phased out in 1956; none survived the scrapyard.
Surplus P-38s were also used by other foreign air forces with 12 sold to Honduras and 15 retained by China. Six F-5s and two unarmed black two-seater P-38s were operated by the Dominican Air Force based in San Isidro Airbase, Dominican Republic in 1947. The majority of wartime Lightnings present in the continental U.S. at the end of the war were put up for sale for US$1,200 apiece; the rest were scrapped. P-38s in distant theaters of war were bulldozed into piles and abandoned or scrapped; very few avoided that fate.
The CIA "Liberation Air Force" flew one P-38M to support the 1954 Guatemalan coup d'etat. On 27 June 1954, this aircraft dropped napalm bombs that destroyed the British cargo ship , which was loading Guatemalan cotton and coffee for Grace Line in Puerto San José. In 1957, five Honduran P-38s bombed and strafed a village occupied by Nicaraguan forces during a border dispute between these two countries concerning part of Gracias a Dios Department.
P-38s were popular contenders in the air races from 1946 through 1949, with brightly colored Lightnings making screaming turns around the pylons at Reno and Cleveland. Lockheed test pilot Tony LeVier was among those who bought a Lightning, choosing a P-38J model and painting it red to make it stand out as an air racer and stunt flyer. Lefty Gardner, former B-24 and B-17 pilot and associate of the Confederate Air Force, bought a mid-1944 P-38L-1-LO that had been modified into an F-5G. Gardner painted it white with red and blue trim and named it "White Lightnin"; he reworked its turbo systems and intercoolers for optimum low-altitude performance and gave it P-38F style air intakes for better streamlining. "White Lightnin" was severely damaged in a crash landing following an engine fire on a transit flight and was bought and restored with a brilliant polished aluminum finish by the company that owns Red Bull. The aircraft is now located in Austria.
F-5s were bought by aerial survey companies and employed for mapping. From the 1950s on, the use of the Lightning steadily declined, and only a little more than two dozen still exist, with few still flying. One example is a P-38L owned by the Lone Star Flight Museum in Galveston, Texas, painted in the colors of Charles H. MacDonald's "Putt Putt Maru". Two other examples are F-5Gs which were owned and operated by Kargl Aerial Surveys in 1946, and are now located in Chino, California at Yanks Air Museum, and in McMinnville, Oregon at Evergreen Aviation Museum. The earliest-built surviving P-38, "Glacier Girl", was recovered from the Greenland ice cap in 1992, fifty years after she crashed there on a ferry flight to the UK, and after a complete restoration, flew once again ten years after her recovery.
Over 10,000 Lightnings were manufactured, becoming the only U.S. combat aircraft that remained in continuous production throughout the duration of American participation in World War II. The Lightning had a major effect on other aircraft; its wing, in a scaled-up form, was used on the Lockheed Constellation.
Delivered and accepted Lightning production variants began with the P-38D model. The few "hand-made" YP-38s initially contracted were used as trainers and test aircraft. No Bs or Cs were delivered to the government, as the USAAF allocated the 'D' suffix to all aircraft with self-sealing fuel tanks and armor. Much of the early teething-trouble testing was conducted using the earliest D variants.
The first combat-capable Lightning was the P-38E (and its photo-recon variant the F-4) which featured improved instruments, electrical, and hydraulic systems. Part-way through production, the older Hamilton Standard Hydromatic hollow steel propellers were replaced by new Curtiss Electric duraluminum propellers. The definitive (and now famous) armament configuration was settled upon, featuring four .50 in (12.7 mm) machine guns with 500 rpg, and a 20 mm (.79 in) Hispano autocannon with 150 rounds.
While the machine guns had been arranged symmetrically in the nose on the P-38D, they were "staggered" in the P-38E and later versions, with the muzzles protruding from the nose in the relative lengths of roughly 1:4:6:2. This was done to ensure a straight ammunition-belt feed into the weapons, as the earlier arrangement led to jamming.
The first P-38E rolled out of the factory in October 1941, as the Battle of Moscow filled the news wires of the world. Because of the aircraft's versatility, redundant engines, and especially its high-speed, high-altitude characteristics, over a hundred P-38Es were completed in the factory or converted in the field to a photo-reconnaissance variant, the F-4, in which the guns were replaced by four cameras; the same was done with later variants. Most of these early reconnaissance Lightnings were retained stateside for training, but the F-4 was the first Lightning to be used in action, in April 1942.
After 210 P-38Es were built, they were followed, starting in February 1942, by the P-38F, which incorporated racks inboard of the engines for fuel tanks or a total of of bombs. Early variants did not enjoy a high reputation for maneuverability, though they could be agile at low altitudes if flown by a capable pilot using the P-38's forgiving stall characteristics to best advantage. From the P-38F-15 model onwards, a "combat maneuver" setting was added to the P-38's Fowler flaps. When deployed at the 8° maneuver setting, the flaps allowed the P-38 to out-turn many contemporary single-engined fighters, at the cost of some added drag. However, early variants were hampered by high aileron control forces and a low initial rate of roll, and mastering these characteristics required experience with the aircraft, which was one reason Lockheed sent its representative to England, and later to the Pacific Theater.
The aircraft was still experiencing extensive teething troubles, as well as being victimized by "urban legends", mostly involving inapplicable twin-engine factors that had been designed out of the aircraft by Lockheed. In addition, the early versions had a reputation as a "widow maker" because they could enter an unrecoverable dive due to compressibility effects at high subsonic speeds. The 527 P-38Fs were heavier, with more powerful engines that used more fuel, and were unpopular in the air war in Northern Europe. The heavier engines were having reliability problems and, without external fuel tanks, the range of the P-38F was reduced; since drop tanks themselves were in short supply because the fortunes of the Battle of the Atlantic had not yet swung the Allies' way, the aircraft became relatively unpopular in the minds of bomber command planning staffs, despite being the longest-ranged fighter first available to the 8th Air Force in sufficient numbers for long-range escort duties. Nonetheless, General Spaatz, then commander of the 8th Air Force in the UK, said of the P-38F: "I'd rather have an airplane that goes like hell and has a few things wrong with it, than one that won't go like hell and has a few things wrong with it."
The P-38F was followed in June 1942 by the P-38G, using more powerful Allisons of each and equipped with a better radio. A dozen aircraft of the planned P-38G production run were set aside to serve as prototypes for what would become the P-38J, with further uprated Allison V-1710F-17 engines ( each) in redesigned booms which featured chin-mounted intercoolers in place of the original system in the leading edge of the wings, and more efficient radiators. Lockheed subcontractors, however, were initially unable to supply both of Burbank's twin production lines with a sufficient quantity of new core intercoolers and radiators. War Production Board planners were unwilling to sacrifice production, and one of the two remaining prototypes received the new engines but retained the old leading-edge intercoolers and radiators.
As the P-38H, 600 of these stop-gap Lightnings with an improved 20 mm cannon and a bomb capacity of were produced on one line beginning in May 1943, while the near-definitive P-38J began production on the second line in August 1943. The Eighth Air Force was experiencing high-altitude and cold-weather issues which, while not unique to the aircraft, were perhaps more severe because the turbo-superchargers upgrading the Allisons were having their own reliability issues, making the aircraft unpopular with senior officers. This situation was not duplicated on other fronts, where the commands were clamoring for as many P-38s as they could get. Both the P-38G and P-38H models' performance was restricted by an intercooler system integral to the wing's leading edge, which had been designed for the YP-38's less powerful engines. At higher boost levels, the new engine's charge air temperature would rise above the limits recommended by Allison and would be subject to detonation if operated at high power for extended periods. Reliability was not the only issue, either: for example, the reduced power settings required of the P-38H did not allow the maneuvering flap to be used to good advantage at high altitude. All these problems came to a head in the stop-gap P-38H and hastened the Lightning's eventual replacement in the Eighth Air Force; the Fifteenth Air Force, for its part, was glad to get them.
Some P-38G production was diverted on the assembly line to F-5A reconnaissance aircraft. An F-5A was modified to an experimental two-seat reconnaissance configuration as the XF-5D, with a plexiglas nose, two machine guns and additional cameras in the tail booms.
The P-38J was introduced in August 1943. The turbo-supercharger intercooler system on previous variants had been housed in the leading edges of the wings and had proven vulnerable to combat damage and could burst if the wrong series of controls were mistakenly activated. In the P-38J series, the streamlined engine nacelles of previous Lightnings were changed to fit the intercooler radiator between the oil coolers, forming a "chin" that visually distinguished the J model from its predecessors. While the P-38J used the same V-1710-89/91 engines as the H model, the new core-type intercooler more efficiently lowered intake manifold temperatures and permitted a substantial increase in rated power. The leading edge of the outer wing was fitted with fuel tanks, filling the space formerly occupied by intercooler tunnels, but these were omitted on early P-38J blocks due to limited availability.
The final 210 J models, designated P-38J-25-LO, alleviated the compressibility problem through the addition of a set of electrically actuated dive recovery flaps just outboard of the engines on the bottom centerline of the wings. With these improvements, a USAAF pilot reported a dive speed of almost , although the indicated air speed was later corrected for compressibility error, and the actual dive speed was lower. Lockheed manufactured over 200 retrofit modification kits to be installed on P-38J-10-LO and J-20-LO already in Europe, but the USAAF C-54 carrying them was shot down by an RAF pilot who mistook the Douglas transport for a German Focke-Wulf Condor. Unfortunately, the loss of the kits came during Lockheed test pilot Tony LeVier's four-month morale-boosting tour of P-38 bases. Flying a new Lightning named "Snafuperman", modified to full P-38J-25-LO specifications at Lockheed's modification center near Belfast, LeVier captured the pilots' full attention by routinely performing maneuvers during March 1944 that common Eighth Air Force wisdom held to be suicidal. It proved too little, too late, because the decision had already been made to re-equip with Mustangs.
The P-38J-25-LO production block also introduced hydraulically boosted ailerons, one of the first times such a system was fitted to a fighter. This significantly improved the Lightning's rate of roll and reduced control forces for the pilot. This production block and the following P-38L model are considered the definitive Lightnings, and Lockheed ramped up production, working with subcontractors across the country to produce hundreds of Lightnings each month.
There were two P-38Ks developed from 1942 to 1943, one official and one an internal Lockheed experiment. The first was actually a battered RP-38E "piggyback" test mule previously used by Lockheed to test the P-38J chin intercooler installation, now fitted with paddle-bladed "high activity" Hamilton Standard Hydromatic propellers similar to those used on the P-47. The new propellers required spinners of greater diameter, and the mule's crude, hand-formed sheet steel cowlings were further stretched to blend the spinners into the nacelles. It retained its "piggyback" configuration, which allowed an observer to ride behind the pilot. With Lockheed's AAF representative as a passenger and the maneuvering flap deployed to offset Army Hot Day conditions, the old "K-Mule" still climbed to . With a fresh coat of paint covering its crude hand-formed steel cowlings, this RP-38E acts as a stand-in for the "P-38K-1-LO" in the model's only picture.
The 12th G model originally set aside as a P-38J prototype was re-designated P-38K-1-LO and fitted with the aforementioned paddle-blade propellers and new Allison V-1710-75/77 (F15R/L) powerplants rated at at War Emergency Power. These engines were geared 2.36 to 1, unlike the standard P-38 ratio of 2 to 1. The AAF took delivery in September 1943 at Eglin Field. In tests, the P-38K-1 achieved at military power and was predicted to exceed at War Emergency Power, with a similar increase in load and range. The initial climb rate was /min and the ceiling was . It reached in five minutes flat, despite a coat of camouflage paint which added weight and drag. Although it was judged superior in climb and speed to the latest and best fighters from all AAF manufacturers, the War Production Board refused to authorize P-38K production because of the two-to-three-week interruption in production needed to implement cowling modifications for the revised spinners and higher thrust line. Some also doubted Allison's ability to deliver the F15 engine in quantity. As promising as it had looked, the P-38K project came to a halt.
The P-38L was the most numerous variant of the Lightning, with 3,923 built, 113 by Consolidated-Vultee in their Nashville plant. It entered service with the USAAF in June 1944, in time to support the Allied invasion of France on D-Day. Lockheed production of the Lightning was distinguished by a suffix consisting of a production block number followed by "LO," for example "P-38L-1-LO", while Consolidated-Vultee production was distinguished by a block number followed by "VN," for example "P-38L-5-VN."
The P-38L was the first Lightning fitted with zero-length rocket launchers. It could carry seven high velocity aircraft rockets (HVARs) on pylons beneath each wing, and later, five rockets on each wing on "Christmas tree" launch racks which added to the aircraft. The P-38L also had strengthened stores pylons to allow carriage of bombs or drop tanks.
Lockheed modified 200 P-38J airframes in production to become unarmed F-5B photo-reconnaissance aircraft, while hundreds of other P-38Js and P-38Ls were modified at Lockheed's Dallas Modification Center to become F-5Cs, F-5Es, F-5Fs, or F-5Gs. A few P-38Ls were field-modified to become two-seat TP-38L familiarization trainers. During and after June 1948, the remaining J and L variants were designated ZF-38J and ZF-38L, with the "ZF" designator (meaning "obsolete fighter") replacing the "P for Pursuit" category.
Late model Lightnings were delivered unpainted, as per USAAF policy established in 1944. At first, field units tried to paint them, since pilots worried about being too visible to the enemy, but it turned out the reduction in weight and drag was a minor advantage in combat.
The P-38L-5, the most common sub-variant of the P-38L, had a modified cockpit heating system consisting of a plug socket in the cockpit into which the pilot could plug his heat-suit wire for improved comfort. These Lightnings also received the uprated V-1710-112/113 (F30R/L) engines, which dramatically reduced the high-altitude engine failures so commonly associated with European operations.
The Lightning was modified for other roles. In addition to the F-4 and F-5 reconnaissance variants, a number of P-38Js and P-38Ls were field-modified as formation bombing "pathfinders" or "droopsnoots", fitted with a Norden bombsight or an H2X radar system. Such a pathfinder would lead a formation of medium and heavy bombers, or of other P-38s each loaded with two bombs, with the entire formation releasing its ordnance when the pathfinder did.
A number of Lightnings were modified as night fighters. There were several field or experimental modifications with different equipment fits that finally led to the "formal" P-38M night fighter, or "Night Lightning". A total of 75 P-38Ls were modified to the Night Lightning configuration, painted flat-black with conical flash hiders on the guns, an AN/APS-6 radar pod below the nose, and a second cockpit with a raised canopy behind the pilot's canopy for the radar operator. The headroom in the rear cockpit was limited, so radar operators of short stature were preferred.
One of the initial production P-38s had its turbo-superchargers removed, with a secondary cockpit placed in one of the booms to examine how flight crews would respond to such an "asymmetric" cockpit layout. One P-38E was fitted with an extended central nacelle to accommodate a tandem-seat cockpit with dual controls, and was later fitted with a laminar flow wing.
Very early in the Pacific War, a scheme was proposed to fit Lightnings with floats to allow them to make long-range ferry flights. The floats would be removed before the aircraft went into combat. There were concerns that saltwater spray would corrode the tailplane, and so in March 1942, P-38E "41-1986" was modified with a tailplane raised some , booms lengthened by two feet and a rearward-facing second seat added for an observer to monitor the effectiveness of the new arrangement. A second version was crafted on the same airframe with the twin booms given greater sideplane area to augment the vertical rudders. This arrangement was removed and a final third version was fabricated that had the booms returned to normal length but the tail raised . All three tail modifications were designed by George H. "Bert" Estabrook. The final version was used for a quick series of dive tests on 7 December 1942 in which Milo Burcham performed the test maneuvers and Kelly Johnson observed from the rear seat. Johnson concluded that the raised floatplane tail gave no advantage in solving the problem of compressibility. At no time was this P-38E testbed airframe actually fitted with floats, and the idea was quickly abandoned as the U.S. Navy proved to have enough sealift capacity to keep up with P-38 deliveries to the South Pacific.
Still another P-38E was used in 1942 to tow a Waco troop glider as a demonstration. However, there proved to be plenty of other aircraft, such as Douglas C-47 Skytrains, available to tow gliders, and the Lightning was spared this duty.
Standard Lightnings were used as crew and cargo transports in the South Pacific. They were fitted with pods attached to the underwing pylons, replacing drop tanks or bombs, that could carry a single passenger in a lying-down position, or cargo. It was a very uncomfortable way to fly; some of the pods were not even fitted with a window to let the passenger see out or admit light.
Lockheed proposed a carrier-based Model 822 version of the Lightning for the United States Navy. The Model 822 would have featured folding wings, an arresting hook, and stronger undercarriage for carrier operations. The navy was not interested, as they regarded the Lightning as too big for carrier operations and did not like liquid-cooled engines anyway, and the Model 822 never went beyond the paper stage. However, the navy did operate four land-based F-5Bs in North Africa, inherited from the USAAF and redesignated FO-1.
A P-38J was used in experiments with an unusual scheme for mid-air refueling, in which the fighter snagged a drop tank trailed on a cable from a bomber. The USAAF managed to make this work, but decided it was not practical. A P-38J was also fitted with experimental retractable snow ski landing gear, but this idea never reached operational service either.
After the war, a P-38L was experimentally fitted with an armament of three .60 in (15.2 mm) machine guns. The .60 in (15.2 mm) cartridge had been developed early in the war for an infantry anti-tank rifle, a type of weapon many nations developed in the 1930s, when tanks were lightly armored; by 1942, armor had grown too thick for the caliber to be effective.
Another P-38L was modified after the war as a "super strafer," with eight .50 in (12.7 mm) machine guns in the nose and a pod under each wing with two .50 in (12.7 mm) guns, for a total of 12 machine guns. Nothing came of this conversion either.
Civil
The 5,000th Lightning built, a P-38J-20-LO, "44-23296", was painted bright vermilion red, and had the name "YIPPEE" painted on the underside of the wings in big white letters as well as the signatures of hundreds of factory workers. This and other aircraft were used by a handful of Lockheed test pilots including Milo Burcham, Jimmie Mattern and Tony LeVier in remarkable flight demonstrations, performing such stunts as slow rolls at treetop level with one prop feathered to dispel the myth that the P-38 was unmanageable.
On 15 July 1942, a flight of six P-38s and two B-17 bombers, with a total of 25 crew members on board, took off from Presque Isle Air Base in Maine headed for the UK. What followed was a harrowing and life-threatening landing of the entire squadron on a remote ice cap in Greenland. None of the crew was lost and they were all rescued and returned safely home after spending several days on the ice.
Fifty years later a small group of aviation enthusiasts decided to locate those aircraft, which had come to be known as "The Lost Squadron", and to recover one of the lost P-38s. It turned out to be no easy task, as the planes had been buried under 25 stories of ice and had drifted over a mile from their original location. The recovered P-38, dubbed "Glacier Girl", was eventually restored to airworthiness.
Another P-38 crashed in September 1942 and lies buried in the sea off the coast of Harlech, Wales. In November 2019, Cadw granted the wreck protected status for its historic and archaeological interest.
The American ace of aces and his closest competitor both flew Lightnings and tallied 40 and 38 victories respectively. Majors Richard I. "Dick" Bong and Thomas B. "Tommy" McGuire of the USAAF competed for the top position. Both men were awarded the Medal of Honor.
McGuire was killed in air combat in January 1945 over the Philippines, after accumulating 38 confirmed kills, making him the second-ranking American ace. Bong was rotated back to the United States as America's ace of aces, after making 40 kills, becoming a test pilot. He was killed on 6 August 1945, the day the atomic bomb was dropped on Japan, when his Lockheed P-80 Shooting Star jet fighter flamed out on takeoff.
The famed aviator Charles Lindbergh toured the South Pacific as a civilian contractor for United Aircraft Corporation, comparing and evaluating performance of single- and twin-engined fighters for Vought. He worked to improve range and load limits of the Vought F4U Corsair, flying both routine and combat strafing missions in Corsairs alongside Marine pilots.
Everywhere Lindbergh went in the South Pacific, he was accorded the normal preferential treatment of a visiting colonel, although he had resigned his Air Corps Reserve colonel's commission three years before. In Hollandia, Lindbergh attached himself to the 475th FG, flying P-38s. Although new to the aircraft, Lindbergh was instrumental in extending the range of the P-38 through improved throttle settings, or engine-leaning techniques, notably by reducing engine speed to 1,600 rpm, setting the carburetors for auto-lean and flying at indicated airspeed which reduced fuel consumption to 70 gal/h, about 2.6 mpg. This combination of settings had been considered dangerous, as it was thought it would upset the fuel mixture and cause an explosion.
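The two cruise figures quoted above are internally consistent, which a quick check shows (illustrative arithmetic only, using just the numbers from the text):

```python
# Fuel economy (miles per gallon) times fuel flow (gallons per hour)
# gives the distance covered per hour, i.e. the cruise speed in mph.

def implied_speed_mph(miles_per_gallon: float, gallons_per_hour: float) -> float:
    """Cruise speed implied by fuel economy and fuel flow."""
    return miles_per_gallon * gallons_per_hour

speed = implied_speed_mph(2.6, 70)  # Lindbergh's figures: 2.6 mpg at 70 gal/h
print(f"implied cruise speed: {speed:.0f} mph")  # ≈ 182 mph
```

An indicated airspeed in that neighborhood is plausible for a lean long-range cruise, which is why the economy and flow figures hang together.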
While with the 475th, he held training classes and took part in a number of Army Air Corps combat missions. On 28 July 1944, Lindbergh shot down a Mitsubishi Ki-51 "Sonia" flown by the veteran commander of 73rd Independent Flying Chutai, Imperial Japanese Army Captain Saburo Shimada. In an extended, twisting dogfight in which many of the participants ran out of ammunition, Shimada turned his aircraft directly toward Lindbergh who was just approaching the combat area. Lindbergh fired in a defensive reaction brought on by Shimada's apparent head-on ramming attack. Hit by cannon and machine gun fire, the "Sonia's" propeller visibly slowed, but Shimada held his course. Lindbergh pulled up at the last moment to avoid collision as the damaged "Sonia" went into a steep dive, hit the ocean and sank. Lindbergh's wingman, ace Joseph E. "Fishkiller" Miller, Jr., had also scored hits on the "Sonia" after it had begun its fatal dive, but Miller was certain the kill credit was Lindbergh's. The unofficial kill was not entered in the 475th's war record. On 12 August 1944, Lindbergh left Hollandia to return to the United States.
The third-ranking American ace of the Pacific theater, Charles H. MacDonald, flew a Lightning against the Japanese and scored 27 kills in his famous aircraft, the "Putt Putt Maru".
Martin James Monti was an American pilot who defected to the Axis powers in a stolen F-5E Lightning, which was handed over to the "Luftwaffe" "Zirkus Rosarius" for testing afterward.
Robin Olds was the last P-38 ace in the Eighth Air Force and the last in the ETO. Flying a P-38J, he downed five German fighters on two separate missions over France and Germany. He subsequently transitioned to P-51s and scored seven more kills. After World War II, he flew F-4 Phantom IIs in Vietnam, ending his career as brigadier general with 16 kills.
Ross is a decorated World War II pilot who flew 96 missions for the U.S. Army Air Forces under the U.S. 8th Air Force's 7th Reconnaissance Group in the 22nd Reconnaissance Squadron. Ross flew the Lockheed P-38 Lightning as a photoreconnaissance pilot out of RAF Mount Farm in England during the war. He received 11 medals and was awarded the Distinguished Flying Cross twice for missions that were integral to Allied victory at the Battle of the Bulge.
At midday on 31 July 1944, the noted aviation pioneer and writer Antoine de Saint-Exupéry ("Night Flight", "Wind, Sand and Stars" and "The Little Prince") vanished in his P-38 of the French "Armée de l'Air's" "Groupe de Chasse II/33", after departing Borgo-Porreta, Corsica. His physical and mental health had been deteriorating. Saint-Exupéry was said to be intermittently subject to depression and there had been talk of taking him off flying status. He was on a flight over the Mediterranean, from Corsica to mainland France, in an unarmed F-5B photoreconnaissance variant of the P-38J, described as being a "war-weary, non-airworthy craft".
In 2000, a French scuba diver found the partial remnants of a Lightning spread over several thousand square meters of the Mediterranean seabed off the coast of Marseille. In April 2004, the recovered component serial numbers were confirmed as being from Saint-Exupéry's F-5B Lightning. Only a small amount of the aircraft's wreckage was recovered. In June 2004, the recovered parts and fragments were given to the Air and Space Museum of France in Le Bourget, Paris, where Saint-Exupéry's life is commemorated in a special exhibit.
In 1981 and also in 2008, two Luftwaffe fighter pilots, respectively Robert Heichele and Horst Rippert, separately claimed to have shot down Saint-Exupéry's P-38. Both claims were unverifiable and possibly self-promotional, as neither of their units' combat records of action from that period made any note of such a shoot-down.
A P-38 piloted by Clay Tice was the first American aircraft to land in Japan after VJ Day, when he and his wingman set down on Nitagahara because his wingman was low on fuel.
The RAF's notable photoreconnaissance pilot, Wing Commander Adrian Warburton (DSO w/Bar, DFC w/2 Bars) was posted as the RAF Liaison Officer to the USAAF 7th Photographic Reconnaissance Group. On 12 April 1944 he took off in a P-38 with others to photograph targets in Germany. Warburton failed to arrive at the rendezvous point and was never seen again. In 2003, his remains were recovered in Germany from his wrecked aircraft.
Harley Earl arranged for several of his designers to view a YP-38 prototype shortly before World War II, and its design directly inspired the tail fins of the 1948–1949 Cadillac.
The P-38 was also the inspiration for Raymond Loewy and his design team at Studebaker for the 1950 and 1951 model-year Studebakers.
The whine of the speeder bike engines in "Return of the Jedi" was partly achieved by recording the engine noise of a P-38, combined with that of a North American P-51 Mustang.
Prayer
Prayer is an invocation or act that seeks to activate a rapport with an object of worship through deliberate communication. In the narrow sense, the term refers to an act of supplication or intercession directed towards a deity (a god), or a deified ancestor. More generally, prayer can also have the purpose of thanksgiving or praise, and in comparative religion is closely associated with more abstract forms of meditation and with charms or spells.
Prayer can take a variety of forms: it can be part of a set liturgy or ritual, and it can be performed alone or in groups. Prayer may take the form of a hymn, incantation, formal creedal statement, or a spontaneous utterance by the praying person.
The act of prayer is attested in written sources as early as 5000 years ago. Today, most major religions involve prayer in one way or another; some ritualize the act, requiring a strict sequence of actions or placing a restriction on who is permitted to pray, while others teach that prayer may be practised spontaneously by anyone at any time.
Scientific studies regarding the use of prayer have mostly concentrated on its effect on the healing of sick or injured people. The efficacy of prayer in faith healing has been evaluated in numerous studies, with contradictory results.
The English term "prayer" is from Medieval Latin "precaria", "petition, prayer". The Vulgate Latin is "oratio", which translates Greek προσευχή, in turn the Septuagint translation of Biblical Hebrew "tĕphillah".
Various spiritual traditions offer a wide variety of devotional acts. There are morning and evening prayers, graces said over meals, and reverent physical gestures. Some Christians bow their heads and fold their hands. Some Native Americans regard dancing as a form of prayer. Some Sufis whirl. Hindus chant mantras. Jewish prayer may involve swaying back and forth and bowing. Muslim prayer involves bowing, kneeling and prostration. Quakers keep silent. Some pray according to standardized rituals and liturgies, while others prefer extemporaneous prayers. Still others combine the two.
Friedrich Heiler is often cited in Christian circles for his systematic "Typology of Prayer" which lists six types of prayer: primitive, ritual, Greek cultural, philosophical, mystical, and prophetic. Some forms of prayer require a prior ritualistic form of cleansing or purification such as in ghusl and wudhu.
Prayer may be done privately and individually, or it may be done corporately in the presence of fellow believers. Prayer can be incorporated into a daily "thought life", in which one is in constant communication with a god. Some people pray throughout all that is happening during the day and seek guidance as the day progresses. This is regarded as a requirement in several Christian denominations, although enforcement is neither possible nor desirable. There can be many different answers to prayer, just as there are many ways to interpret an answer to a question, if an answer comes at all. Some may experience audible, physical, or mental epiphanies. If an answer does come, the time and place of its arrival are considered random.
Some outward acts that sometimes accompany prayer are: anointing with oil; ringing a bell; burning incense or paper; lighting a candle or candles; facing a specific direction (i.e. towards Mecca or the East); making the sign of the cross. One less noticeable act related to prayer is fasting.
A variety of body postures may be assumed, often with specific meaning (mainly respect or adoration) associated with them: standing; sitting; kneeling; prostrate on the floor; eyes opened; eyes closed; hands folded or clasped; hands upraised; holding hands with others; the laying on of hands; and others. Prayers may be recited from memory, read from a book of prayers, or composed spontaneously as they are prayed. They may be said, chanted, or sung. They may be with musical accompaniment or not. There may be a time of outward silence while prayers are offered mentally. Often, there are prayers to fit specific occasions, such as the blessing of a meal, the birth or death of a loved one, other significant events in the life of a believer, or days of the year that have special religious significance. Details corresponding to specific traditions are outlined below.
Anthropologically, the concept of prayer is closely related to that of surrender and supplication.
The traditional posture of prayer in medieval Europe is kneeling or supine with clasped hands; in antiquity it was more typically with raised hands. The early Christian prayer posture was standing, looking up to heaven, with outspread arms and bare head. This is the pre-Christian, pagan prayer posture (except for the bare head, which was prescribed for males in 1 Corinthians 11:4; in Roman paganism, the head had to be covered in prayer). Certain Cretan and Cypriote figures of the Late Bronze Age, with arms raised, have been interpreted as worshippers. Their posture is similar to the "flight" posture, a crouching posture with raised hands, observed in schizophrenic patients and related to the universal "hands up" gesture of surrender. The kneeling posture with clasped hands appears to have been introduced only at the beginning of the high medieval period, presumably adopted from a gesture of feudal homage.
Although prayer in its literal sense is not used in animism, communication with the spirit world is vital to the animist way of life. This is usually accomplished through a shaman who, through a trance, gains access to the spirit world and then relays the spirits' thoughts to the people. Other ways to receive messages from the spirits include using astrology or consulting fortune tellers and healers.
Some of the oldest extant literature, such as the Sumerian temple hymns of Enheduanna (c. 23rd century BC) are liturgy addressed to deities and thus technically "prayer". The Egyptian Pyramid Texts of about the same period similarly contain spells or incantations addressed to the gods. In the loosest sense, in the form of magical thinking combined with animism, prayer has been argued as representing a human cultural universal, which would have been present since the emergence of behavioral modernity, by anthropologists such as Sir Edward Burnett Tylor and Sir James George Frazer.
Reliable records are available for the polytheistic religions of the Iron Age, most notably Ancient Greek religion (which strongly influenced Roman religion). These religious traditions were direct developments of the earlier Bronze Age religions.
Ceremonial prayer was highly formulaic and ritualized.
In ancient polytheism, ancestor worship is indistinguishable from theistic worship (see also Euhemerism).
Vestiges of ancestor worship persist, to a greater or lesser extent, in modern religious traditions throughout the world, most notably in Japanese Shinto and in Chinese folk religion. The practices involved in Shinto prayer are heavily influenced by Buddhism; Japanese Buddhism has also been strongly influenced by Shinto in turn. Shinto prayers quite frequently consist of wishes or favors asked of the "kami", rather than lengthy praises or devotions. The practice of votive offering is also universal, and is attested at least since the Bronze Age. In Shinto, this takes the form of a small wooden tablet, called an "ema".
Prayers in Etruscan were used in the Roman world by augurs and other oracles long after Etruscan became a dead language. The Carmen Arvale and the Carmen Saliare are two specimens of partially preserved prayers that seem to have been unintelligible to their scribes, and whose language is full of archaisms and difficult passages.
Roman prayers and sacrifices were often envisioned as legal bargains between deity and worshipper. The Roman principle was expressed as "do ut des": "I give, so that you may give." Cato the Elder's treatise on agriculture contains many examples of preserved traditional prayers; in one, a farmer addresses the unknown deity of a possibly sacred grove, and sacrifices a pig in order to placate the god or goddess of the place and beseech his or her permission to cut down some trees from the grove.
Celtic, Germanic and Slavic religions are recorded much later, and much more fragmentarily, than the religions of classical antiquity. They nevertheless show substantial parallels to the better-attested religions of the Iron Age. In the case of Germanic religion, the practice of prayer is reliably attested, but no actual liturgy is recorded from the early (Roman era) period. An Old Norse prayer is on record in the form of a dramatization in skaldic poetry. This prayer is recorded in stanzas 2 and 3 of the poem "Sigrdrífumál", compiled in the 13th century "Poetic Edda" from earlier traditional sources, where the valkyrie Sigrdrífa prays to the gods and the earth after being woken by the hero Sigurd.
A prayer to Odin is mentioned in chapter 2 of the "Völsunga saga" where King Rerir prays for a child. In stanza 9 of the poem "Oddrúnargrátr", a prayer is made to "kind wights, Frigg and Freyja, and many gods". In chapter 21 of "Jómsvíkinga saga", wishing to turn the tide of the Battle of Hjörungavágr, Haakon Sigurdsson eventually finds his prayers answered by the goddesses Þorgerðr Hölgabrúðr and Irpa.
Folk religion in the medieval period produced syncretisms between pre-Christian and Christian traditions. An example is the 11th-century Anglo-Saxon charm "Æcerbot" for the fertility of crops and land, or the medical "Wið færstice". The 8th-century Wessobrunn Prayer has been proposed as a Christianized pagan prayer and compared to the pagan "Völuspá" and the Merseburg Incantations, the latter recorded in the 9th or 10th century but of much older traditional origins.
In Australian Aboriginal mythology, prayers to the "Great Wit" are performed by the "clever men" and "clever women", or "kadji". These Aboriginal shamans use maban or mabain, the material that is believed to give them their purported magical powers. The Pueblo Indians are known to have used prayer sticks, that is, sticks with feathers attached as supplicatory offerings. The Hopi Indians used prayer sticks as well, but they attached to it a small bag of sacred meal.
The most common form of prayer is to directly appeal to a deity to grant one's requests. Some have termed this as the social approach to prayer.
Atheist arguments against prayer are mostly directed against petitionary prayer in particular. Daniel Dennett argued that petitionary prayer might have the undesirable psychological effect of relieving a person of the need to take active measures.
This potential drawback manifests in extreme forms in such cases as Christian Scientists who rely on prayers instead of seeking medical treatment for family members for easily curable conditions which later result in death.
Christopher Hitchens (2012) argued that praying to a god which is omnipotent and all-knowing would be presumptuous. For example, he interprets Ambrose Bierce's definition of prayer by stating that "the man who prays is the one who thinks that god has arranged matters all wrong, but who also thinks that he can instruct god how to put them right."
In this view, prayer is not a conversation. Rather, it is meant to inculcate certain attitudes in the one who prays, but not to influence. Among Jews, this has been the approach of Rabbenu Bachya, Rabbi Yehuda Halevi, Joseph Albo, Samson Raphael Hirsch, and Joseph B. Soloveitchik. This view is expressed by Rabbi Nosson Scherman in the overview to the Artscroll Siddur (p. XIII).
Among Christian theologians, E.M. Bounds stated the educational purpose of prayer in every chapter of his book, "The Necessity of Prayer". Prayer books such as the Book of Common Prayer are both a result of this approach and an exhortation to keep it.
In this view, the ultimate goal of prayer is to help train a person to focus on divinity through philosophy and intellectual contemplation (meditation). This approach was taken by the Jewish scholar and philosopher Maimonides and the other medieval rationalists. It became popular in Jewish, Christian, and Islamic intellectual circles, but never became the most popular understanding of prayer among the laity in any of these faiths. In all three of these faiths today, a significant minority of people still hold to this approach.
In this approach, the purpose of prayer is to enable the person praying to gain a direct experience of the recipient of the prayer (or as close to direct as a specific theology permits). This approach is very significant in Christianity and widespread in Judaism (although less popular theologically). In Eastern Orthodoxy, this approach is known as hesychasm. It is also widespread in Sufi Islam, and in some forms of mysticism. It has some similarities with the rationalist approach, since it can also involve contemplation, although the contemplation is not generally viewed as being as rational or intellectual. Christian and Roman Catholic traditions also include an experiential approach to prayer within the practice of Lectio Divina, historically a Benedictine practice in which scripture is read aloud; actively meditated upon using the intellect (but not analysis), possibly using the mind to place the listener within a relationship or dialogue with the text that was read; a prayer spoken; and finally concluding with contemplatio, a more passive experiential approach than the preceding meditation, which is characterized by the Catechism of the Catholic Church as an experience of consciously being attentive and having a silent love toward God, which the individual experiences without demanding to receive an experience. The experience of God within Christian mysticism has been contrasted with the concept of experiential religion or mystical experience: because of a long history of authors living and writing about experience with the divine in a manner that identifies God as unknowable and ineffable, the language of such ideas could be characterized paradoxically as "experiential", yet without the phenomena of experience.
The notion of "religious experience" can be traced back to William James, who used the term in his book "The Varieties of Religious Experience". The origins of the use of this term can be dated further back.
In the 18th, 19th, and 20th centuries, several historical figures put forth very influential views that religion and its beliefs can be grounded in experience itself. While Kant held that moral experience justified religious beliefs, John Wesley in addition to stressing individual moral exertion thought that the religious experiences in the Methodist movement (paralleling the Romantic Movement) were foundational to religious commitment as a way of life.
Wayne Proudfoot traces the roots of the notion of "religious experience" to the German theologian Friedrich Schleiermacher (1768–1834), who argued that religion is based on a feeling of the infinite. The notion of "religious experience" was used by Schleiermacher and Albert Ritschl to defend religion against the growing scientific and secular critique, and defend the view that human (moral and religious) experience justifies religious beliefs.
Such religious empiricism would be later seen as highly problematic and was – during the period in-between world wars – famously rejected by Karl Barth. In the 20th century, religious as well as moral experience as justification for religious beliefs still holds sway. Some influential modern scholars holding this liberal theological view are Charles Raven and the Oxford physicist/theologian Charles Coulson.
The notion of "religious experience" was adopted by many scholars of religion, of whom William James was the most influential.
The notion of "experience" has been criticised. Robert Sharf points out that "experience" is a typical Western term, which has found its way into Asian religiosity via western influences. The notion of "experience" introduces a false notion of duality between "experiencer" and "experienced", whereas the essence of kensho is the realisation of the "non-duality" of observer and observed. "Pure experience" does not exist; all experience is mediated by intellectual and cognitive activity. The specific teachings and practices of a specific tradition may even determine what "experience" someone has, which means that this "experience" is not the "proof" of the teaching, but a "result" of the teaching. A pure consciousness without concepts, reached by "cleaning the doors of perception", would be an overwhelming chaos of sensory input without coherence.
In the Hebrew Bible prayer is an evolving means of interacting with God, most frequently through a spontaneous, individual, unorganized form of petitioning and/or thanking. Standardized prayer as practiced today did not exist, although beginning in Deuteronomy the Bible lays the groundwork for organized prayer, including basic liturgical guidelines; by the Bible's later books, prayer had evolved to a more standardized form, though one still radically different from that practiced by modern Jews.
Individual prayer is described by the Tanakh two ways. The first of these is when prayer is described as occurring, and a result is achieved, but no further information regarding a person's prayer is given. In these instances, such as with Isaac, Moses, Samuel, and Job, the act of praying is a method of changing a situation for the better. The second way in which prayer is depicted is through fully fleshed out episodes of prayer, where a person's prayer is related in full. Many famous biblical personalities have such a prayer, including every major character from Hannah to Hezekiah.
In the New Testament prayer is presented as a positive command. The People of God are challenged to include Christian prayer in their everyday life, even in the busy struggles of marriage, as it brings people closer to God.
Jesus encouraged his disciples to pray in secret in their private rooms, using the Lord's Prayer, as a humble response to the prayer of the Pharisees, whose practices in prayer were regarded as impious by the New Testament writers.
Throughout the New Testament, prayer is shown to be God's appointed method by which we obtain what He has to bestow. Further, the Book of James says that the lack of blessings in life results from a failure to pray. Jesus healed through prayer and expected his followers to do so also. The apostle Paul wrote to the churches of Thessalonica to "Pray continually."
Observant Jews pray three times a day, Shacharit, Mincha, and Ma'ariv with lengthier prayers on special days, such as the Shabbat and Jewish holidays including Musaf and the reading of the Torah. The siddur is the prayerbook used by Jews all over the world, containing a set order of daily prayers. Jewish prayer is usually described as having two aspects: "kavanah" (intention) and "keva" (the ritualistic, structured elements).
The most important Jewish prayers are the Shema Yisrael ("Hear O Israel") and the Amidah ("the standing prayer").
Communal prayer is preferred over solitary prayer, and a quorum of ten adult males (a "minyan") is considered by Orthodox Judaism a prerequisite for several communal prayers.
There are also many other ritualistic prayers a Jew performs during their day, such as washing before eating bread, washing after one wakes up in the morning, and doing grace after meals.
In this view, the ultimate goal of prayer is to help train a person to focus on divinity through philosophy and intellectual contemplation. This approach was taken by Maimonides and the other medieval rationalists. One example of this approach to prayer is noted by Rabbi Steven Weil, who was appointed the Orthodox Union's Executive-Vice President in 2009. He notes that the word "prayer" is a derivative of the Latin "precari", which means "to beg". The Hebrew equivalent "tefilah", however, along with its root "pelel" or its reflexive "l’hitpallel", means the act of self-analysis or self-evaluation. This approach is sometimes described as the person praying having a dialogue or conversation with God.
In this view, prayer is not a conversation. Rather, it is meant to inculcate certain attitudes in the one who prays, but not to influence God. This has been the approach of Rabbenu Bachya, Yehuda Halevy, Joseph Albo, Samson Raphael Hirsch, and Joseph Dov Soloveitchik. This view is expressed by Rabbi Nosson Scherman in the overview to the Artscroll Siddur (p. XIII); note that Scherman goes on to also affirm the Kabbalistic view (see below).
Kabbalah uses a series of "kavanot", directions of intent, to specify the path the prayer ascends in the dialog with God, to increase its chances of being answered favorably. Kabbalists ascribe a higher meaning to the purpose of prayer, which is no less than affecting the very fabric of reality itself, restructuring and repairing the universe in a real fashion. In this view, every word of every prayer, and indeed, even every letter of every word, has a precise meaning and a precise effect. Prayers thus literally affect the mystical forces of the universe, and repair the fabric of creation.
Among Jews, this approach has been taken by the Chassidei Ashkenaz (German pietists of the Middle Ages), the Arizal's Kabbalist tradition, Ramchal, most of Hassidism, the Vilna Gaon, and Jacob Emden.
Christian prayers are quite varied. They can be completely spontaneous, or read entirely from a text, like the Anglican Book of Common Prayer. The most common prayer among Christians is the Lord's Prayer, which according to the gospel accounts is how Jesus taught his disciples to pray. The Lord's Prayer is a model for prayers of adoration, confession and petition in Christianity. In medieval England, prayers (particularly the "paternoster") were frequently used as a measure of time in medical and culinary recipe books.
Christians generally pray to God or to the Father. Some Christians (e.g., Catholics, Orthodox) will also ask the righteous in heaven and "in Christ," such as Virgin Mary or other saints to intercede by praying on their behalf (intercession of saints). Formulaic closures include "through our Lord Jesus Christ, Your Son, who lives and reigns with You, in the unity of the Holy Spirit, God, through all the ages of ages," and "in the name of the Father, and the Son, and the Holy Spirit."
It is customary among Protestants to end prayers with "In Jesus' name, Amen" or "In the name of Christ, Amen." However, the most commonly used closure in Christianity is simply "Amen" (from a Hebrew adverb used as a statement of affirmation or agreement, usually translated as "so be it").
In the Western or Latin Rite of the Roman Catholic Church, probably the most common devotion is the Rosary; in the Eastern Church (the Eastern rites of the Catholic Church and the Orthodox Church), it is the Jesus Prayer. The Jesus Prayer is also often repeated as part of the meditative hesychasm practice in Eastern Christianity.
Roman Catholic tradition includes specific prayers and devotions as acts of reparation which do not involve a petition for a living or deceased beneficiary, but aim to repair the sins of others, e.g. for the repair of the sin of blasphemy performed by others.
Other forms of prayer among Catholics include meditative prayer, contemplative prayer and infused prayer, discussed at length by the Catholic saints St. John of the Cross and St. Theresa of Jesus.
In Pentecostal congregations, prayer is often accompanied by speaking in an unknown tongue, a practice now known as glossolalia. Practitioners of Pentecostal glossolalia may claim that the languages they speak in prayer are real foreign languages, and that the ability to speak those languages spontaneously is a gift of the Holy Spirit. Some people outside of the movement, however, have offered dissenting views. George Barton Cutten suggested that glossolalia was a sign of mental illness. Felicitas Goodman suggested that tongue speakers were under a form of hypnosis. Others suggest that it is a learned behaviour. Some of these views have allegedly been refuted.
Christian Science teaches that prayer is a spiritualization of thought or an understanding of God and of the nature of the underlying spiritual creation. Adherents believe that this can result in healing, by bringing spiritual reality into clearer focus in the human scene. The world as it appears to the senses is regarded as a distorted version of the world of spiritual ideas. Prayer can heal the distortion. Christian Scientists believe that prayer does not change the spiritual creation but gives a clearer view of it, and the result appears in the human scene as healing: the human picture adjusts to coincide more nearly with the divine reality. Christian Scientists do not practice intercessory prayer as it is commonly understood, and they generally avoid combining prayer with medical treatment in the belief that the two practices tend to work against each other. Prayer works through love: the recognition of God's creation as spiritual, intact, and inherently lovable.
The Arabic word for prayer is "salah". In Islam, there are five daily obligatory prayers that are considered as one of the pillars of the religion. The command to ritual prayer occurs repeatedly in the Quran. The prayer is performed by the person while they are facing the Kaaba in Mecca. There is the "call for prayer" ("adhan"), where the "muezzin" calls for all the followers to stand together for the prayer. The prayer consists of actions such as glorifying and praising God (such as mentioning ‘Allāhu Akbar’ (God is Great)) while standing, recitation of chapters of the Quran (such as the opening chapter of the book ("Al-Fatiha")), bowing down then praising God, prostrating ("sujud") then again praising God and it ends with the words: "Peace be with you and God’s mercy". During the prayer, a Muslim cannot talk or do anything else besides pray. Once the prayer is complete, one can offer personal prayers or supplications to God for their needs that are known as "dua". There are many standard invocations in Arabic to be recited at various times ("e.g." after the prayer) and for various occasions ("e.g." for one's parents) with manners and etiquette such as before eating. Muslims may also say "dua" in their own words and languages for any issue they wish to communicate with God in the hope that God will answer their prayers. Certain Shi'a sects pray the five daily prayers divided into three separate parts of the day, providing several Hadith as supporting evidence; although according to Shia Islam, it is also permissible to pray at five times.
Bahá'u'lláh, the Báb, and `Abdu'l-Bahá wrote many prayers for general use, and some for specific occasions, including for unity, detachment, spiritual upliftment, and healing among others. Bahá'ís are also required to recite each day one of three obligatory prayers composed by Bahá'u'lláh. The believers have been enjoined to face in the direction of the Qiblih when reciting their Obligatory Prayer. The longest obligatory prayer may be recited at any time during the day; another, of medium length, is recited once in the morning, once at midday, and once in the evening; and the shortest can be recited anytime between noon and sunset. Bahá'ís also read from and meditate on the scriptures every morning and evening.
In both Buddhism and Hinduism, the repetition of mantras is closely related to the practice of repetitive prayer in Western religion (rosary, Jesus prayer). Many of the most widespread Hindu and Buddhist mantras are in origin invocations of deities, e.g. Gayatri Mantra dedicated to Savitr, Pavamana Mantra to Soma Pavamana, and many of the Buddhist Dhāraṇī originate as recitations of lists of names or attributes of deities. Most of the shorter Buddhist mantras originate as the invocation of the name of a specific deity or "bodhisattva", such as "Om mani padme hum" being in origin the invocation of a "bodhisattva" called "Maṇipadma". However, from an early time these mantras were interpreted in the context of mystical sound symbolism. The most extreme example of this is the om syllable, which as early as in the Aitareya Brahmana was claimed as equivalent to the entire Vedas (collection of ritual hymns).
In the earliest Buddhist tradition, the Theravada, and in the later Mahayana tradition of Zen (or Chán), prayer plays only an ancillary role. It is largely a ritual expression of wishes for success in the practice and in helping all beings.
The skillful means (Sanskrit: "upāya") of the transfer of merit (Sanskrit: "pariṇāmanā") is an evocation and prayer. Moreover, indeterminate buddhas are available for intercession as they reside in awoken-fields (Sanskrit: "buddha-kshetra").
The "nirmānakāya" of an awoken-field is what is generally known and understood as a mandala. The opening and closing of the ring (Sanskrit: "maṇḍala") is an active prayer. An active prayer is a mindful activity, an activity in which mindfulness is not just cultivated but "is". A common prayer is "May the merit of my practice, adorn Buddhas' Pure Lands, requite the fourfold kindness from above, and relieve the suffering of the three life-journeys below. Universally wishing sentient beings, Friends, foes, and karmic creditors, all to activate the bodhi mind, and all to be reborn in the Pure Land of Ultimate Bliss." (願以此功德 莊嚴佛淨土 上報四重恩 下濟三途苦 普願諸眾生 冤親諸債主 悉發菩提心 同生極樂國)
The Generation Stage (Sanskrit: "utpatti-krama") of Vajrayana involves prayer elements.
The Tibetan Buddhism tradition emphasizes an instructive and devotional relationship to a guru; this may involve devotional practices known as guru yoga which are congruent with prayer. It also appears that Tibetan Buddhism posits the existence of various deities, but the peak view of the tradition is that the deities or "yidam" are no more existent or real than the continuity (Sanskrit: "santana"; refer mindstream) of the practitioner, environment and activity. But how practitioners engage "yidam" or tutelary deities will depend upon the level or more appropriately "yana" at which they are practicing. At one level, one may pray to a deity for protection or assistance, taking a more subordinate role. At another level, one may invoke the deity, on a more equal footing. And at a higher level one may deliberately cultivate the idea that one has become the deity, whilst remaining aware that its ultimate nature is "śūnyatā". The views of the more esoteric "yana" are impenetrable for those without direct experience and empowerment.
Pure Land Buddhism emphasizes the recitation by devotees of prayer-like mantras, a practice often called "Nembutsu". On one level it is said that reciting these mantras can ensure rebirth into a "Sambhogakāya" land (Sanskrit: "buddha-kshetra") after bodily dissolution, a sheer ball spontaneously co-emergent to a buddha's enlightened intention. According to Shinran, the founder of the Pure Land Buddhism tradition that is most prevalent in the US, "for the long haul nothing is as efficacious as the Nembutsu." On another, the practice is a form of meditation aimed at achieving realization.
But beyond all these practices the Buddha emphasized the primacy of individual practice and experience. He said that supplication to gods or deities was not necessary. Nevertheless, today many lay people in East Asian countries pray to the Buddha in ways that resemble Western prayer—asking for intervention and offering devotion.
Hinduism has incorporated many kinds of prayer (Sanskrit: "prārthanā"), from fire-based rituals to philosophical musings. While chanting involves 'by dictum' recitation of timeless verses or verses with timings and notations, "dhyanam" involves deep meditation (however short or long) on the preferred deity/God. Again, the objects to which prayers are offered could be persons referred to as "devtas", the trinity, incarnations of either "devtas" or the trinity, or simply plain formless meditation as practiced by the ancient sages. These prayers can be directed to fulfilling personal needs or deep spiritual enlightenment, and also for the benefit of others. Ritual invocation was part and parcel of the Vedic religion and as such permeated their sacred texts. Indeed, the highest sacred texts of the Hindus, the Vedas, are a large collection of mantras and prayer rituals. Classical Hinduism came to focus on extolling a single supreme force, Brahman, that is made manifest in several lower forms as the familiar gods of the Hindu pantheon. Hindus in India have numerous devotional movements. Hindus may pray to the highest absolute God Brahman, or more commonly to its three manifestations, a creator god called Brahma, a preserver god called Vishnu and a destroyer god (so that the creation cycle can start afresh) Shiva, and at the next level to Vishnu's avatars (earthly appearances) Rama and Krishna or to many other male or female deities. Typically, Hindus pray with their hands (the palms) joined together in "pranam". The hand gesture is similar to the popular Indian greeting "namaste".
The "Ardās" (Punjabi: ਅਰਦਾਸ) is a Sikh prayer that is done before performing or after undertaking any significant task; after reciting the daily "Banis" (prayers); or completion of a service like the "Paath" (scripture reading/recitation), "kirtan" (hymn-singing) program or any other religious program. In Sikhism, these prayers are also said before and after eating. The prayer is a plea to God to support and help the devotee with whatever he or she is about to undertake or has done.
The "Ardas" is usually always done standing up with folded hands. The beginning of the "Ardas" is strictly set by the tenth Sikh Guru, Guru Gobind Singh. When it comes to conclusion of this prayer, the devotee uses words like "Waheguru please bless me in the task that I am about to undertake" when starting a new task or "Akal Purakh, having completed the hymn-singing, we ask for your continued blessings so that we can continue with your memory and remember you at all times", etc. The word "Ardās" is derived from Persian word 'Arazdashat', meaning a request, supplication, prayer, petition or an address to a superior authority.
Ardās is a unique prayer based on the fact that it is one of the few well-known prayers in the Sikh religion that was not written in its entirety by the Gurus. The Ardās cannot be found within the pages of the Guru Granth Sahib because it is a continually changing devotional text that has evolved over time in order for it to encompass the feats, accomplishments, and feelings of all generations of Sikhs within its lines. Taking the various derivation of the word Ardās into account, the basic purpose of this prayer is an appeal to Waheguru for his protection and care, as well as being a plea for the welfare and prosperity of all mankind, and a means for the Sikhs to thank Waheguru for all that he has done.
Wiccan prayers can include meditation, rituals and incantations. Wiccans see prayers as a form of communication with the God and Goddess. Such communication may include prayers for "esbat" and "sabbat" celebrations, for dinner, for pre-dawn times or for one's own or others' safety, for healing or for the dead.
In Raëlism rites and practises vary from initiation ceremonies to sensual meditation. An initiation ceremony usually involves a Raelian putting water on the forehead of a new member. Such ceremonies take place on certain special days on the Raelian calendar. Sensual meditation techniques include breathing exercises and various forms of erotic meditation.
In Eckankar, one of the basic forms of prayer includes singing the word "HU" (pronounced as "hue"), a holy name of God. ECKists may do this with eyes closed or open, aloud or silently. Practitioners may experience the divine ECK or Holy Spirit.
Practitioners of theurgy and Western esotericism may practice a form of ritual which utilizes both pre-sanctioned prayers and names of God, and prayers "from the heart" that, when combined, allow the participant to ascend spiritually, and in some instances, induce a trance in which God or other spiritual beings may be realized. Very much as in Hermetic Qabalah and orthodox Kabbalah, it is believed that prayer can influence both the physical and non-physical worlds. The use of ritualistic signs and names are believed to be archetypes in which the subconscious may take form as the Inner God, or another spiritual being, and the "prayer from the heart" to be that spiritual force speaking through the participant.
In Thelema (which includes both theist as well as atheist practitioners) adherents share a number of practices that are forms of individual prayer, including basic yoga; (asana and pranayama); various forms of ritual magick; rituals of one's own devising (often based upon a syncretism of religions, or Western Esotericism, such as the Lesser Banishing Ritual of the Pentagram and Star Ruby); and performance of Liber Resh vel Helios (aka Liber 200), which consists of four daily adorations to the sun (often consisting of four hand/body positions and recitation of a memorized song, normally spoken, addressing different godforms identified with the sun).
While no dogma within Thelema expresses the purpose behind any individual aspirant who chooses to perform "Resh", the practice of "Resh" is not a simple petition toward the sun, nor a form of "worshiping" the celestial body that we call the Sun. Instead, it uses the positioning of that source of light, which enables life on our planet, as well as mythological images of that solar force, so that the individual can perform the prayer, possibly furthering a self-identification with the sun, so "that repeated application of the Liber Resh adorations expands the consciousness of the individual by compelling him to take a different perspective, by inducing him to 'look at things from the point of view of the Sun' [...]".
Prayer is often used as a means of faith healing in an attempt to use religious or spiritual means to prevent illness, cure disease, or improve health.
Scientific studies regarding the use of prayer have mostly concentrated on its effect on the healing of sick or injured people. Meta-studies have been performed showing evidence only for no effect or a potentially small effect. For instance, a 2006 meta-analysis of 14 studies concluded that there is "no discernible effect", while a 2007 systematic review of studies on intercessory prayer reported inconclusive results, noting that seven of 17 studies had "small, but significant, effect sizes"; the review also noted that the most methodologically rigorous studies failed to produce significant findings. Some studies have indicated increased medical complications in groups receiving prayer over those without.
The efficacy of petition in prayer for physical healing to a deity has been evaluated in numerous other studies, with contradictory results. There has been some criticism of the way the studies were conducted.
Some attempt to heal by prayer, mental practices, spiritual insights, or other techniques, claiming they can summon divine or supernatural intervention on behalf of the ill. Others advocate that ill people may achieve healing through prayer performed by themselves. According to the varied beliefs of those who practice it, faith healing may be said to afford gradual relief from pain or sickness or to bring about a sudden "miracle cure", and it may be used in place of, or in tandem with, conventional medical techniques for alleviating or curing diseases. Faith healing has been criticized on the grounds that those who use it may delay seeking potentially curative conventional medical care. This is particularly problematic when parents use faith healing techniques on children.
In 1872, Francis Galton conducted a famous statistical experiment to determine whether prayer had a physical effect on the external environment. Galton hypothesized that if prayer was effective, members of the British Royal family would live longer, given that thousands prayed for their wellbeing every Sunday. He therefore compared longevity in the British Royal family with that of the general population, and found no difference. While the experiment was probably intended to satirize, and suffered from a number of confounders, it set the precedent for a number of different studies, the results of which are contradictory.
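Galton's design amounts to a two-sample comparison of mean lifespans between a prayed-for group and the general population. A minimal sketch of such a comparison using Welch's t statistic is shown below; the lifespan numbers are made-up illustrative values, not Galton's actual data.

```python
from statistics import mean, stdev
import math

# Hypothetical lifespans in years (illustrative only, not Galton's data).
royals = [64, 70, 59, 72, 66, 61, 68, 63, 74, 60]
general = [65, 69, 62, 71, 67, 60, 70, 64, 73, 58]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(var_a + var_b)

t = welch_t(royals, general)
print(f"t = {t:.3f}")  # a value near zero indicates no detectable difference
```

With samples this similar the statistic lands near zero, mirroring Galton's null finding; a real analysis would also compute degrees of freedom and a p-value (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`).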
Two studies claimed that patients who are being prayed for recover more quickly or more frequently, although critics have claimed that the methodology of such studies is flawed, and the perceived effect disappears when controls are tightened. One such study, with a double-blind design and about 500 subjects per group, was published in 1988; it suggested that intercessory prayer by born again Christians had a statistically significant positive effect on a coronary care unit population. Critics contend that there were severe methodological problems with this study. Another such study was reported by Harris et al. Critics also claim that the 1988 study was not fully double-blinded, and that in the Harris study, patients actually had a longer hospital stay in the prayer group, if one discounts the patients in both groups who left before prayers began, although the Harris study did demonstrate that the prayed-for patients on average received lower course scores (indicating better recovery).
One of the largest randomized, blind clinical trials was a remote "retroactive" intercessory prayer study conducted in Israel by Leibovici. This study used 3393 patient records from 1990–96, and blindly assigned some of these to an intercessory prayer group. The prayer group had shorter hospital stays and duration of fever.
Several studies of prayer effectiveness have yielded null results. A 2001 double-blind study by the Mayo Clinic found no significant difference in the recovery rates between people who were (unbeknownst to them) assigned to a group that prayed for them and those who were not. Similarly, the MANTRA study conducted by Duke University found no differences in outcome of cardiac procedures as a result of prayer. In another similar study published in the "American Heart Journal" in 2006, Christian intercessory prayer when reading a scripted prayer was found to have no effect on the recovery of heart surgery patients; however, the study found patients who had knowledge of receiving prayer had slightly higher instances of complications than those who did not know if they were being prayed for or those who did not receive prayer. Another 2006 study suggested that prayer actually had a significant negative effect on the recovery of cardiac bypass patients, resulting in more frequent deaths and slower recovery time for those patients who received prayers.
Many believe that prayer can aid in recovery, not due to divine influence but due to psychological and physical benefits. It has also been suggested that if a person knows that he or she is being prayed for it can be uplifting and increase morale, thus aiding recovery. (See Subject-expectancy effect.) Many studies have suggested that prayer can reduce physical stress, regardless of the god or gods a person prays to, and this may be true for many worldly reasons. According to a study by Centra State Hospital, "the psychological benefits of prayer may help reduce stress and anxiety, promote a more positive outlook, and strengthen the will to live." Other practices such as yoga, t'ai chi, and meditation may also have a positive impact on physical and psychological health.
Others feel that the concept of conducting prayer experiments reflects a misunderstanding of the purpose of prayer. The previously mentioned study published in the "American Heart Journal" indicated that some of the intercessors who took part in it complained about the scripted nature of the prayers that were imposed on them, saying that this is not the way they usually conduct prayer:
One scientific movement attempts to track the physical effects of prayer through neuroscience. Leaders in this movement include Andrew Newberg, an Associate Professor at the University of Pennsylvania. In Newberg's brain scans, monks, priests, nuns, sisters and gurus alike have exceptionally focused attention and compassion sites, a result of the engagement of the brain's frontal lobe (Newberg, 2009). Newberg believes that anybody can connect to the supernatural with practice, and that those without religious affiliations benefit from the connection to the metaphysical as well. Newberg also states that further evidence of humans' need for metaphysical relationships is that, as science has advanced, spirituality has not decreased. He believes that at the end of the 18th century, when the scientific method began to consume the human mind, religion could have vanished; however, two hundred years later, the perception of spirituality, in many instances, appears to be gaining in strength (2009). Newberg's research also provides the connection between prayer and meditation and health. By understanding how the brain works during religious experiences and practices, Newberg's research shows that the brain changes during these practices, allowing an understanding of how religion affects psychological and physical health (2009). For example, brain activity during meditation indicates that people who frequently practice prayer or meditation experience lower blood pressure, lower heart rates, decreased anxiety, and decreased depression.
One study found that prayer combined with IVF treatment nearly doubled the number of women who were successfully pregnant, and more than doubled the number of successful implantations.
Some modalities of alternative medicine employ prayer. A survey released in May 2004 by the National Center for Complementary and Alternative Medicine, part of the National Institutes of Health in the United States, found that in 2002, 43% of Americans pray for their own health, 24% pray for others' health, and 10% participate in a prayer group for their own health.
Punjabi language
Punjabi is an Indo-Aryan language with more than 125 million native speakers in the Indian subcontinent and around the world. It is the native language of the Punjabi people, an ethnolinguistic group of the cultural region of Punjab, which encompasses northwest India and eastern Pakistan.
Punjabi is the most widely spoken language in Pakistan, the 11th most widely spoken language in India and the third most-spoken native language in the Indian subcontinent. It is the third most spoken language in the United Kingdom after the native British languages and Polish. It is also the fifth most-spoken native language in Canada after English, French, Mandarin and Cantonese. It is the twenty-sixth most spoken language in the United States, and tenth in Australia.
Punjabi is unusual among Indo-European languages in its use of lexical tone; see below for examples. Gurmukhi is the official script for the language in Punjab, India, while Shahmukhi is used in Punjab, Pakistan; other national and local scripts have also been in use historically and currently.
The word "Punjabi" (sometimes spelled "Panjabi") has been derived from the word Panj-āb, Persian for "Five Waters", referring to the five major eastern tributaries of the Indus River. The name of the region was introduced by the Turko-Persian conquerors of South Asia and was a translation of the Sanskrit name for the region, "Panchanada", which means "Land of the Five Rivers". "Panj" is cognate with Sanskrit ("") and Greek ("pénte") and Lithuanian "Penki" - "five", and "āb" is cognate with Sanskrit ("áp") and with the of . The historical Punjab region, now divided between India and Pakistan, is defined physiographically by the Indus River and these five tributaries. One of the five, the Beas River, is a tributary of another, the Sutlej.
Punjabi developed from Prakrit languages and later from Apabhramsha (Sanskrit for "corruption" or "corrupted speech"). From 600 BC Sanskrit was advocated as the official language, and Prakrit gave birth to many regional languages in different parts of India; all these languages are collectively called Prakrit. Paishachi Prakrit was one of these languages, spoken in north and north-western India, and Punjabi developed from it. Later, in northern India, Paishachi Prakrit gave rise to Paishachi Apabhramsha, a descendant of Prakrit. Punjabi emerged as an Apabhramsha, a degenerated form of Prakrit, in the 7th century AD and became stable by the 10th century.
Arabic and Persian influence in the historical Punjab region began with the late first-millennium Muslim conquests on the Indian subcontinent. The Persian language was introduced in the subcontinent a few centuries later by various Turko-Persian dynasties, and many Persian and Arabic words were incorporated into Punjabi. It is noteworthy that while the Hindustani language is divided into Hindi, with more Sanskritisation, and Urdu, with more Persianisation, Punjabi makes less use of Sanskrit and relies heavily on Persian and Arabic words, adopted with a liberal approach to language. Important words in Punjabi like ਅਰਦਾਸ and ਰਹਿਰਾਸ, and common words like ਨਹਿਰ, ਜ਼ਮੀਨ and ਗਜ਼ਲ, have all come from Persian; indeed, the sounds of ਜ਼, ਖ਼, ਸ਼ and ਫ਼ were borrowed from Persian. Later, it was lexically influenced by Portuguese (words like ਅਲਮਾਰੀ), Greek (words like ਦਾਮ), Turkish (words like ਕੈਂਚੀ, ਸੁਗਾਤ), Japanese (words like ਰਿਕਸ਼ਾ), Chinese (words like ਚਾਹ, ਲੀਚੀ, ਲੁਕਾਠ) and English (words like ਜੱਜ, ਅਪੀਲ, ਮਾਸਟਰ), though these influences have been minor in comparison to Persian and Arabic.
Punjabi is the most widely spoken language in Pakistan and the eleventh-most widely spoken in India, and it is also spoken by the Punjabi diaspora in various countries.
Punjabi is the most widely spoken language in Pakistan, being the native language of % of its population. It is the provincial language in the Punjab Province.
Beginning with the 1981 census, speakers of Saraiki and Hindko were no longer included in the total numbers for Punjabi, which could explain the apparent decrease.
Punjabi is spoken as a native language by about 33 million people in India. Punjabi is the official language of the Indian state of Punjab. It is additional official in Haryana and Delhi. Some of its major urban centres in northern India are Amritsar, Ludhiana, Chandigarh, Jalandhar, Ambala, Patiala, Bathinda, Hoshiarpur and Delhi.
Punjabi is also spoken as a minority language in several other countries where Punjabi people have emigrated in large numbers, such as the United States, Australia, the United Kingdom, and Canada, where it is the fourth-most-commonly used language.
There were 76 million Punjabi speakers in Pakistan in 2008, 33 million in India in 2011, 368,000 in Canada in 2006, and smaller numbers in other countries.
The Majhi dialect spoken around Amritsar and Lahore is Punjabi's prestige dialect. Majhi is spoken in the heart of Punjab in the region of Majha, which spans Lahore, Amritsar, Gurdaspur, Kasur, Tarn Taran, Faisalabad, Nankana Sahib, Pathankot, Okara, Pakpattan, Sahiwal, Narowal, Sheikhupura, Sialkot, Gujranwala and Gujrat districts. The official standard of Punjabi is based on Majhi.
Majhi retains the nasal consonants and , which have been superseded elsewhere by non-nasals and respectively.
Shahpuri dialect (also known as Sargodha dialect) is mostly spoken in Pakistani Punjab. Its name is derived from the former Shahpur District (now Shahpur Tehsil, part of Sargodha District). It is spoken throughout a widespread area, including Sargodha and Khushab districts and the neighbouring Mianwali and Bhakkar districts. It is mainly spoken in the area stretching from the western end of the Indus River to the Chenab River, crossing the Jhelum River.
Malwai is spoken in the southern part of Indian Punjab and also in Bahawalnagar and Vehari districts of Pakistan. Its main areas are Faridkot, Barnala, Ludhiana, Patiala, Ambala, Bathinda, Mansa, Sangrur, Malerkotla, Fazilka, Ferozepur and Moga. Malwa is the southern and central part of present-day Indian Punjab. It also includes the Punjabi-speaking northern areas of Haryana, viz. Ambala, Sirsa, Kurukshetra, Panchkula, etc. It is not to be confused with the Malvi language, which shares its name.
Doabi is spoken in both the Indian Punjab as well as parts of Pakistan Punjab owing to post-1947 migration of Muslim populace from East Punjab. The word "Do Aabi" means "the land between two rivers" and this dialect was historically spoken between the rivers of the Beas and the Sutlej in the region called Doaba. Regions it is presently spoken in include the Jalandhar, Hoshiarpur and Kapurthala districts in Indian Punjab, specifically in the areas known as the Dona and Manjki, as well as the Toba Tek Singh and Faisalabad districts in Pakistan Punjab where the dialect is known as Faisalabadi Punjabi.
Puadh is a region of Punjab and parts of Haryana between the Satluj and Ghaggar rivers. The part lying south, south-east and east of Rupnagar adjacent to Ambala District (Haryana) is Puadhi. Puadh extends from the part of Rupnagar District that lies near the Satluj to beyond the Ghaggar river in the east, up to Kala Amb, at the border of the states of Himachal Pradesh and Haryana. Parts of Fatehgarh Sahib district and parts of Patiala district, such as Rajpura, are also part of Puadh. The Puadhi dialect is spoken over a large area in present Punjab as well as Haryana. In Punjab, Kharar, Kurali, Ropar, Nurpurbedi, Morinda, Pail, Rajpura and Samrala are areas where Puadhi is spoken, and the dialect area also includes Pinjore, Kalka, Ismailabad and Pehowa, up to the Bangar area in Fatehabad district.
Jhangochi, spoken in Khanewal and Jhang districts, is actually a subdialect of Jatki/Jangli. The term 'Jhangochi' is limited in that it does not represent the whole Bar region of Punjab.
Jatki or Jangli is a dialect of the native tribes of areas whose names are often suffixed with 'Bar', after the jungle bars that existed before the irrigation system arrived at the start of the 20th century; for example, Sandal Bar, Kirana Bar, Neeli Bar and Ganji Bar. Native people call their dialect Jatki rather than Jangli. The Jatki dialect is mostly spoken by the indigenous peoples of Faisalabad, Jhang, Toba Tek Singh, Chiniot, Nankana Sahib, Hafizabad, Mandi Bahauddin, Sargodha, Sahiwal, Okara, Pakpattan, Bahawalnagar, Vehari and Khanewal districts of Pakistani Punjab. It is also spoken in a few areas of the Sheikhupura, Muzaffargarh, Lodhran and Bahawalpur districts, as well as in the Fazilka district of Indian Punjab.
West of the Chenab River in Jhang district of Pakistani Punjab, the Jhangochi dialect merges with Thalochi; the resulting dialect is Chenavari, whose name is derived from the Chenab River.
While a vowel length distinction between short and long vowels exists, reflected in modern Gurmukhi orthographical conventions, it is secondary to the vowel quality contrast between centralised vowels and peripheral vowels in terms of phonetic significance.
The peripheral vowels have nasal analogues.
The three retroflex consonants do not occur initially, and the nasals occur only as allophones of in clusters with velars and palatals. The well-established phoneme may be realised allophonically as the voiceless retroflex fricative in learned clusters with retroflexes. The phonemic status of the fricatives varies with familiarity with Hindustani norms, with the pairs , , , and systematically distinguished in educated speech. The retroflex lateral is most commonly analysed as an approximant as opposed to a flap.
Punjabi is a tonal language and in many words there is a choice of up to three tones, high-falling, low-rising, and level (neutral):
Level tone is found in about 75% of words and is described by some as absence of tone. There are also some words which are said to have rising tone in the first syllable and falling in the second. (Some writers describe this as a fourth tone.) However, a recent acoustic study of six Punjabi speakers in the United States found no evidence of a separate falling tone following a medial consonant.
It is considered that these tones arose when voiced aspirated consonants () lost their aspiration. At the beginning of a word they became voiceless unaspirated consonants () followed by a high-falling tone; medially or finally they became voiced unaspirated consonants (), preceded by a low-rising tone. (The development of a high-falling tone apparently did not take place in every word, but only in those which historically had a long vowel.)
The presence of an [h] (although the [h] is now silent or very weakly pronounced except word-initially) word-finally (and sometimes medially) also often causes a rising tone before it, for example "" "tea".
The Gurmukhi script which was developed in the 16th century has separate letters for voiced aspirated sounds, so it is thought that the change in pronunciation of the consonants and development of tones may have taken place since that time.
Some other languages in Pakistan have also been found to have tonal distinctions, including Burushaski, Gujari, Hindko, Kalami, Shina, and Torwali.
Punjabi has a canonical word order of SOV (subject–object–verb). It has postpositions rather than prepositions.
Punjabi distinguishes two genders, two numbers, and five cases of direct, oblique, vocative, ablative, and locative/instrumental. The ablative occurs only in the singular, in free variation with oblique case plus ablative postposition, and the locative/instrumental is usually confined to set adverbial expressions.
Adjectives, when declinable, are marked for the gender, number, and case of the nouns they qualify. There is also a T-V distinction.
Upon the inflectional case is built a system of particles known as postpositions, which parallel English's prepositions. It is their use with a noun or verb that necessitates the noun or verb taking the oblique case, and it is with them that the locus of grammatical function or "case-marking" then lies.
The Punjabi verbal system is largely structured around a combination of aspect and tense/mood. Like the nominal system, the Punjabi verb takes a single inflectional suffix, and is often followed by successive layers of elements like auxiliary verbs and postpositions to the right of the lexical base.
The grammar of the Punjabi language concerns the word order, case marking, verb conjugation, and other morphological and syntactic structures of the Punjabi language.
The Punjabi language is written in multiple scripts (a phenomenon known as synchronic digraphia). Each of the major scripts currently in use is typically associated with a particular religious group, although the association is not absolute or exclusive.
In India, Punjabi Sikhs use Gurmukhi, a script of the Brahmic family, which has official status in the state of Punjab. In Pakistan, Punjabi Muslims use Shahmukhi, a variant of the Perso-Arabic script and closely related to the Urdu alphabet. The Punjabi Hindus in India had a preference for Devanagari, another Brahmic script also used for Hindi, and in the first decades since independence raised objections to the uniform adoption of Gurmukhi in the state of Punjab, but most have now switched to Gurmukhi and so the use of Devanagari is rare.
Historically, various local Brahmic scripts including Laṇḍā and its descendants were also in use.
The Punjabi Braille is used by the visually impaired.
This sample text was taken from the Punjabi Wikipedia article on Lahore.
Gurmukhi: ਲਹੌਰ ਪਾਕਿਸਤਾਨੀ ਪੰਜਾਬ ਦੀ ਰਾਜਧਾਨੀ ਹੈ। ਲੋਕ ਗਿਣਤੀ ਦੇ ਨਾਲ ਕਰਾਚੀ ਤੋਂ ਬਾਅਦ ਲਹੌਰ ਦੂਜਾ ਸਭ ਤੋਂ ਵੱਡਾ ਸ਼ਹਿਰ ਹੈ। ਲਹੌਰ ਪਾਕਿਸਤਾਨ ਦਾ ਸਿਆਸੀ, ਰਹਤਲੀ ਅਤੇ ਪੜ੍ਹਾਈ ਦਾ ਗੜ੍ਹ ਹੈ ਅਤੇ ਇਸੇ ਲਈ ਇਹਨੂੰ ਪਾਕਿਸਤਾਨ ਦਾ ਦਿਲ ਵੀ ਕਿਹਾ ਜਾਂਦਾ ਹੈ। ਲਹੌਰ ਰਾਵੀ ਦਰਿਆ ਦੇ ਕੰਢੇ 'ਤੇ ਵਸਦਾ ਹੈ। ਇਸਦੀ ਲੋਕ ਗਿਣਤੀ ਇੱਕ ਕਰੋੜ ਦੇ ਨੇੜੇ ਹੈ।
Transliteration: lahaur pākistānī panjāb dī rājtā̀ni/dārul hakūmat ài. lok giṇtī de nāḷ karācī tõ bāad lahaur dūjā sáb tõ vaḍḍā šáir ài. lahaur pākistān dā siāsī, rátalī ate paṛā̀ī dā gáṛ ài te ise laī ínū̃ pākistān dā dil vī kihā jāndā ài. lahaur rāvī dariā de káṇḍè te vasdā ài. isdī lok giṇtī ikk karoṛ de neṛe ài.
IPA:
Translation: Lahore is the capital city of Pakistani Punjab. After Karachi, Lahore is the second largest city. Lahore is Pakistan's political, cultural, and educational hub, and so it is also said to be the heart of Pakistan. Lahore lies on the bank of the Ravi River. Its population is close to ten million people.
The "Janamsakhis", stories on the life and legend of Guru Nanak (1469–1539), are early examples of Punjabi prose literature.
The Victorian novel, Elizabethan drama, free verse and Modernism entered Punjabi literature through the introduction of British education during the Raj. Nanak Singh (1897–1971), Vir Singh, Ishwar Nanda, Amrita Pritam (1919–2005), Puran Singh (1881–1931), Dhani Ram Chatrik (1876–1957), Diwan Singh (1897–1944) and Ustad Daman (1911–1984), Mohan Singh (1905–78) and Shareef Kunjahi are some legendary Punjabi writers of this period.
After the independence of Pakistan and India, Najm Hossein Syed, Fakhar Zaman, Afzal Ahsan Randhawa, Shafqat Tanvir Mirza, Ahmad Salim, Munir Niazi and Pir Hadi Abdul Mannan enriched Punjabi literature in Pakistan, whereas Amrita Pritam (1919–2005), Jaswant Singh Rahi (1930–1996), Shiv Kumar Batalvi (1936–1973), Surjit Patar (1944–) and Pash (1950–1988) are some of the more prominent poets and writers from India.
Despite Punjabi's rich literary history, it was not until 1947 that it would be recognised as an official language. Previous governments in the area of the Punjab had favoured Persian, Hindustani, or even earlier standardised versions of local registers as the language of the court or government. After the annexation of the Sikh Empire by the British East India Company following the Second Anglo-Sikh War in 1849, the British policy of establishing a uniform language for administration was expanded into the Punjab. The British Empire employed Hindi and Urdu in its administration of North-Central and Northwestern India, while in the North-East of India, Bengali language was used as the language of administration. Despite its lack of official sanction, the Punjabi language continued to flourish as an instrument of cultural production, with rich literary traditions continuing until modern times. The Sikh religion, with its Gurmukhi script, played a special role in standardising and providing education in the language via Gurdwaras, while writers of all religions continued to produce poetry, prose, and literature in the language.
In India, Punjabi is one of the 22 scheduled languages of India. It is the first official language of the Indian State of Punjab. Punjabi also has second language official status in Delhi along with Urdu, and in Haryana.
In Pakistan, no regional ethnic language has been granted official status at the national level, and as such Punjabi is not an official language at the national level, even though it is the most spoken language in Pakistan after Urdu, the national language of Pakistan. It is, however, the official provincial language of Punjab, Pakistan, the second largest and the most populous province of Pakistan as well as in Islamabad Capital Territory. The only two official national languages in Pakistan are Urdu and English, which are considered the lingua francas of Pakistan.
When Pakistan was created in 1947, although Punjabi was the majority language in West Pakistan and Bengali the majority in East Pakistan and in Pakistan as a whole, English and Urdu were chosen as the national languages. The selection of Urdu was due to its association with South Asian Muslim nationalism and because the leaders of the new nation wanted a unifying national language instead of promoting one ethnic group's language over another. Broadcasting in the Punjabi language by the Pakistan Broadcasting Corporation decreased on TV and radio after 1947. Article 251 of the Constitution of Pakistan declares that these two languages would be the only official languages at the national level, while provincial governments would be allowed to make provisions for the use of other languages. However, in the 1950s the constitution was amended to include the Bengali language. Eventually, Punjabi was granted status as a provincial language in Punjab Province, while the Sindhi language was given official status in 1972, after the 1972 language violence in Sindh.
Despite gaining official recognition at the provincial level, Punjabi is not a language of instruction for primary or secondary school students in Punjab Province (unlike Sindhi and Pashto in other provinces). Pupils in secondary schools can choose the language as an elective, while Punjabi instruction or study remains rare in higher education. One notable example is the teaching of Punjabi language and literature by the University of the Punjab in Lahore which began in 1970 with the establishment of its Punjabi Department.
In the cultural sphere, there are many books, plays, and songs being written or produced in the Punjabi language in Pakistan. Until the 1970s, a large number of Punjabi-language films were produced by the Lollywood film industry; however, since then Urdu has become a much more dominant language in film production. Additionally, television channels in Punjab Province (centred on the Lahore area) broadcast in Urdu. The preeminence of Urdu in both broadcasting and the Lollywood film industry is seen by critics as being detrimental to the health of the language.
The use of Urdu and English as the near-exclusive languages of broadcasting, the public sector, and formal education has led some to fear that Punjabi in Pakistan is being relegated to a low-status language and that it is being denied an environment where it can flourish. Several prominent educational leaders, researchers, and social commentators have echoed the opinion that the intentional promotion of Urdu and the continued denial of any official sanction or recognition of the Punjabi language amounts to a process of "Urdu-isation" that is detrimental to the health of the Punjabi language. In August 2015, the Pakistan Academy of Letters, International Writer's Council (IWC) and World Punjabi Congress (WPC) organised the "Khawaja Farid Conference" and demanded that a Punjabi-language university be established in Lahore and that the Punjabi language be declared the medium of instruction at the primary level. In September 2015, a case was filed in the Supreme Court of Pakistan against the Government of Punjab, Pakistan, as it had not taken any steps to implement the Punjabi language in the province. Additionally, several thousand Punjabis gather in Lahore every year on International Mother Language Day. Thinktanks, political organisations, cultural projects, and individuals also demand that authorities at the national and provincial level promote the use of the language in the public and official spheres.
At the federal level, Punjabi has official status via the Eighth Schedule to the Indian Constitution, earned after the Punjabi Suba movement of the 1950s. At the state level, Punjabi is the sole official language of the state of Punjab, while it has secondary official status in the states of Haryana and Delhi. In 2012, it was also made additional official language of West Bengal in areas where the population exceeds 10% of a particular block, sub-division or district.
Both federal and state laws specify the use of Punjabi in the field of education. The state of Punjab uses the Three Language Formula, and Punjabi is required to be either the medium of instruction, or one of the three languages learnt in all schools in Punjab. Punjabi is also a compulsory language in Haryana, and other states with a significant Punjabi speaking minority are required to offer Punjabi medium education.
There are vibrant Punjabi-language movie and news industries in India; however, Punjabi serials have had a much smaller presence in television in the last few decades due to market forces. Despite Punjabi having far greater official recognition in India, "where the Punjabi language is officially admitted in all necessary social functions, while in Pakistan it is used only in a few radio and TV programs," attitudes of the English-educated elite towards the language are ambivalent, as they are in neighbouring Pakistan. There are also claims of state apathy towards the language in non-Punjabi majority areas like Haryana and Delhi.
The Punjabi Sahit Academy, Ludhiana, established in 1954, is supported by the Punjab state government and works exclusively for the promotion of the Punjabi language, as does the Punjabi Academy in Delhi. The Jammu and Kashmir Academy of Art, Culture and Literature in Jammu and Kashmir UT, India, works for Punjabi and other regional languages like Urdu, Dogri and Gojri. Institutions in neighbouring states, as well as in Lahore, Pakistan, also advocate for the language.
Power associativity
In mathematics, specifically in abstract algebra, power associativity is a property of a binary operation that is a weak form of associativity.
An algebra (or more generally a magma) is said to be power-associative if the subalgebra generated by any element is associative. Concretely, this means that if an element "x" is combined with itself several times by the operation, it doesn't matter in which order the operations are carried out, so for instance "x"("x"("xx")) = ("x"("xx"))"x" = ("xx")("xx").
Every associative algebra is power-associative, but so are all other alternative algebras (like the octonions, which are non-associative) and even some non-alternative algebras like the sedenions and Okubo algebras. Any algebra whose elements are idempotent is also power-associative.
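As an illustration, the octonions can be built over the integers via the Cayley–Dickson construction and checked numerically. The sketch below is illustrative only (the nested-pair representation and helper names are not a standard API): it confirms that the octonions fail associativity in general, yet satisfy "x"("xx") = ("xx")"x" exactly.

```python
# Cayley–Dickson construction over nested pairs:
# real -> complex -> quaternion -> octonion.
# Convention used: (a, b)(c, d) = (ac - d*b, da + bc*), where * is conjugation.

def conj(x):
    """Cayley–Dickson conjugate: (a, b)* = (a*, -b); reals are self-conjugate."""
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def neg(x):
    if isinstance(x, tuple):
        return (neg(x[0]), neg(x[1]))
    return -x

def add(x, y):
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    """Cayley–Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def octonion(*coeffs):
    """Pack 8 coefficients into a depth-3 nested pair (an octonion)."""
    def pack(cs):
        if len(cs) == 1:
            return cs[0]
        h = len(cs) // 2
        return (pack(cs[:h]), pack(cs[h:]))
    assert len(coeffs) == 8
    return pack(list(coeffs))

# Octonions are not associative: (e1 e2) e4 != e1 (e2 e4).
e1 = octonion(0, 1, 0, 0, 0, 0, 0, 0)
e2 = octonion(0, 0, 1, 0, 0, 0, 0, 0)
e4 = octonion(0, 0, 0, 0, 1, 0, 0, 0)
assert mul(mul(e1, e2), e4) != mul(e1, mul(e2, e4))

# Yet each element generates an associative subalgebra, so in particular
# x(xx) = (xx)x holds exactly (integer arithmetic, no rounding).
x = octonion(1, 2, 3, 4, 5, 6, 7, 8)
assert mul(x, mul(x, x)) == mul(mul(x, x), x)
```

Because the coefficients are integers, the equality checks are exact rather than approximate.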
Exponentiation to the power of any positive integer can be defined consistently whenever multiplication is power-associative. For example, there is no need to distinguish whether "x"3 should be defined as ("xx")"x" or as "x"("xx"), since these are equal. Exponentiation to the power of zero can also be defined if the operation has an identity element, so the existence of identity elements is useful in power-associative contexts.
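Bracketing-independence of powers can also be tested by brute force up to a chosen exponent. The sketch below (helper names are illustrative) enumerates every bracketing of "n" copies of an element and checks whether they all agree; it is a bounded check, not a proof for all "n".

```python
def bracketings(op, x, n):
    """All values obtainable by combining n copies of x under op,
    over every possible bracketing."""
    results = {1: {x}}
    for k in range(2, n + 1):
        results[k] = {op(a, b)
                      for i in range(1, k)
                      for a in results[i]
                      for b in results[k - i]}
    return results[n]

def is_power_associative(op, elements, max_n=6):
    """True if, for each element, the n-th 'power' is bracketing-independent
    for every n up to max_n."""
    return all(len(bracketings(op, x, n)) == 1
               for x in elements
               for n in range(2, max_n + 1))

# Integer multiplication is associative, hence power-associative:
assert is_power_associative(lambda a, b: a * b, range(-3, 4))

# Subtraction is not: (x - x) - x = -x, but x - (x - x) = x.
assert not is_power_associative(lambda a, b: a - b, range(-3, 4))
```

For an associative operation every bracketing of "x"3 collapses to the same value, which is exactly why "x"3 needs no parenthesisation.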
Over a field of characteristic 0, an algebra is power-associative if and only if it satisfies ["x", "x", "x"] = 0 and ["x"2, "x", "x"] = 0, where ["x", "y", "z"] = ("xy")"z" − "x"("yz") is the associator (Albert 1948).
Over a field of prime characteristic "p" there is no finite set of identities that characterizes power-associativity, but there are infinite independent sets, as described by Gainov (1970).
A substitution law holds for real power-associative algebras with unit, which basically asserts that multiplication of polynomials works as expected. For "f" a real polynomial in "x", and for any "a" in such an algebra, define "f"("a") to be the element of the algebra resulting from the obvious substitution of "a" into "f". Then for any two such polynomials "f" and "g", we have that ("fg")("a") = "f"("a")"g"("a").
Pierre de Coubertin
Charles Pierre de Frédy, Baron de Coubertin (born Pierre de Frédy; 1 January 1863 – 2 September 1937), also known as Pierre de Coubertin and Baron de Coubertin, was a French educator and historian, founder of the International Olympic Committee, and its second president. He is known as the father of the modern Olympic Games.
Born into a French aristocratic family, he became an academic and studied a broad range of topics, most notably education and history. He graduated with a degree in law and public affairs from the Paris Institute of Political Studies (Sciences Po). It was at Sciences Po that he came up with the idea of the Summer Olympic Games.
The Pierre de Coubertin medal (also known as the Coubertin medal or the True Spirit of Sportsmanship medal) is an award given by the International Olympic Committee to athletes who demonstrate the spirit of sportsmanship in the Olympic Games.
Pierre de Frédy was born in Paris on 1 January 1863, into an aristocratic family. He was the fourth child of Baron Charles Louis de Frédy, Baron de Coubertin and Marie–Marcelle Gigault de Crisenoy. Family tradition held that the Frédy name had first arrived in France in the early 15th century, and the first recorded title of nobility granted to the family was given by Louis XI to an ancestor, also named Pierre de Frédy, in 1477. But other branches of his family tree delved even further into French history, and the annals of both sides of his family included nobles of various stations, military leaders and associates of kings and princes of France.
His father Charles was a staunch royalist and accomplished artist whose paintings were displayed and given prizes at the Parisian salon, at least in those years when he was not absent in protest of the rise to power of Louis Napoleon. His paintings often centred on themes related to the Roman Catholic Church, classicism, and nobility, which reflected those things he thought most important. In a later semi-fictional autobiographical piece called "Le Roman d'un rallié", Coubertin describes his relationship with both his mother and his father as having been somewhat strained during his childhood and adolescence. His memoirs elaborated further, describing as a pivotal moment his disappointment upon meeting Henri, Count of Chambord, whom the elder Coubertin believed to be the rightful king.
Coubertin grew up in a time of profound change in France: France's defeat in the Franco-Prussian War, the Paris Commune, and the establishment of the French Third Republic, and later the Dreyfus affair. But while these events were the setting of his childhood, his school experiences were just as formative. In October 1874, his parents enrolled him in a new Jesuit school called "Externat de la rue de Vienne", which was still under construction for his first five years there. While many of the school's attendees were day students, Coubertin boarded at the school under the supervision of a Jesuit priest, which his parents hoped would instill him with a strong moral and religious education. There, he was among the top three students in his class, and was an officer of the school's elite academy made up of its best and brightest. This suggests that despite his rebelliousness at home, Coubertin adapted well to the strict rigors of a Jesuit education.
As an aristocrat, Coubertin had a number of career paths from which to choose, including potentially prominent roles in the military or politics. But he chose instead to pursue a career as an intellectual, studying and later writing on a broad range of topics, including education, history, literature and sociology.
The subject which he seems to have been most deeply interested in was education, and his study focused in particular on physical education and the role of sport in schooling. In 1883, he visited England for the first time, and studied the program of physical education instituted under Thomas Arnold at the Rugby School. Coubertin credited these methods with leading to the expansion of British power during the 19th century and advocated their use in French institutions. The inclusion of physical education in the curriculum of French schools would become an ongoing pursuit and passion of Coubertin's.
Coubertin is thought to have exaggerated the importance of sport to Thomas Arnold, whom he viewed as "one of the founders of athletic chivalry". The character-reforming influence of sport with which Coubertin was so impressed is more likely to have originated in the novel "Tom Brown's School Days" rather than exclusively in the ideas of Arnold himself. Nonetheless, Coubertin was an enthusiast in need of a cause and he found it in England and in Thomas Arnold. "Thomas Arnold, the leader and classic model of English educators," wrote Coubertin, "gave the precise formula for the role of athletics in education. The cause was quickly won. Playing fields sprang up all over England".
Intrigued by what he had read about English public schools, in 1883, at the age of twenty, Frédy went to Rugby and to other English schools to see for himself. He described the results in a book, "L'Education en Angleterre", which was published in Paris in 1888. This hero of his book is Thomas Arnold, and on his second visit in 1886, Coubertin reflected on Arnold's influence in the chapel at Rugby School.
What Coubertin saw on the playing fields of Rugby and the other English schools he visited was how "organised sport can create moral and social strength". Not only did organised games help to set the mind and body in equilibrium, it also prevented the time being wasted in other ways. First developed by the ancient Greeks, it was an approach to education that he felt the rest of the world had forgotten and to whose revival he was to dedicate the rest of his life.
As a historian and a thinker on education, Coubertin romanticised ancient Greece. Thus, when he began to develop his theory of physical education, he naturally looked to the example set by the Athenian idea of the gymnasium, a training facility that simultaneously encouraged physical and intellectual development. He saw in these gymnasia what he called a triple unity between old and young, between disciplines, and between different types of people, meaning between those whose work was theoretical and those whose work was practical. Coubertin advocated for these concepts, this triple unity, to be incorporated into schools.
But while Coubertin was certainly a romantic, and while his idealised vision of ancient Greece would lead him later to the idea of reviving the Olympic Games, his advocacy for physical education was based on practical concerns as well. He believed that men who received physical education would be better prepared to fight in wars, and better able to win conflicts like the Franco-Prussian War, in which France had been humiliated. He also saw sport as democratic, in that sports competition crossed class lines, although it did so without causing a mingling of classes, which he did not support.
Unfortunately for Coubertin, his efforts to incorporate more physical education into French schools failed. The failure of this endeavour, however, was closely followed by the development of a new idea, the revival of the ancient Olympic Games, the creation of a festival of international athleticism.
He was the referee of the first ever French championship rugby union final on 20 March 1892, between Racing Club de France and Stade Français.
Coubertin is the instigator of the modern Olympic movement, a man whose vision and political skill led to the revival of the Olympic Games which had been practised in antiquity. Coubertin idealized the Olympic Games as the ultimate ancient athletic competition.
Thomas Arnold, the Head Master of Rugby School, was an important influence on Coubertin's thoughts about education, but his meetings with William Penny Brookes also influenced his thinking about athletic competition to some extent. A trained physician, Brookes believed that the best way to prevent illness was through physical exercise. In 1850, he had initiated a local athletic competition that he referred to as "Meetings of the Olympian Class" at the Gaskell recreation ground at Much Wenlock, Shropshire. Along with the Liverpool Athletic Club, who began holding their own Olympic Festival in the 1860s, Brookes created a National Olympian Association which aimed to encourage such local competition in cities across Britain. These efforts were largely ignored by the British sporting establishment. Brookes also maintained communication with the government and sporting advocates in Greece, seeking a revival of the Olympic Games internationally under the auspices of the Greek government. There, the philanthropist cousins Evangelos and Konstantinos Zappas had used their wealth to fund Olympics within Greece, and paid for the restoration of the Panathinaiko Stadium that was later used during the 1896 Summer Olympics. The efforts of Brookes to encourage the internationalization of these games came to naught. However, Dr. Brookes did organize a national Olympic Games in London, at Crystal Palace, in 1866 and this was the first Olympics to resemble an Olympic Games to be held outside of Greece. But while others had created Olympic contests within their countries, and broached the idea of international competition, it was Coubertin whose work would lead to the establishment of the International Olympic Committee and the organisation of the first modern Olympic Games.
In 1888, Coubertin founded the Comité pour la Propagation des Exercises Physiques, better known as the Comité Jules Simon. Coubertin's earliest reference to the modern notion of Olympic Games criticizes the idea. The idea for reviving the Olympic Games as an international competition came to Coubertin in 1889, apparently independently of Brookes, and he spent the following five years organizing an international meeting of athletes and sports enthusiasts that might make it happen. Dr Brookes had organised a national Olympic Games that was held at Crystal Palace in London in 1866. In response to a newspaper appeal, Brookes wrote to Coubertin in 1890, and the two began an exchange of letters on education and sport. Although he was too old to attend the 1894 Congress, Brookes would continue to support Coubertin's efforts, most importantly by using his connections with the Greek government to seek its support in the endeavour. While Brookes' contribution to the revival of the Olympic Games was recognised in Britain at the time, Coubertin in his later writings largely neglected to mention the role the Englishman played in their development. He did mention the roles of Evangelis Zappas and his cousin Konstantinos Zappas, but drew a distinction between their founding of athletic Olympics and his own role in the creation of an international contest. However, Coubertin together with A. Mercatis, a close friend of Konstantinos, encouraged the Greek government to utilise part of Konstantinos' legacy to fund the 1896 Athens Olympic Games separately and in addition to the legacy of Evangelis Zappas that Konstantinos had been executor of. Moreover, George Averoff was invited by the Greek government to fund the second refurbishment of the Panathinaiko Stadium that had already been fully funded by Evangelis Zappas forty years earlier.
Coubertin's advocacy for the Games centred on a number of ideals about sport. He believed that the early ancient Olympics encouraged competition among amateur rather than professional athletes, and saw value in that. The ancient practice of a sacred truce in association with the Games might have modern implications, giving the Olympics a role in promoting peace. This role was reinforced in Coubertin's mind by the tendency of athletic competition to promote understanding across cultures, thereby lessening the dangers of war. In addition, he saw the Games as important in advocating his philosophical ideal for athletic competition: that the competition itself, the struggle to overcome one's opponent, was more important than winning. Coubertin expressed this ideal thus:
"L'important dans la vie ce n'est point le triomphe, mais le combat, l'essentiel ce n'est pas d'avoir vaincu mais de s'être bien battu."
"The important thing in life is not the triumph but the struggle, the essential thing is not to have conquered but to have fought well."
As Coubertin prepared for his Congress, he continued to develop a philosophy of the Olympic Games. While he certainly intended the Games to be a forum for competition between amateur athletes, his conception of amateurism was complex. By 1894, the year the Congress was held, he publicly criticised the type of amateur competition embodied in English rowing contests, arguing that its specific exclusion of working-class athletes was wrong. While he believed that athletes should not be paid to be such, he did think that compensation was in order for the time when athletes were competing and would otherwise have been earning money. Following the establishment of a definition for an amateur athlete at the 1894 Congress, he would continue to argue that this definition should be amended as necessary, and as late as 1909 would argue that the Olympic movement should develop its definition of amateurism gradually.
Along with the development of an Olympic philosophy, Coubertin invested time in the creation and development of a national association to coordinate athletics in France, the Union des Sociétés Françaises de Sports Athlétiques (USFSA). In 1889, French athletics associations had grouped together for the first time and Coubertin founded a monthly magazine "La Revue Athletique", the first French periodical devoted exclusively to athletics and modelled on "The Athlete", an English journal established around 1862. Formed by seven sporting societies with approximately 800 members, by 1892 the association had expanded to 62 societies with 7,000 members.
That November, at the annual meeting of the USFSA, Coubertin first publicly suggested the idea of reviving the Olympics. His speech met general applause, but little commitment to the Olympic ideal he was advocating for, perhaps because sporting associations and their members tended to focus on their own area of expertise and had little identity as sportspeople in a general sense. This disappointing result was prelude to a number of challenges he would face in organising his international conference. In order to develop support for the conference, he began to play down its role in reviving Olympic Games and instead promoted it as a conference on amateurism in sport which, he thought, was slowly being eroded by betting and sponsorships. This led to later suggestions that participants were convinced to attend under false pretenses. Little interest was expressed by those he spoke to during trips to the United States in 1893 and London in 1894, and an attempt to involve the Germans angered French gymnasts who did not want the Germans invited at all. Despite these challenges, the USFSA continued its planning for the games, adopting in its first program for the meeting eight articles to address, only one of which had to do with the Olympics. A later program would give the Olympics a much more prominent role in the meeting.
The congress was held on 23 June 1894 at the Sorbonne in Paris. Once there, participants divided the congress into two commissions, one on amateurism and the other on reviving the Olympics. A Greek participant, Demetrius Vikelas, was appointed to head the commission on the Olympics, and would later become the first President of the International Olympic Committee. Along with Coubertin, C. Herbert of Britain's Amateur Athletic Association and W.M. Sloane of the United States helped lead the efforts of the commission. In its report, the commission proposed that Olympic Games be held every four years and that the program for the Games be one of modern rather than ancient sports. They also set the date and location for the first modern Olympic Games, the 1896 Summer Olympics in Athens, Greece, and the second, the 1900 Summer Olympics in Paris. Coubertin had originally opposed the choice of Greece, as he had concerns about the ability of a weakened Greek state to host the competition, but was convinced by Vikelas to support the idea. The commission's proposals were accepted unanimously by the congress, and the modern Olympic movement was officially born. The proposals of the other commission, on amateurism, were more contentious, but this commission also set important precedents for the Olympic Games, specifically the use of heats to narrow participants and the banning of prize money in most contests.
Following the Congress, the institutions created there began to be formalized into the International Olympic Committee (IOC), with Demetrius Vikelas as its first President. The work of the IOC increasingly focused on planning the 1896 Athens Games, and de Coubertin played a background role as Greek authorities took the lead in the logistical organisation of the Games in Greece itself, offering technical advice such as a sketch of a design for a velodrome to be used in cycling competitions. He also took the lead in planning the program of events, although to his disappointment neither polo, football, nor boxing was included in 1896. The Greek organizing committee had been informed that four foreign football teams were to participate; however, not one foreign football team showed up, and despite Greek preparations for a football tournament, it was cancelled during the Games.
The Greek authorities were frustrated that he could not provide an exact estimate of the number of attendees more than a year in advance. In France, Coubertin's efforts to elicit interest in the Games among athletes and the press met difficulty, largely because the participation of German athletes angered French nationalists who begrudged Germany their victory in the Franco-Prussian War. Germany also threatened not to participate after rumours spread that Coubertin had sworn to keep Germany out, but following a letter to the Kaiser denying the accusation, the German National Olympic Committee decided to attend. Coubertin himself was frustrated by the Greeks, who increasingly ignored him in their planning and who wanted to continue to hold the Games in Athens every four years, against de Coubertin's wishes. The conflict was resolved after he suggested to the King of Greece that he hold pan-Hellenic games in between Olympiads, an idea which the King accepted, although Coubertin would receive some angry correspondence even after the compromise was reached and the King did not mention him at all during the banquet held in honour of foreign athletes during the 1896 Games.
Coubertin took over the IOC presidency when Demetrius Vikelas stepped down after the Olympics in his own country. Despite the initial success, the Olympic Movement faced hard times, as the 1900 Games (in de Coubertin's own Paris) and the 1904 Games were both swallowed by World's Fairs in the same cities, and received little attention. The Paris Games were not organised by Coubertin or the IOC, nor were they called Olympics at the time. The St. Louis Games were hardly international in character.
The 1906 Summer Olympics revived the momentum, and the Olympic Games have come to be regarded as the world's foremost sports competition. Coubertin created the modern pentathlon for the 1912 Olympics, and subsequently stepped down from his IOC presidency after the 1924 Olympics in Paris, which proved much more successful than the first attempt in that city in 1900. He was succeeded as president, in 1925, by Belgian Henri de Baillet-Latour.
Years later Coubertin came out of retirement to lend his prestige to assisting Berlin to land the 1936 games. In exchange, Germany nominated him for the Nobel Peace Prize. The 1935 winner, however, was the anti-Nazi Carl von Ossietzky.
Coubertin won the gold medal for literature at the 1912 Summer Olympics for his poem "Ode to Sport", which he entered under the joint pseudonym "Georges Hohrod and M. Eschbach", the names of villages close to his wife's place of birth.
Following Amoros's ideas, from 1900 de Coubertin developed a new type of utilitarian sport: "les débrouillards" (the "resourceful men").
The first débrouillards season was organized in 1905/1906, and the programme was wide: running, jumping, throwing, climbing, swimming, sword fight, boxing, shooting, walking, horse riding, rowing, cycling. (source: FFEPGV archives)
In 1911, Pierre de Coubertin founded the inter-religious Scouting organisation known as the Éclaireurs Français (EF) in France, which later merged to form the Éclaireuses et Éclaireurs de France.
In 1895 Pierre de Coubertin had married Marie Rothan, the daughter of family friends. Their son Jacques (1896–1952) became ill after being in the sun too long when he was a little child. Their daughter Renée (1902–1968) suffered emotional disturbances and never married. Marie and Pierre tried to console themselves with two nephews, but they were killed at the front in World War I. Coubertin died of a heart attack in Geneva, Switzerland on 2 September 1937. Marie died in 1963.
Pierre was the last person to possess his family name. In the words of his biographer John MacAloon, "The last of his lineage, Pierre de Coubertin was the only member of it whose fame would outlive him."
A number of scholars have criticized Coubertin's legacy. David C. Young believes that Coubertin's assertion that ancient Olympic athletes were amateurs was incorrect. The issue is the subject of scholarly debate. Young and others argue that the athletes of the ancient Games were professional, while opponents led by Pleket argue that the earliest Olympic athletes were in fact amateur, and that the Games only became professionalized after about 480 BC. Coubertin agreed with this latter view, and saw this professionalization as undercutting the morality of the competition.
Further, Young asserts that the effort to limit international competition to amateur athletes, which Coubertin was a part of, was in fact part of efforts to give the upper classes greater control over athletic competition, removing such control from the working classes. Coubertin may have played a role in such a movement, but his defenders argue that he did so unconscious of any class repercussions.
However, it is clear that his romanticized vision of the Olympic Games was fundamentally different from that described in the historical record. For example, Coubertin's idea that participation is more important than winning ("L'important c'est de participer") is at odds with the ideals of the Greeks.
Coubertin's assertion that the Games were the impetus for peace was also an exaggeration; the peace which he spoke of only existed to allow athletes to travel safely to Olympia, and neither prevented the outbreak of wars nor ended ongoing ones.
Scholars have critiqued the idea that athletic competition might lead to greater understanding between cultures and, therefore, to peace. Christopher Hill claims that modern participants in the Olympic movement may defend this particular belief, "in a spirit similar to that in which the Church of England remains attached to the Thirty-Nine Articles of Religion, which a Priest in that Church must sign." In other words, that they may not wholly believe it but hold to it for historical reasons.
Questions have also been raised about the veracity of Coubertin's account of his role in the planning of the 1896 Athens Games. Reportedly, Coubertin played little role in planning, despite entreaties by Vikelas. Young suggests that the story about Coubertin's having sketched the velodrome was untrue, and that he had in fact given an interview in which he suggested he did not want Germans to participate. Coubertin later denied this.
The Olympic motto "Citius, Altius, Fortius" (Faster, Higher, Stronger) was proposed by Coubertin in 1894 and has been official since 1924. The motto was coined by Henri Didon OP, a friend of Coubertin, for a Paris youth gathering of 1891.
The Pierre de Coubertin medal (also known as the Coubertin medal or the True Spirit of Sportsmanship medal) is an award given by the International Olympic Committee to those athletes that demonstrate the spirit of sportsmanship in the Olympic Games. This medal is considered by many athletes and spectators to be the highest award that an Olympic athlete can receive, even greater than a gold medal. The International Olympic Committee considers it as its highest honour.
A minor planet, 2190 Coubertin, discovered in 1976 by Soviet astronomer Nikolai Stepanovich Chernykh, is named in Coubertin's honour.
The street where the Olympic Stadium in Montreal is located (which hosted the 1976 Summer Olympic Games) was named after Pierre de Coubertin, giving the stadium the address 4549 Pierre de Coubertin Avenue. It is the only Olympic Stadium in the world that lies on a street named after Coubertin. There are also two schools in Montreal named after Pierre de Coubertin.
He was portrayed by Louis Jourdan in the 1984 NBC miniseries, "".
In 2007, he was inducted into the World Rugby Hall of Fame for his services to the sport of rugby union.
This is a listing of Pierre de Coubertin's books. In addition to these, he wrote numerous articles for journals and magazines:
Polish notation
Polish notation (PN), also known as normal Polish notation (NPN), Łukasiewicz notation, Warsaw notation, Polish prefix notation or simply prefix notation, is a mathematical notation in which operators "precede" their operands, in contrast to the more common infix notation, in which operators are placed "between" operands, as well as reverse Polish notation (RPN), in which operators "follow" their operands. It does not need any parentheses as long as each operator has a fixed number of operands. The description "Polish" refers to the nationality of logician Jan Łukasiewicz, who invented Polish notation in 1924.
The term "Polish notation" is sometimes taken (as the opposite of "infix notation") to also include reverse Polish notation.
When Polish notation is used as a syntax for mathematical expressions by programming language interpreters, it is readily parsed into abstract syntax trees and can, in fact, define a one-to-one representation of them. Because of this, Lisp (see below) and related programming languages define their entire syntax in prefix notation (and others use postfix notation).
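A minimal sketch of how readily a prefix token stream parses into such a tree; the function name, the arity table, and the nested-tuple representation are illustrative assumptions, not taken from any particular interpreter:

```python
def parse_prefix(tokens, arity):
    """Recursively parse a prefix token stream into a nested-tuple AST.

    `arity` maps operator tokens to their number of operands; any other
    token is treated as a leaf (an operand).
    """
    token = next(tokens)
    if token in arity:
        # An operator node: parse exactly `arity[token]` sub-expressions.
        return (token, *(parse_prefix(tokens, arity) for _ in range(arity[token])))
    return token

ast = parse_prefix(iter("* - 5 6 7".split()), {"*": 2, "-": 2})
print(ast)  # ('*', ('-', '5', '6'), '7')
```

Because each operator's arity is fixed, no lookahead or parentheses are needed: the parser always knows how many sub-expressions to consume.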
A quotation from a paper by Jan Łukasiewicz, "Remarks on Nicod's Axiom and on "Generalizing Deduction"", page 180, states how the notation was invented:
I came upon the idea of a parenthesis-free notation in 1924. I used that notation for the first time in my article Łukasiewicz(1), p. 610, footnote.
The reference cited by Łukasiewicz is apparently a lithographed report in Polish. The referring paper by Łukasiewicz "Remarks on Nicod's Axiom and on "Generalizing Deduction"" was reviewed by Henry A. Pogorzelski in the "Journal of Symbolic Logic" in 1965. Heinrich Behmann, editor in 1924 of the article of Moses Schönfinkel, already had the idea of eliminating parentheses in logic formulas.
Alonzo Church mentions this notation in his classic book on mathematical logic as worthy of remark in notational systems even contrasted to Alfred Whitehead and Bertrand Russell's logical notational exposition and work in Principia Mathematica.
In Łukasiewicz's 1951 book, "Aristotle's Syllogistic from the Standpoint of Modern Formal Logic", he mentions that the principle of his notation was to write the functors before the arguments to avoid brackets and that he had employed his notation in his logical papers since 1929. He then goes on to cite, as an example, a 1930 paper he wrote with Alfred Tarski on the sentential calculus.
While no longer used much in logic, Polish notation has since found a place in computer science.
The expression for adding the numbers 1 and 2 is written in Polish notation as + 1 2 (prefix), rather than as 1 + 2 (infix). In more complex expressions, the operators still precede their operands, but the operands may themselves be expressions including again operators and their operands. For instance, the expression that would be written in conventional infix notation as

(5 − 6) × 7

can be written in Polish notation as

× (− 5 6) 7

Assuming a given arity of all involved operators (here the "−" denotes the binary operation of subtraction, not the unary function of sign-change), any well formed prefix representation thereof is unambiguous, and brackets within the prefix expression are unnecessary. As such, the above expression can be further simplified to

× − 5 6 7
The processing of the product is deferred until its two operands are available (i.e., 5 minus 6, and 7). As with "any" notation, the innermost expressions are evaluated first, but in Polish notation this "innermost-ness" can be conveyed by the sequence of operators and operands rather than by bracketing.
In the conventional infix notation, parentheses are required to override the standard precedence rules, since, referring to the above example, moving them

5 − (6 × 7)

or removing them

5 − 6 × 7

changes the meaning and the result of the expression. This version is written in Polish notation as

− 5 × 6 7
When dealing with non-commutative operations, like division or subtraction, it is necessary to coordinate the sequential arrangement of the operands with the definition of how the operator takes its arguments, i.e., from left to right. For example, ÷ 10 5, with 10 left of 5, has the meaning of 10 ÷ 5 (read as "divide 10 by 5"), and − 7 6, with 7 left of 6, has the meaning of 7 − 6 (read as "subtract from 7 the operand 6").
Prefix/postfix notation is especially popular for its innate ability to express the intended order of operations without the need for parentheses and other precedence rules, as are usually employed with infix notation. Instead, the notation uniquely indicates which operator to evaluate first. The operators are assumed to have a fixed arity each, and all necessary operands are assumed to be explicitly given. A valid prefix expression always starts with an operator and ends with an operand. Evaluation can proceed either from left to right or in the opposite direction. Starting at the left, the input string, consisting of tokens denoting operators or operands, is pushed token by token onto a stack, until the top entries of the stack contain the number of operands that fits the topmost operator (immediately beneath). This group of tokens at the stack top (the last stacked operator and the corresponding number of operands) is replaced by the result of executing the operator on these operands. Then the processing of the input continues in this manner. The rightmost operand in a valid prefix expression thus empties the stack, except for the result of evaluating the whole expression. When starting at the right, the pushing of tokens is performed similarly, but the evaluation is triggered by an operator, which finds the appropriate number of operands for its arity already at the stack top. Now the leftmost token of a valid prefix expression must be an operator, fitting the number of operands in the stack, which again yields the result. As can be seen from the description, a push-down store with no capability of arbitrary stack inspection suffices to implement this parsing.
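The right-to-left variant described above can be sketched as follows; the operator set, the use of floating-point numbers, and the function name are assumptions for illustration:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_prefix(expression):
    """Evaluate a prefix expression by scanning its tokens right to left.

    Operands are pushed onto a stack; when an operator is met, the operands
    it needs are already on the stack top, in left-to-right order.
    """
    stack = []
    for token in reversed(expression.split()):
        if token in OPS:
            left, right = stack.pop(), stack.pop()
            stack.append(OPS[token](left, right))
        else:
            stack.append(float(token))
    assert len(stack) == 1, "malformed prefix expression"
    return stack[0]

print(eval_prefix("* - 5 6 7"))  # -7.0, i.e. (5 - 6) * 7
```

Note how the pops naturally deliver the left operand first, which is what non-commutative operators like subtraction and division require.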
The stack manipulation sketched above also works, with mirrored input, for expressions in reverse Polish notation.
The table below shows the core of Jan Łukasiewicz's notation for sentential logic. Some letters in the Polish notation table stand for particular words in Polish, as shown:
Note that the quantifiers ranged over propositional values in Łukasiewicz's work on many-valued logics.
Bocheński introduced a system of Polish notation that names all 16 binary connectives of classical propositional logic. For classical propositional logic, it is a compatible extension of the notation of Łukasiewicz. But the notations are incompatible in the sense that Bocheński uses L and M (for nonimplication and converse nonimplication) in propositional logic and Łukasiewicz uses L and M in modal logic.
Prefix notation has seen wide application in Lisp s-expressions, where the brackets are required since the operators in the language are themselves data (first-class functions). Lisp functions may also be variadic. The Tcl programming language, much like Lisp, also uses Polish notation, through the mathop library. The Ambi programming language uses Polish notation for arithmetic operations and program construction. LDAP filter syntax uses Polish prefix notation.
Postfix notation is used in many stack-oriented programming languages like PostScript and Forth. CoffeeScript syntax also allows functions to be called using prefix notation, while still supporting the unary postfix syntax common in other languages.
The number of return values of an expression equals the difference between the number of operands in an expression and the total arity of the operators minus the total number of return values of the operators.
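As a sketch of this count (a hypothetical helper, assuming every operator returns one value): for × − 5 6 7 there are three operands, a total operator arity of four, and two operator return values, so the expression leaves 3 − (4 − 2) = 1 value:

```python
def return_count(tokens, arity, returns_per_op=1):
    """Stack effect of an expression: operands - (total arity - total operator returns).

    `arity` maps operator tokens to their number of arguments; every operator
    is assumed to return `returns_per_op` values (1 for ordinary arithmetic).
    """
    operators = [t for t in tokens if t in arity]
    operands = len(tokens) - len(operators)
    total_arity = sum(arity[t] for t in operators)
    total_returns = returns_per_op * len(operators)
    return operands - (total_arity - total_returns)

# A well-formed single expression leaves exactly one value.
print(return_count("* - 5 6 7".split(), {"*": 2, "-": 2}))  # 1
```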
Polish notation, usually in postfix form, is the chosen notation of certain calculators, notably from Hewlett-Packard. At a lower level, postfix operators are used by some stack machines such as the Burroughs large systems.
Primary school
A primary school, junior school (in UK), elementary school or grade school (in US & Canada) is a school for children from about four to eleven years old, in which they receive primary or elementary education. It can refer to both the physical structure (buildings) and the organisation. Typically it comes after preschool, and before secondary school.
The International Standard Classification of Education considers primary education as a single phase where programmes are typically designed to provide fundamental skills in reading, writing and mathematics and to establish a solid foundation for learning. This is ISCED Level 1: Primary education or first stage of basic education.
During Greek and Roman times, boys were educated by their mothers until the age of seven, then, according to the culture of their location and times, would start a formal education. In Sparta until twelve, it would be at a military academy building up physical fitness and combat skills, but also reading, writing and arithmetic, while in Athens the emphasis would be on understanding the laws of the polis, reading, writing, arithmetic and music, with gymnastics and athletics, and learning the moral stories of Homer. Girls received all their education at home. In Rome the primary school was called the "ludus"; the curriculum developed over the centuries, featuring the learning of both Latin and Greek. In AD 94, Quintilian published the systematic educational work "Institutio oratoria". He distinguished between teaching and learning, and argued that a child aged between 7 and 14 learns by sense experience, learns to form ideas, and develops language and memory. He recommended that teachers should motivate their pupils by making the teaching interesting, rather than by corporal punishment. The trivium (grammar, rhetoric and logic) and quadrivium (arithmetic, geometry, astronomy and music) were legacies of the Roman curriculum.
As the Roman influence waned, the great cathedral schools were established to provide a source of choristers and clergy. King's School, Canterbury dates from 597. The Council of Rome in 853 specified that each parish should provide elementary education: religious ritual, but also reading and writing Latin.
The purpose of education was to pass on salvation, not social change. The church had a monopoly on education, and the feudal lords concurred and allowed their sons to be educated at the few church schools. The economy was agrarian, and the children of serfs started work as soon as they were able. It was held as truth that man was created by God in the image of Adam with his share of original sin, and that a boy was born sinful. Only the teaching of the church and the sacraments could redeem him. The parishes provided elementary education, but had no requirement to provide it to every child. The need was to produce priests and, in a stable kingdom such as that of Charlemagne, administrators with elementary writing skills in Latin and the arithmetic needed to collect taxes and administer them. Alcuin (735–804) developed teaching materials based on the catechetical method: repeating and memorizing questions and answers, though often without understanding. These skills were also needed in the great abbeys such as Cluny. There was a divergence between the needs of towns and monasteries, seen in the development of parish, chantry, monastic and cathedral schools. With the entry of women into church life, convents were established and, with them, convent schools. Girls entered at the age of eight and were taught Latin grammar, religious doctrine and music, and the women's arts of spinning, weaving, tapestry, painting and embroidery. Bede entered the monastic school at Jarrow at the age of seven and became a writer and historian. Chantry schools were the result of charitable donations and educated the poor. Parishes had to have a school from 804, and cathedrals had to establish schools after the Lateran Council of 1179. Elementary education was mainly to teach the Latin needed for the trivium and the quadrivium that formed the basis of the secondary curriculum.
While Humanism greatly changed the secondary curriculum, the primary curriculum was unaffected. It was believed that by studying the works of the greats, ancients who had governed empires, one became fit to succeed in any field. Renaissance boys from the age of five learned Latin grammar using the same books as the Roman child: the grammars of Donatus and Priscian, followed by "Caesar's Commentaries" and then St Jerome's Latin Vulgate.
Wealthy boys were educated by tutors. Others were educated in schools attached to the parishes, cathedrals or abbeys. From the 13th century, wealthy merchants endowed money for priests to "establish a school to teach grammar". These early grammar schools were to teach basic, or elementary, grammar to boys. No age limit was specified. Early examples in England included Lancaster Royal Grammar School, Royal Latin School, Buckingham, and Stockport Grammar School. The Reformation and the Dissolution of the Monasteries (1548) disrupted the funding of many schools. The schools petitioned the King, Edward VI, for an endowment. Examples of schools receiving endowments are King Edward VI Grammar School, Louth, King Edward VI Grammar School, Norwich and King Edward VI School, Stratford-upon-Avon, where William Shakespeare is thought to have been a pupil from the age of 7 to 14.
Though the grammar schools were set up to deliver elementary education, they did require their entrants to already have certain skills. In particular, they expected them to be able to read and write in the vernacular. There was a need for something more basic.
This was addressed by dame schools, then charity schools, often set up by the churches (C of E schools), Andrew Bell's National Schools and Joseph Lancaster's British Schools.
Certain movements in education had a relevance in all of Europe and its diverging colonies. The Americans were interested in the thoughts of Pestalozzi, Joseph Lancaster, Owen and the Prussian schools.
Within the English-speaking world, there are three widely used systems to describe the age of the child. The first is "equivalent ages"; then, countries that base their education systems on the "English model" use one of two methods to identify the year group, while countries that base their systems on the "American K–12 model" refer to their year groups as "grades". Canada also follows the American model, although it places the number after the word "grade": for instance, "Grade 1" in Canada rather than "First Grade" in the United States. This terminology extends into research literature.
In Canada, education is a Provincial, not a Federal responsibility. For example, the province of Ontario also had a "Grade 13," designed to help students enter the workforce or post-secondary education, but this was phased out in the year 2003.
In most parts of the world, primary education is the first stage of compulsory education, and is normally available without charge, but may also be offered by fee-paying independent schools. The term grade school is sometimes used in the US, although this term may refer to both primary education and secondary education.
The term "primary school" is derived from the French "école primaire", which was first used in an English text in 1802. In the United Kingdom, "elementary education" was taught in "elementary schools" until 1944, when free elementary education was proposed for students over 11: there were to be primary elementary schools and secondary elementary schools; these became known as primary schools and secondary schools.
In some parts of the United States, "primary school" refers to a school covering kindergarten through to second grade or third grade (K through 2 or 3); the "elementary school" includes grade three through five or grades four to six. In Canada, "elementary school" almost everywhere refers to Grades 1 through 6; with Kindergarten being referred to as "preschool."
Though often used as a synonym, "elementary school" has specific meanings in different locations.
School building design does not happen in isolation. The building (or school campus) needs to accommodate:
Each country will have a different education system and priorities. Schools need to accommodate students, staff, storage, mechanical and electrical systems, support staff, ancillary staff and administration. The number of rooms required can be determined from the predicted roll of the school and the area needed.
According to standards used in the United Kingdom, a general classroom for 30 reception class or infant (Keystage 1) students needs to be 62 m2, or 55 m2 for juniors (Keystage 2). Examples are given on how this can be configured for a 210 place primary with attached 26 place nursery and two-storey 420 place (two form entry) primary school with attached 26 place nursery.
The building providing the education has to fulfil the needs of the students, the teachers, the non-teaching support staff, the administrators and the community. It has to meet general government building guidelines, health requirements, minimal functional requirements for classrooms, toilets and showers, electricity and services, and preparation and storage of textbooks and basic teaching aids. An optimum school will meet the minimum conditions and will have:
Government accountants, having read the advice, then publish minimum guidelines on schools. These enable environmental modelling and establishing building costs. Future design plans are audited to ensure that these standards are met but not exceeded. Government ministries continue to press for the 'minimum' space and cost standards to be reduced.
The UK government published this downwardly revised space formula for primary schools in 2014. It said the floor area should be 350 m2 + 4.1 m2/pupil place. The external finishes were to be downgraded to meet a build cost of £1113/m2.
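As a worked example of this formula (the function name is illustrative; the 420-place size corresponds to the two-form-entry school mentioned earlier), the floor area for a 420-place primary would be 350 + 4.1 × 420 = 2072 m²:

```python
def primary_floor_area(pupil_places, base=350.0, per_pupil=4.1):
    """Gross floor area under the 2014 UK formula: 350 m2 + 4.1 m2 per pupil place."""
    return base + per_pupil * pupil_places

area = primary_floor_area(420)
print(f"{area:.0f} m2 for a 420-place school")  # 2072 m2 for a 420-place school
```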
There are several ways of funding a school: it can be funded by the state through general taxation, by a pressure group such as a mosque or church, by a charity, or by contributions from the parents, or by a combination of these methods. Day-to-day oversight of the school can be through a board of governors, the pressure group or the owner.
The United Kingdom allowed most elementary education to be delivered in church schools whereas in France this was illegal as there is strict separation of church and state.
This can be through informal assessment by the staff and governors, as in Finland, or by a state-run inspection regime such as Ofsted in the United Kingdom. | https://en.wikipedia.org/wiki?curid=25058 |
Piedmont
Piedmont (Italian: "Piemonte") is a region in northwest Italy, one of the 20 regions of the country. It borders the Liguria region to the south, the Lombardy and Emilia-Romagna regions to the east and the Aosta Valley region to the northwest; it also borders Switzerland to the northeast and France to the west. It has a population of 4,377,941 as of 30 November 2017. The capital of Piedmont is Turin.
The name Piedmont comes from medieval Latin Pedemontium or Pedemontis, i.e., "ad pedem montium", meaning "at the foot of the mountains" (the Alps), attested in documents from the end of the 12th century.
Other towns of Piedmont with more than 20,000 inhabitants, sorted by population:
Piedmont is surrounded on three sides by the Alps, including Monviso, where the Po rises, and Monte Rosa. It borders with France (Auvergne-Rhône-Alpes and Provence-Alpes-Côte d'Azur), Switzerland (Ticino and Valais) and the Italian regions of Lombardy, Liguria, Aosta Valley and for a very small part with Emilia Romagna.
The geography of Piedmont is 43.3% mountainous, along with extensive areas of hills (30.3%) and plains (26.4%).
Piedmont is the second largest of Italy's 20 regions, after Sicily. It is broadly coincident with the upper part of the drainage basin of the river Po, which rises from the slopes of Monviso in the west of the region and is Italy's largest river. The Po drains the semicircle formed by the Alps and Apennines, which surround the region on three sides.
From the highest peaks, the land slopes down to hilly areas (sometimes with an abrupt transition from mountain to plain) and then to the Padan Plain. The boundary between the two is characterised by resurgent springs—typical of the Padan Plain—which supply fresh water to the rivers and a dense network of irrigation canals.
The countryside is very diverse: from the rugged peaks of the massifs of Monte Rosa and Gran Paradiso to the damp rice paddies of Vercelli and Novara, from the gentle hillsides of the Langhe, Roero and Montferrat to the plains. 7.6% of the entire territory is considered protected area. There are 56 different national or regional parks; one of the most famous is the Gran Paradiso National Park, between Piedmont and the Aosta Valley.
Piedmont was inhabited in early historic times by Celtic-Ligurian tribes such as the Taurini and the Salassi. They were later subdued by the Romans (c. 220 BC), who founded several colonies there including "Augusta Taurinorum" (Turin) and "Eporedia" (Ivrea). After the fall of the Western Roman Empire, the region was successively invaded by the Burgundians, the Ostrogoths (5th century), East Romans, Lombards (6th century), and Franks (773).
In the 9th–10th centuries there were further incursions by the Magyars, Saracens and Muslim Moors. At the time Piedmont, as part of the Kingdom of Italy within the Holy Roman Empire, was subdivided into several marches and counties.
In 1046, Oddo of Savoy added Piedmont to the County of Savoy, with a capital at Chambéry (now in France). Other areas remained independent, such as the powerful "comuni" (municipalities) of Asti and Alessandria and the marquisates of Saluzzo and Montferrat. The County of Savoy became the Duchy of Savoy in 1416, and Duke Emanuele Filiberto moved the seat to Turin in 1563. In 1720, the Duke of Savoy became King of Sardinia, founding what evolved into the Kingdom of Sardinia and increasing Turin's importance as a European capital.
The Republic of Alba was created in 1796 as a French client republic in Piedmont. A new client republic, the Piedmontese Republic, existed between 1798 and 1799 before it was reoccupied by Austrian and Russian troops. In June 1800 a third client republic, the Subalpine Republic, was established in Piedmont. It fell under full French control in 1801 and it was annexed by France in September 1802. In the Congress of Vienna, the Kingdom of Sardinia was restored, and furthermore received the Republic of Genoa to strengthen it as a barrier against France.
Piedmont was a springboard for Italian unification in 1859–1861, following earlier unsuccessful wars against the Austrian Empire in 1820–1821 and 1848–1849. This process is sometimes referred to as "Piedmontisation". However, these efforts were later resisted by rural farmers.
The House of Savoy became Kings of Italy, and Turin briefly became the capital of Italy. However, when the Italian capital was moved to Florence, and then to Rome, the administrative and institutional importance of Piedmont was reduced. The only recognition of Piedmont's historical role was that the crown prince of Italy was known as the Prince of Piedmont. After Italian unification, Piedmont was one of the most important regions in the first Italian industrialization.
The region contains major industrial centres, the most important of which is Turin, home to the FIAT automobile works. Olivetti, once a major electronics manufacturer with plants in Scarmagno and Ivrea, has now turned into a small-scale computer service company. Biella produces wool, textiles and silks. Alba is the home of Ferrero's chocolate factories and some mechanical industries.
Since 2006, the Piedmont region has benefited from the start of the Slow Food movement and Terra Madre, events that highlighted the rich agricultural and viticultural value of the Po valley and northern Italy. In the same year, the Piemonte Agency for Investments, Export and Tourism began to facilitate outside investment and promote Piedmont's industry and tourism. It was the first Italian institution to combine the activities being carried out by pre-existing local organizations to promote the territory internationally.
The gross domestic product (GDP) of the region was 137.4 billion euros in 2018, accounting for 7.8% of Italy's GDP. GDP per capita at purchasing power parity was 31,300 euros or 104% of the EU27 average in the same year. The GDP per employee was 111% of the EU average.
The unemployment rate stood at 8.2% in 2018.
Lowland Piedmont is a fertile agricultural region. The main agricultural products in Piedmont are cereals, including rice, representing more than 10% of national production, maize, grapes for wine-making, fruit and milk. With more than 800,000 head of cattle in 2000, livestock production accounts for half of total agricultural production in Piedmont.
Piedmont is one of the great winegrowing regions in Italy. More than half of its vineyards are registered with DOC designations. It produces prestigious wines such as Barolo and Barbaresco from the Langhe near Alba, and the Moscato d'Asti and sparkling Asti from the vineyards around Asti. The city of Asti is about 55 kilometres (34 miles) east of Turin in the plain of the Tanaro River and is one of the most important centres of Montferrat, one of the best known Italian wine districts in the world, declared officially on 22 June 2014 a UNESCO World Heritage site. Indigenous grape varieties include Nebbiolo, Barbera, Dolcetto, Freisa, Grignolino and Brachetto.
Tourism in Piedmont employs 75,534 people and involves 17,367 companies operating in the hospitality and catering sector, with 1,473 hotels and other tourist accommodation. The sector generates a turnover of €2,671 million, 3.3% of the €80,196 million total estimated spending on tourism in Italy. The region is popular with both foreign visitors and those from other parts of Italy. In 2002 there were 2,651,068 total arrivals, 1,124,696 (42%) of whom were foreign. The traditional leading areas for tourism in Piedmont are the Lake District ("Piedmont's riviera"), which accounts for 32.84% of total overnight stays, and the metropolitan area of Turin, which accounts for 26.51%.
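The quoted shares can be checked against the absolute figures given above; this is simply arithmetic on the numbers in the text:

```python
# Foreign share of total arrivals in 2002 (figures from the text)
foreign, total_arrivals = 1_124_696, 2_651_068
print(round(100 * foreign / total_arrivals))        # 42 (% of arrivals)

# Piedmont's share of estimated national tourism spending, in millions of euros
turnover, national_spend = 2_671, 80_196
print(round(100 * turnover / national_spend, 1))    # 3.3 (% of national spend)
```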
In 2006, Turin hosted the XX Olympic Winter Games and in 2007 it hosted the XXIII Universiade. Alpine tourism tends to concentrate in a few highly developed stations like Alagna Valsesia and Sestriere. Around 1980, the long-distance trail Grande Traversata delle Alpi (GTA) was created to draw more attention to the variety of remote, sparsely inhabited valleys.
There are links with neighbouring France via the Fréjus and Colle di Tenda tunnels as well as the Montgenèvre Pass. Piedmont also connects with Switzerland by the Simplon and Great St Bernard passes. It is possible to reach Switzerland via a normal road that crosses eastern Piedmont, starting from Arona and ending in Locarno, on the Swiss border. Turin International Airport has domestic and international flights. The region has the longest motorway network amongst the Italian regions (about 800 km). It radiates from Turin, connecting it with the other provinces in the region, as well as with the other regions in Italy. In 2001, the number of passenger cars per 1,000 inhabitants was 623 (above the national average of 575).
The economy of Piedmont is anchored on a rich history of state support for excellence in higher education, including some of the leading universities in Italy. The region is home to the famous University of Turin, the Polytechnic University of Turin, the University of Eastern Piedmont and, more recently, the United Nations Interregional Crime and Justice Research Institute.
The population density in Piedmont is lower than the national average. In 2008 it was equal to 174 inhabitants per km2, compared to a national figure of about 200. It rises however to 335 inhabitants per km2 when just the Metropolitan City of Turin is considered, whereas Verbano-Cusio-Ossola is the least densely populated province (72 inhabitants per km2).
The population of Piedmont followed a downward trend throughout the 1980s. This drop is the result of the natural negative balance (of some 3 to 4% per year), while the migratory balance since 1986 has again become positive because of an excess of new immigration over a stable figure for emigration.
The population as a whole has remained stable in the 1990s, although this is the result of a negative natural balance and a positive net migration.
The Turin metro area grew rapidly in the 1950s and 1960s due to an influx of immigrants from southern Italy and Veneto, and today it has a population of approximately two million. The Italian national institute of statistics (ISTAT) estimated that 310,543 foreign-born immigrants live in Piedmont, equal to 7.0% of the total regional population. Most immigrants come from Eastern Europe (mostly from Romania, Albania, and Ukraine) with smaller communities of African immigrants.
The Regional Government ("Giunta Regionale") is presided over by the President of the Region ("Presidente della Regione"), who is elected for a five-year term, and is composed of the President and the Ministers, currently 14 in number, including a Vice President ("Vice Presidente").
In the last regional election, which took place on 29–30 March 2010, Roberto Cota (Lega Nord) defeated incumbent Mercedes Bresso (Democratic Party). In 2014 Cota chose not to stand again for President and the parties composing his coalition failed to agree on a single candidate, resulting in a landslide victory for Sergio Chiamparino, a Democrat who had been Mayor of Turin from 2001 to 2011.
Piedmont is divided into eight provinces:
As in the rest of Italy, Italian is the official national language. The main local languages are Piedmontese, Insubric (spoken in the eastern part of the region), Occitan (spoken by a minority in the Occitan Valleys situated in the Province of Cuneo and the Metropolitan City of Turin), Franco-Provençal (spoken by another minority in the alpine heights of the Metropolitan City of Turin, such as the Susa valley), and Walser (spoken by a minority in the Province of Vercelli and the Province of Verbano-Cusio-Ossola).
Turin hosted the 2006 Winter Olympics.
In football, notable clubs in Piedmont include Turin-based Juventus and Torino, who have won 38 official top-flight league championships (as of the 2014-15 season) between them, more than any other city in Italy. Other smaller teams include the old "Piedmont Quadrilateral" components Novara, Alessandria, Casale and Pro Vercelli. With the pre-World War II success of Pro Vercelli and the dominance of Torino during the "Grande Torino" years and Juventus in more recent times, the region is the most successful in terms of championships won. Casale and Novese also contributed one "scudetto" each.
Other local teams include volleyball teams Cuneo (male) and AGIL Novara (female), basketball teams Biella Basketball and Junior Casale, ice hockey team Hockey Club Turin, and roller hockey side Amatori Vercelli, who have won three league titles, an Italian Cup and two CERS Cups. | https://en.wikipedia.org/wiki?curid=25061 |
Product ring
In mathematics, it is possible to combine several rings into one large product ring. This is done by giving the Cartesian product of a (possibly infinite) family of rings coordinatewise addition and multiplication. The resulting ring is called a direct product of the original rings.
An important example is the ring Z/"n"Z of integers modulo "n". If "n" is written as a product of prime powers (see fundamental theorem of arithmetic), $n = p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$,
where the "pi" are distinct primes, then Z/"n"Z is naturally isomorphic to the product ring $\mathbf{Z}/p_1^{n_1}\mathbf{Z} \times \mathbf{Z}/p_2^{n_2}\mathbf{Z} \times \cdots \times \mathbf{Z}/p_k^{n_k}\mathbf{Z}$.
This follows from the Chinese remainder theorem.
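The isomorphism can be verified by brute force for a small case, say $12 = 2^2 \cdot 3$. A minimal Python sketch, where the map simply reduces modulo each coprime prime-power factor:

```python
from itertools import product

n, factors = 12, (4, 3)   # 12 = 2**2 * 3, with coprime prime-power factors

def phi(a):
    """The CRT map Z/12Z -> Z/4Z x Z/3Z: reduce modulo each factor."""
    return tuple(a % m for m in factors)

# phi is a bijection on the 12 residues...
assert len({phi(a) for a in range(n)}) == n
# ...and it preserves coordinatewise addition and multiplication
for a, b in product(range(n), repeat=2):
    assert phi((a + b) % n) == tuple((u + v) % m for u, v, m in zip(phi(a), phi(b), factors))
    assert phi((a * b) % n) == tuple((u * v) % m for u, v, m in zip(phi(a), phi(b), factors))
```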
If $R = \prod_{i \in I} R_i$ is a product of rings, then for every "i" in "I" we have a surjective ring homomorphism $p_i : R \to R_i$ which projects the product on the "i"th coordinate. The product "R", together with the projections "pi", has the following universal property:
This shows that the product of rings is an instance of products in the sense of category theory.
When "I" is finite, the underlying additive group of $R = \prod_{i \in I} R_i$ coincides with the direct sum of the additive groups of the "R""i". In this case, some authors call "R" the "direct sum of the rings "R""i"" and write $\bigoplus_{i \in I} R_i$, but this is incorrect from the point of view of category theory, since it is usually not a coproduct in the category of rings: for example, when two or more of the "R""i" are nonzero, the inclusion map $R_i \to R$ fails to map 1 to 1 and hence is not a ring homomorphism.
Direct products are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct product.
If "Ai" is an ideal of "Ri" for each "i" in "I", then $A = \prod_{i \in I} A_i$ is an ideal of "R". If "I" is finite, then the converse is true, i.e., every ideal of "R" is of this form. However, if "I" is infinite and the rings "Ri" are non-zero, then the converse is false: the set of elements with all but finitely many nonzero coordinates forms an ideal which is not a direct product of ideals of the "Ri". The ideal "A" is a prime ideal in "R" if all but one of the "Ai" are equal to "Ri" and the remaining "Ai" is a prime ideal in "Ri". However, the converse is not true when "I" is infinite. For example, the direct sum of the "Ri" forms an ideal not contained in any such "A", but the axiom of choice gives that it is contained in some maximal ideal which is a fortiori prime.
An element "x" in "R" is a unit if and only if all of its components are units, i.e., if and only if $p_i(x)$ is a unit in "Ri" for every "i" in "I". The group of units of "R" is the product of the groups of units of the "Ri".
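A small sketch in the product ring Z/4Z × Z/3Z illustrates the unit criterion, as well as the zero divisors that any product of nonzero rings contains; the helper names are invented for the example:

```python
from math import gcd
from itertools import product

mods = (4, 3)   # the product ring Z/4Z x Z/3Z

def mul(x, y):
    """Coordinatewise multiplication in the product ring."""
    return tuple((a * b) % m for a, b, m in zip(x, y, mods))

def is_unit(x):
    """A tuple is a unit iff each coordinate is a unit modulo its modulus."""
    return all(gcd(a, m) == 1 for a, m in zip(x, mods))

units = [x for x in product(range(4), range(3)) if is_unit(x)]
assert len(units) == 2 * 2   # |(Z/4Z)*| x |(Z/3Z)*| = 2 x 2

# Nonzero elements supported on different coordinates multiply to zero:
x, y = (1, 0), (0, 2)
assert mul(x, y) == (0, 0)
```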
A product of two or more non-zero rings always has nonzero zero divisors: if "x" is an element of the product whose coordinates are all zero except at position "i", and "y" is an element of the product with all coordinates zero except at position "j" (where $i \neq j$), then $xy = 0$ in the product ring. | https://en.wikipedia.org/wiki?curid=25063 |
Posthumanism
Posthumanism or post-humanism (meaning "after humanism" or "beyond humanism") is a term with at least seven definitions according to philosopher Francesca Ferrando:
Philosopher Ted Schatzki suggests there are two varieties of posthumanism of the philosophical kind:
One, which he calls 'objectivism', tries to counter the overemphasis of the subjective or intersubjective that pervades humanism, and emphasises the role of the nonhuman agents, whether they be animals and plants, or computers or other things.
A second prioritizes practices, especially social practices, over individuals (or individual subjects) which, they say, constitute the individual.
There may be a third kind of posthumanism, propounded by the philosopher Herman Dooyeweerd. Though he did not label it as 'posthumanism', he made an extensive and penetrating immanent critique of Humanism, and then constructed a philosophy that presupposed neither Humanist, nor Scholastic, nor Greek thought but started with a different religious ground motive. Dooyeweerd prioritized law and meaningfulness as that which enables humanity and all else to exist, behave, live, occur, etc. ""Meaning" is the "being" of all that has been "created"," Dooyeweerd wrote, "and the nature even of our selfhood." Both human and nonhuman alike function subject to a common 'law-side', which is diverse, composed of a number of distinct law-spheres or "aspects". The temporal being of both human and non-human is multi-aspectual; for example, both plants and humans are bodies, functioning in the biotic aspect, and both computers and humans function in the formative and lingual aspect, but humans function in the aesthetic, juridical, ethical and faith aspects too. The Dooyeweerdian version is able to incorporate and integrate both the objectivist version and the practices version, because it allows nonhuman agents their own subject-functioning in various aspects and places emphasis on aspectual functioning.
Ihab Hassan, theorist in the academic study of literature, once stated:
This view predates most currents of posthumanism which have developed over the late 20th century in somewhat diverse, but complementary, domains of thought and practice. For example, Hassan is a known scholar whose theoretical writings expressly address postmodernity in society. Beyond postmodernist studies, posthumanism has been developed and deployed by various cultural theorists, often in reaction to problematic inherent assumptions within humanistic and enlightenment thought.
Theorists who both complement and contrast Hassan include Michel Foucault, Judith Butler, cyberneticists such as Gregory Bateson, Warren McCulloch, Norbert Wiener, Bruno Latour, Cary Wolfe, Elaine Graham, N. Katherine Hayles, Benjamin H. Bratton, Donna Haraway, Peter Sloterdijk, Stefan Lorenz Sorgner, Evan Thompson, Francisco Varela, Humberto Maturana and Douglas Kellner. Among the theorists are philosophers, such as Robert Pepperell, who have written about a "posthuman condition", which is often substituted for the term "posthumanism".
Posthumanism differs from classical humanism by relegating humanity back to one of many natural species, thereby rejecting any claims founded on anthropocentric dominance. According to this claim, humans have no inherent rights to destroy nature or set themselves above it in ethical considerations "a priori". Human knowledge, previously seen as the defining aspect of the world, is also reduced to a less controlling position. Human rights exist on a spectrum with animal rights and posthuman rights. The limitations and fallibility of human intelligence are acknowledged, even though this does not imply abandoning the rational tradition of humanism.
Proponents of a posthuman discourse suggest that innovative advancements and emerging technologies have transcended the traditional model of the human, as proposed by Descartes among others associated with philosophy of the Enlightenment period. In contrast to humanism, the discourse of posthumanism seeks to redefine the boundaries surrounding modern philosophical understanding of the human. Posthumanism represents an evolution of thought beyond that of the contemporary social boundaries and is predicated on the seeking of truth within a postmodern context. In so doing, it rejects previous attempts to establish 'anthropological universals' that are imbued with anthropocentric assumptions. Recently, critics have sought to describe the emergence of posthumanism as a critical moment in modernity, arguing for the origins of key posthuman ideas in modern fiction, in Nietzsche, or in a modernist response to the crisis of historicity.
The philosopher Michel Foucault placed posthumanism within a context that differentiated humanism from enlightenment thought. According to Foucault, the two existed in a state of tension: humanism sought to establish norms, while Enlightenment thought attempted to transcend all that is material, including the boundaries that are constructed by humanistic thought. Drawing on the Enlightenment's challenges to the boundaries of humanism, posthumanism rejects the various assumptions of human dogmas (anthropological, political, scientific) and takes the next step by attempting to change the nature of thought about what it means to be human. This requires not only decentering the human in multiple discourses (evolutionary, ecological, technological) but also examining those discourses to uncover inherent humanistic, anthropocentric, normative notions of humanness and the concept of the human.
Posthumanistic discourse aims to open up spaces to examine what it means to be human and critically question the concept of "the human" in light of current cultural and historical contexts. In her book "How We Became Posthuman", N. Katherine Hayles, writes about the struggle between different versions of the posthuman as it continually co-evolves alongside intelligent machines. Such coevolution, according to some strands of the posthuman discourse, allows one to extend their subjective understandings of real experiences beyond the boundaries of embodied existence. According to Hayles's view of posthuman, often referred to as technological posthumanism, visual perception and digital representations thus paradoxically become ever more salient. Even as one seeks to extend knowledge by deconstructing perceived boundaries, it is these same boundaries that make knowledge acquisition possible. The use of technology in a contemporary society is thought to complicate this relationship.
Hayles discusses the translation of human bodies into information (as suggested by Hans Moravec) in order to illuminate how the boundaries of our embodied reality have been compromised in the current age and how narrow definitions of humanness no longer apply. Because of this, according to Hayles, posthumanism is characterized by a loss of subjectivity based on bodily boundaries. This strand of posthumanism, including the changing notion of subjectivity and the disruption of ideas concerning what it means to be human, is often associated with Donna Haraway’s concept of the cyborg. However, Haraway has distanced herself from posthumanistic discourse due to other theorists’ use of the term to promote utopian views of technological innovation to extend the human biological capacity (even though these notions would more correctly fall into the realm of transhumanism).
While posthumanism is a broad and complex ideology, it has relevant implications today and for the future. It attempts to redefine social structures without inherently humanly or even biological origins, but rather in terms of social and psychological systems where consciousness and communication could potentially exist as unique disembodied entities. Questions subsequently emerge with respect to the current use and the future of technology in shaping human existence, as do new concerns with regards to language, symbolism, subjectivity, phenomenology, ethics, justice and creativity.
Sociologist James Hughes comments that there is considerable confusion between the two terms. In the introduction to their book on post- and transhumanism, Robert Ranisch and Stefan Sorgner address the source of this confusion, stating that posthumanism is often used as an umbrella term that includes both transhumanism and critical posthumanism.
Although both subjects relate to the future of humanity, they differ in their view of anthropocentrism. Pramod Nayar, author of "Posthumanism", states that posthumanism has two main branches: ontological and critical. Ontological posthumanism is synonymous with transhumanism. The subject is regarded as "an intensification of humanism." Transhumanist thought suggests that humans are not posthuman yet, but that human enhancement, often through technological advancement and application, is the passage to becoming posthuman. Transhumanism retains humanism's focus on "Homo sapiens" as the center of the world but also considers technology to be an integral aid to human progression. Critical posthumanism, however, is opposed to these views. Critical posthumanism "rejects both human exceptionalism (the idea that humans are unique creatures) and human instrumentalism (that humans have a right to control the natural world)." These contrasting views on the importance of human beings are the main distinctions between the two subjects.
Transhumanism is also more ingrained in popular culture than critical posthumanism, especially in science fiction. The term is referred to by Pramod Nayar as "the pop posthumanism of cinema and pop culture."
Some critics have argued that all forms of posthumanism, including transhumanism, have more in common than their respective proponents realize. Linking these different approaches, Paul James suggests that 'the key political problem is that, in effect, the position allows the human as a category of being to flow down the plughole of history':
However, some posthumanists in the humanities and the arts are critical of transhumanism (the brunt of Paul James's criticism), in part, because they argue that it incorporates and extends many of the values of Enlightenment humanism and classical liberalism, namely scientism, according to performance philosopher Shannon Bell:
While many modern leaders of thought are accepting of the ideologies described by posthumanism, some are more skeptical of the term. Donna Haraway, the author of "A Cyborg Manifesto", has outspokenly rejected the term, though she acknowledges a philosophical alignment with posthumanism. Haraway opts instead for the term companion species, referring to nonhuman entities with which humans coexist.
Questions of race, some argue, are suspiciously elided within the "turn" to posthumanism. Noting that the terms "post" and "human" are already loaded with racial meaning, critical theorist Zakiyyah Iman Jackson argues that the impulse to move "beyond" the human within posthumanism too often ignores "praxes of humanity and critiques produced by black people", including Frantz Fanon and Aime Cesaire to Hortense Spillers and Fred Moten. Interrogating the conceptual grounds in which such a mode of "beyond" is rendered legible and viable, Jackson argues that it is important to observe that "blackness conditions and constitutes the very nonhuman disruption and/or displacement" which posthumanists invite. In other words, given that race in general and blackness in particular constitutes the very terms through which human/nonhuman distinctions are made, for example in enduring legacies of scientific racism, a gesture toward a "beyond" actually "returns us to a Eurocentric transcendentalism long challenged". Posthumanist scholarship, due to characteristic rhetorical techniques, is frequently subject to the same critiques made of postmodernist scholarship in the 1980s and 1990s. | https://en.wikipedia.org/wiki?curid=25064 |
Parameter
A parameter (from the Ancient Greek παρά, "para": "beside", "subsidiary"; and μέτρον, "metron": "measure"), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc.
"Parameter" has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic and linguistics.
When a system is modeled by equations, the values that describe the system are called "parameters". For example, in mechanics, the masses, the dimensions and shapes (for solid bodies), the densities and the viscosities (for fluids), appear as parameters in the equations modeling movements. There are often several choices for the parameters, and choosing a convenient set of parameters is called "parametrization".
For example, if one were considering the movement of an object on the surface of a sphere much larger than the object (e.g. the Earth), there are two commonly used parametrizations of its position: angular coordinates (like latitude/longitude), which neatly describe large movements along circles on the sphere, and directional distance from a known point (e.g. "10km NNW of Toronto" or equivalently "8km due North, and then 6km due West, from Toronto" ), which are often simpler for movement confined to a (relatively) small area, like within a particular country or region. Such parametrizations are also relevant to the modelization of geographic areas (i.e. map drawing).
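As a toy illustration of switching between the two parametrizations of a displacement (pure arithmetic, not tied to any real coordinates):

```python
import math

# Directional-distance parametrization: 8 km due North, then 6 km due West
north, west = 8.0, 6.0

# Equivalent (distance, bearing) parametrization of the same displacement
distance = math.hypot(west, north)                 # straight-line distance in km
bearing = math.degrees(math.atan2(-west, north))   # degrees east of due North

print(distance)            # 10.0
print(round(bearing, 1))   # -36.9, i.e. about 37 degrees west of due North
```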
Mathematical functions have one or more arguments that are designated in the definition by variables. A function definition can also contain parameters, but unlike variables, parameters are not listed among the arguments that the function takes. When parameters are present, the definition actually defines a whole family of functions, one for every valid set of values of the parameters. For instance, one could define a general quadratic function by declaring
$f(x) = ax^2 + bx + c$;
Here, the variable "x" designates the function's argument, but "a", "b", and "c" are parameters that determine which particular quadratic function is being considered. A parameter could be incorporated into the function name to indicate its dependence on the parameter. For instance, one may define the base-"b" logarithm by the formula
$\log_b(x) = \frac{\log(x)}{\log(b)}$
where "b" is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative $\log_b'(x) = (x \ln b)^{-1}$.
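In code, a parametrized family of functions corresponds naturally to a closure: fixing the parameters yields one concrete function of the argument "x". A minimal sketch:

```python
import math

def quadratic(a, b, c):
    """Fix the parameters a, b, c and return one member of the family."""
    def f(x):                       # x is the argument; a, b, c are parameters
        return a * x**2 + b * x + c
    return f

f = quadratic(1, 0, -1)             # the particular function x**2 - 1
assert f(3) == 8

def log_base(b):
    """Fix the parameter b of the base-b logarithm."""
    return lambda x: math.log(x, b)

log2 = log_base(2)
assert abs(log2(8) - 3) < 1e-12
```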
In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. However, changing the status of symbols between parameter and variable changes the function as a mathematical object. For instance, the notation for the falling factorial power
$n^{\underline{k}} = n(n-1)(n-2)\cdots(n-k+1),$
defines a polynomial function of "n" (when "k" is considered a parameter), but is not a polynomial function of "k" (when "n" is considered a parameter). Indeed, in the latter case, it is only defined for non-negative integer arguments. More formal presentations of such situations typically start out with a function of several variables (including all those that might sometimes be called "parameters") such as
as the most fundamental object being considered, then defining functions with fewer variables from the main one by means of currying.
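The falling-factorial discussion above can be made concrete by currying a two-variable function, treating either variable as the parameter (the helper names are invented for the sketch):

```python
def falling(n, k):
    """Falling factorial n * (n-1) * ... * (n-k+1), for integer k >= 0."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

# Currying: fix one variable to obtain a one-parameter family.
def falling_in_n(k):
    """With k as the parameter, the result is a degree-k polynomial in n."""
    return lambda n: falling(n, k)

p = falling_in_n(3)                 # n(n-1)(n-2); fine even for non-integer n
assert p(5) == 60
assert p(2.5) == 2.5 * 1.5 * 0.5    # the polynomial in n extends off the integers

# With n as the parameter, the function is only defined for integer k >= 0.
q = lambda k: falling(5, k)
assert [q(k) for k in range(4)] == [1, 5, 20, 60]
```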
Sometimes it is useful to consider all functions with certain parameters as "parametric family", i.e. as an indexed family of functions. Examples from probability theory are given further below.
W.M. Woods ... a mathematician ... writes ... "... a variable is one of the many things a "parameter" is not." ... The dependent variable, the speed of the car, depends on the independent variable, the position of the gas pedal.
[Kilpatrick quoting Woods] "Now ... the engineers ... change the lever arms of the linkage ... the speed of the car ... will still depend on the pedal position ... but in a ... different manner. You have changed a parameter."
In the context of a mathematical model, such as a probability distribution, the distinction between variables and parameters was described by Bard as follows:
In analytic geometry, curves are often given as the image of some function. The argument of the function is invariably called "the parameter". A circle of radius 1 centered at the origin can be specified in more than one form: in implicit form, x^2 + y^2 = 1, or in parametric form, x = cos t, y = sin t, where "t" is the parameter.
Hence these equations, which might be called functions elsewhere, are in analytic geometry characterized as parametric equations, and the independent variables are considered as parameters.
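As a quick numerical check, a short Python sketch of the parametric form of the unit circle, verifying that every value of the parameter "t" yields a point satisfying the implicit equation x^2 + y^2 = 1:

```python
import math

# Parametric form of the unit circle: t is the parameter,
# (x, y) the coordinates of the resulting point.
def circle_point(t):
    return (math.cos(t), math.sin(t))

# Every sampled parameter value gives a point on the circle.
for t in [0.0, 1.0, math.pi / 3, 2.5]:
    x, y = circle_point(t)
    assert abs(x**2 + y**2 - 1.0) < 1e-12
print("all sampled parameter values lie on the circle")
```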
In mathematical analysis, integrals dependent on a parameter are often considered. These are of the form F(t) = ∫_{x0}^{x1} f(x; t) dx.
In this formula, "t" is the argument of the function "F" and, on the right-hand side, the "parameter" on which the integral depends. When evaluating the integral, "t" is held constant, and so it is considered to be a parameter. If we are interested in the value of "F" for different values of "t", we then consider "t" to be a variable. The quantity "x" is a "dummy variable" or "variable of integration" (confusingly, also sometimes called a "parameter of integration").
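A small Python sketch of such a parameter-dependent integral, using an assumed integrand f(x; t) = e^(−tx) on [0, 1], for which the closed form F(t) = (1 − e^(−t))/t is known:

```python
import math

# Approximate F(t) = integral of exp(-t*x) dx over [0, 1] by the midpoint
# rule; x is the variable of integration, t is held constant as a parameter.
def F(t, steps=100000):
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.exp(-t * x) * h
    return total

# Varying t (now treated as a variable) and comparing with the closed form.
for t in (0.5, 1.0, 2.0):
    exact = (1 - math.exp(-t)) / t
    assert abs(F(t) - exact) < 1e-6
print("numeric F(t) matches the closed form for each parameter value")
```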
In statistics and econometrics, the probability framework above still holds, but attention shifts to estimating the parameters of a distribution based on observed data, or testing hypotheses about them. In frequentist estimation, parameters are considered "fixed but unknown", whereas in Bayesian estimation they are treated as random variables, and their uncertainty is described as a distribution.
In the estimation theory of statistics, "statistic" or estimator refers to samples, whereas "parameter" or estimand refers to the populations from which the samples are taken. A statistic is a numerical characteristic of a sample that can be used as an estimate of the corresponding parameter, the numerical characteristic of the population from which the sample was drawn.
For example, the sample mean (estimator), denoted x̄, can be used as an estimate of the "mean" parameter (estimand), denoted "μ", of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted "S"2, can be used to estimate the "variance" parameter (estimand), denoted "σ"2, of the population from which the sample was drawn. (Note that the sample standard deviation ("S") is not an unbiased estimate of the population standard deviation ("σ"): see Unbiased estimation of standard deviation.)
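The estimator/estimand relationship can be illustrated with a short Python sketch (the data values below are made up purely for illustration):

```python
# Sample mean: estimator of the population mean mu.
def sample_mean(xs):
    return sum(xs) / len(xs)

# Sample variance with Bessel's correction (n - 1 in the denominator),
# an unbiased estimator of the population variance sigma^2.
def sample_variance(xs):
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_mean(data))       # 5.0
print(sample_variance(data))   # 32 / 7 ≈ 4.5714
```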
It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of "non-parametric statistics" as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric, since the statistic is computed from the rank-order of the data, disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests, since that statistic is computed directly from the data values and thus estimates the parameter known as the population correlation.
In probability theory, one may describe the distribution of a random variable as belonging to a "family" of probability distributions, distinguished from each other by the values of a finite number of "parameters". For example, one talks about "a Poisson distribution with mean value λ". The function defining the distribution (the probability mass function) is f(k; λ) = λ^k e^(−λ) / k!.
This example nicely illustrates the distinction between constants, parameters, and variables. "e" is Euler's number, a fundamental mathematical constant. The parameter λ is the mean number of observations of some phenomenon in question, a property characteristic of the system. "k" is a variable, in this case the number of occurrences of the phenomenon actually observed from a particular sample. If we want to know the probability of observing "k"1 occurrences, we plug it into the function to get f("k"1; λ). Without altering the system, we can take multiple samples, which will have a range of values of "k", but the system is always characterized by the same λ.
For instance, suppose we have a radioactive sample that emits, on average, five particles every ten minutes. We take measurements of how many particles the sample emits over ten-minute periods. The measurements exhibit different values of "k", and if the sample behaves according to Poisson statistics, then each value of "k" will come up in a proportion given by the probability mass function above. From measurement to measurement, however, λ remains constant at 5. If we do not alter the system, then the parameter λ is unchanged from measurement to measurement; if, on the other hand, we modulate the system by replacing the sample with a more radioactive one, then the parameter λ would increase.
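A brief Python sketch of the Poisson probability mass function, with λ = 5 as in the radioactive-sample example above (λ the parameter of the distribution, "k" the variable):

```python
import math

# Poisson probability mass function: lam is a parameter characterizing
# the system; k is the variable (the observed count).
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 5.0                               # e.g. five particles per ten-minute period
probs = [poisson_pmf(k, lam) for k in range(50)]

print(round(poisson_pmf(5, lam), 4))    # probability of observing exactly 5
print(round(sum(probs), 6))             # the probabilities sum to (essentially) 1
```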
Another common distribution is the normal distribution, which has as parameters the mean μ and the variance σ².
In the above examples, the distributions of the random variables are completely specified by the type of distribution, i.e. Poisson or normal, and the parameter values, i.e. mean and variance. In such a case, we have a parameterized distribution.
It is possible to use the sequence of moments (mean, mean square, ...) or cumulants (mean, variance, ...) as parameters for a probability distribution: see Statistical parameter.
In computer programming, two notions of parameter are commonly used, and are referred to as parameters and arguments—or more formally as a formal parameter and an actual parameter.
For example, in the definition of a function such as f(x) = x + 2,
"x" is the "formal parameter" (the "parameter") of the defined function.
When the function is evaluated for a given value, as in f(3),
3 is the "actual parameter" (the "argument") for evaluation by the defined function; it is a given value (actual value) that is substituted for the "formal parameter" of the defined function. (In casual usage the terms "parameter" and "argument" might inadvertently be interchanged, and thereby used incorrectly.)
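In Python, for instance, the distinction looks like this (a minimal sketch):

```python
# x is the formal parameter of the function definition.
def f(x):
    return x + 2

# 3 is the actual parameter (the argument) supplied at the call site;
# it is substituted for the formal parameter x during evaluation.
result = f(3)
print(result)        # 5

# Keyword arguments make the binding of actual to formal parameters explicit.
print(f(x=3))        # 5
```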
These concepts are discussed in a more precise way in functional programming and its foundational disciplines, lambda calculus and combinatory logic. Terminology varies between languages; some computer languages such as C define parameter and argument as given here, while Eiffel uses an alternative convention.
In engineering (especially involving data acquisition) the term "parameter" sometimes loosely refers to an individual measured item. This usage is not consistent, as sometimes the term "channel" refers to an individual measured item, with "parameter" referring to the setup information about that channel.
"Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Properties can have all sorts of dimensions, depending upon the system being considered; parameters are dimensionless, or have the dimension of time or its reciprocal."
The term can also be used in engineering contexts, however, as it is typically used in the physical sciences.
In environmental science and particularly in chemistry and microbiology, a parameter is used to describe a discrete chemical or microbiological entity that can be assigned a value: commonly a concentration, but it may also be a logical entity (present or absent), a statistical result such as a 95th percentile value, or in some cases a subjective value.
Within linguistics, the word "parameter" is almost exclusively used to denote a binary switch in a Universal Grammar within a Principles and Parameters framework.
In logic, the parameters passed to (or operated on by) an "open predicate" are called "parameters" by some authors (e.g., Prawitz, "Natural Deduction"; Paulson, "Designing a theorem prover"). Parameters locally defined within the predicate are called "variables". This extra distinction pays off when defining substitution (without this distinction special provision must be made to avoid variable capture). Others (maybe most) just call parameters passed to (or operated on by) an open predicate "variables", and when defining substitution have to distinguish between "free variables" and "bound variables".
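The variable-capture problem mentioned above can be demonstrated with a deliberately naive substitution over a toy term representation (entirely illustrative; a capture-avoiding version would rename the bound variable before substituting):

```python
# Toy lambda-calculus terms: ('var', name), ('lam', name, body), ('app', f, a).
# This substitution makes no special provision for capture, so a free
# variable in the replacement can become wrongly bound.
def subst(term, name, repl):
    kind = term[0]
    if kind == 'var':
        return repl if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:          # name is bound here; substitution stops
            return term
        return ('lam', term[1], subst(term[2], name, repl))
    return ('app', subst(term[1], name, repl), subst(term[2], name, repl))

# Substituting the free variable y for x inside λy.x captures y:
t = ('lam', 'y', ('var', 'x'))
print(subst(t, 'x', ('var', 'y')))   # ('lam', 'y', ('var', 'y')): y is now wrongly bound
```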
In music theory, a parameter denotes an element which may be manipulated (composed), separately from the other elements. The term is used particularly for pitch, loudness, duration, and timbre, though theorists or composers have sometimes considered other musical aspects as parameters. The term is particularly used in serial music, where each parameter may follow some specified series. Paul Lansky and George Perle criticized the extension of the word "parameter" to this sense, since it is not closely related to its mathematical sense, but it remains common. The term is also common in music production, as the functions of audio processing units (such as the attack, release, ratio, threshold, and other variables on a compressor) are defined by parameters specific to the type of unit (compressor, equalizer, delay, etc.).
Paavo Nurmi
Paavo Johannes Nurmi (; 13 June 1897 – 2 October 1973) was a Finnish middle-distance and long-distance runner. He was called the "Flying Finn" or the "Phantom Finn", as he dominated distance running in the early 20th century. Nurmi set 22 official world records at distances between 1500 metres and 20 kilometres, and won nine gold and three silver medals in his twelve events in the Olympic Games. At his peak, Nurmi was undefeated for 121 races at distances from 800 m upwards. Throughout his 14-year career, he remained unbeaten in cross country events and the 10,000 m.
Born into a working-class family, Nurmi left school at the age of twelve to provide for his family. In 1912, he was inspired by the Olympic feats of Hannes Kolehmainen and began developing a strict training program. Nurmi started to flourish during his military service, setting national records en route to his international debut at the 1920 Summer Olympics. After winning a silver medal in the 5000 m, he took gold in the 10,000 m and the cross country events. In 1923, Nurmi became the first runner to hold simultaneous world records in the mile, the 5000 m and the 10,000 m races, a feat which has never since been repeated. He set new world records for the 1500 m and the 5000 m with just an hour between the races, and took gold medals in both distances in less than two hours at the 1924 Olympics. Seemingly unaffected by the Paris heat wave, Nurmi won all his races and returned home with five gold medals, although he was frustrated that Finnish officials had refused to enter him for the 10,000 m.
Struggling with injuries and motivation issues after his exhaustive U.S. tour in 1925, Nurmi found his long-time rivals Ville Ritola and Edvin Wide ever more serious challengers. At the 1928 Summer Olympics, Nurmi recaptured the 10,000 m title but was beaten for the gold in the 5000 m and the 3000 m steeplechase. He then turned his attention to longer distances, breaking the world records for events such as the one hour run and the 25-mile marathon. Nurmi intended to end his career with a marathon gold medal, as his idol Kolehmainen had done. In a controversial case that strained Finland–Sweden relations and sparked an internal battle within the IAAF, Nurmi was suspended before the 1932 Games by an IAAF council that questioned his amateur status; two days before the opening ceremonies, the council rejected his entries. Although he was never declared a professional, Nurmi's suspension became definite in 1934 and he retired from running.
Nurmi later coached Finnish runners, raised funds for Finland during the Winter War, and worked as a haberdasher, building contractor, and share trader, eventually becoming one of Finland's richest people. In 1952, he was the lighter of the Olympic Flame at the Summer Olympics in Helsinki. Nurmi's running speed and elusive personality spawned nicknames such as the "Phantom Finn", while his achievements, training methods and running style influenced future generations of middle- and long-distance runners. Nurmi, who rarely ran without a stopwatch in his hand, has been credited for introducing the "even pace" strategy and analytic approach to running, and for making running a major international sport.
Nurmi was born in Turku, Finland, to carpenter Johan Fredrik Nurmi and his wife Matilda Wilhelmiina Laine. Nurmi's siblings, Siiri, Saara, Martti and Lahja, were born in 1898, 1902, 1905 and 1908, respectively. In 1903, the Nurmi family moved from Raunistula into a 40-square-meter apartment in central Turku, where Paavo Nurmi would live until 1932. The young Nurmi and his friends were inspired by the English long-distance runner Alfred Shrubb. They regularly ran or walked six kilometres (four miles) to swim in Ruissalo, and back, sometimes twice a day. By the age of eleven, Nurmi ran the 1500 metres in 5:02. Nurmi's father Johan died in 1910 and his sister Lahja a year later. The family struggled financially, renting out their kitchen to another family and living in a single room. Nurmi, a talented student, left school to work as an errand boy for a bakery. Although he stopped running actively, he got plenty of exercise pushing heavy carts up the steep slopes in Turku. He later credited these climbs for strengthening his back and leg muscles.
At 15, Nurmi rekindled his interest in athletics after being inspired by the performances of Hannes Kolehmainen, who was said to "have run Finland onto the map of the world" at the 1912 Summer Olympics. He bought his first pair of sneakers a few days later. Nurmi trained primarily by doing cross country running in the summers and cross country skiing in the winters. In 1914, Nurmi joined the sports club Turun Urheiluliitto and won his first race, over 3000 metres. Two years later, he revised his training program to include walking, sprints and calisthenics. He continued to provide for his family through his new job at the Ab. H. Ahlberg & Co workshop in Turku, where he worked until he started his military service at a machine gun company in the Pori Brigade in April 1919. During the Finnish Civil War in 1918, Nurmi remained politically passive and concentrated on his work and his Olympic ambitions. After the war, he decided not to join the newly founded Finnish Workers' Sports Federation, but wrote articles for the federation's chief organ and criticized the discrimination against many of his fellow workers and athletes.
In the army, Nurmi quickly impressed in the athletic competitions: while others marched, Nurmi ran the whole distances with a rifle on his shoulder and a backpack full of sand. Nurmi's stubbornness caused him difficulties with his non-commissioned officers, but he was favoured by the superior officers, despite his refusal to take the soldier's oath. As the unit commander Hugo Österman was a known sports aficionado, Nurmi and a few other athletes were given free time to practice. Nurmi improvised new training methods in the army barracks; he ran behind trains, holding on to the rear bumper, to stretch his stride, and used heavy iron-clad army boots to strengthen his legs. Nurmi soon began setting personal bests and came close to Olympic selection. In March 1920, he was promoted to corporal ("alikersantti"). On 29 May 1920, he set his first national record in the 3000 m and went on to win the 1500 m and the 5000 m at the Olympic trials in July.
Nurmi made his international debut in August at the 1920 Summer Olympics in Antwerp, Belgium. He took his first medal by finishing second to Frenchman Joseph Guillemot in the 5000 m. This would remain the only time that Nurmi lost to a non-Finnish runner in the Olympics. He went on to win gold medals in his other three events: the 10,000 m, sprinting past Guillemot on the final curve and improving his personal best by over a minute, the cross country race, beating Sweden's Eric Backman, and the cross country team event where he helped Heikki Liimatainen and Teodor Koskenniemi defeat the British and Swedish teams. Nurmi's success brought electric lighting and running water to his family in Turku. Nurmi himself was given a scholarship to study at the Teollisuuskoulu industrial school in Helsinki.
Spurred by his defeat to Guillemot, Nurmi turned his races into a series of experiments which he analyzed meticulously. Previously known for his blistering pace on the first few laps, Nurmi started to carry a stopwatch and spread his efforts more uniformly over the distance. He aimed to perfect his technique and tactics to a point where the performances of his rivals would be rendered meaningless. Nurmi set his first world record in the 10,000 m in Stockholm in 1921. In 1922, he broke the world records for the 2000 m, the 3000 m and the 5000 m. A year later, Nurmi added the records for the 1500 m and the mile. His feat of holding the world records for the mile, the 5000 m and the 10,000 m at the same time has not been matched by any other athlete before or since. Nurmi also tested his speed in the 800 m, winning the 1923 Finnish Championships with a new national record. After excelling in mathematics, Nurmi graduated as an engineer in 1923 and returned home to prepare for the upcoming Olympic Games.
Nurmi's trip to the 1924 Summer Olympics was endangered by a knee injury in the spring of 1924, but he recovered and resumed training twice a day. On 19 June, Nurmi tried out the 1924 Olympic schedule at the Eläintarha Stadium in Helsinki by running the 1500 m and the 5000 m inside an hour, setting new world records for both distances. In the 1500 m final at the Olympics in Paris, Nurmi ran the first 800 m almost three seconds faster than in his world record run. His only challenger, Ray Watson of the United States, gave up before the last lap and Nurmi was able to slow down and coast to victory ahead of Willy Schärer, H. B. Stallard and Douglas Lowe, still breaking the Olympic record by three seconds. The 5000 m final started less than two hours later, and Nurmi faced a tough challenge from countryman Ville Ritola, who had already won the 3000 m steeplechase and the 10,000 m. Ritola and Edvin Wide figured that Nurmi must be tired and tried to burn him off by running at world-record pace. Realizing that he was now racing the two men and not the clock, Nurmi tossed his stopwatch onto the grass. The Finns later passed the Swede as his pace faded and continued their duel. On the home straight, Ritola sprinted from the outside but Nurmi increased his pace to keep his rival a metre behind.
In the cross country events, the heat of 45 °C (113 °F) caused all but 15 of the 38 competitors to abandon the race. Eight finishers were taken away on stretchers. One athlete began to run in tiny circles after reaching the stadium, until he veered into the stands and knocked himself unconscious. Early leader Wide was among those who blacked out along the course, and was incorrectly reported to have died at the hospital. Nurmi exhibited only slight signs of exhaustion after beating Ritola to the win by nearly a minute and a half. As Finland looked to have lost the team medal, the disoriented Liimatainen staggered into the stadium, but was barely moving forward. An athlete ahead of him fainted 50 metres from the finish, and Liimatainen stopped and tried to find his way off the track, thinking he had reached the finish line. After ignoring shouts and keeping the spectators in suspense for a while, he turned in the right direction, realized his situation, and reached the finish in 12th place, securing the team gold. Those present at the stadium were shocked by what they had witnessed, and Olympic officials decided to ban cross country running from future Games.
In the 3000 m team race on the next day, Nurmi and Ritola again finished first and second, and Elias Katz secured the gold medal for the Finnish team by finishing fifth. Nurmi had won five gold medals in five events, but he left the Games embittered as the Finnish officials had allocated races between their star runners and prevented him from defending his title in the 10,000 m, the distance that was dearest to him. After returning to Finland, Nurmi set a 10,000 m world record that would last for almost 13 years. He now held the 1500 m, the mile, the 3000 m, the 5000 m and the 10,000 m world records simultaneously.
In early 1925, Nurmi embarked on a widely publicised tour of the United States. He competed in 55 events (45 indoors) during a five-month period, starting at a sold-out Madison Square Garden on 6 January. His debut repeated his feats in Helsinki and Paris: Nurmi defeated Joie Ray and Lloyd Hahn to win the mile and Ritola to win the 5000 m, again setting new world records for both distances. Nurmi broke ten more indoor world records in regular events and set several new best times for rarer distances. He won 51 of the events, abandoned one race and lost two handicap races along with his final event: a half-mile race at the Yankee Stadium, where he finished second to American track star Alan Helffrich. Helffrich's victory ended Nurmi's 121-race, four-year win streak in individual scratch races at distances from 800 m upwards. Although he hated losing more than anything, Nurmi was the first to congratulate Helffrich. The tour made Nurmi extremely popular in the United States, and the Finn agreed to meet President Calvin Coolidge at the White House. Nurmi left America fearing that he had competed too often and burned himself out.
Nurmi struggled to maintain his motivation for running, his problems heightened by rheumatism and Achilles tendon troubles. He quit his job as a machinery draughtsman in 1926 and began studying business intensively. As Nurmi started a new career as a share dealer, his financial advisors included Risto Ryti, director of the Bank of Finland. In 1926, Nurmi broke Wide's world record for the 3000 m in Berlin and then improved the record in Stockholm, despite Nils Eklöf repeatedly trying to slow his pace down in an effort to aid Wide. Nurmi was furious at the Swedes and vowed never to race Eklöf again. In October 1926, he lost a 1500 m race along with his world record to Germany's Otto Peltzer. This marked the first time in over five years and 133 races that Nurmi had been defeated at a distance over 1000 m. In 1927, Finnish officials barred him from international competition for refusing to run against Eklöf at the Finland-Sweden international, cancelling the Peltzer rematch scheduled for Vienna. Nurmi ended his season and threatened, until late November, to withdraw from the 1928 Summer Olympics. At the 1928 Olympic trials, Nurmi finished third in the 1500 m behind eventual gold and bronze medalists Harri Larva and Eino Purje, and he decided to concentrate on the longer distances. He added the steeplechase to his program, although he had only tried the event twice before, the latest being a two-mile steeplechase victory at the 1922 British Championships.
At the 1928 Olympics in Amsterdam, Nurmi competed in three events. He won the 10,000 m by staying right behind Ritola until sprinting past him on the home straight. Before the 5000 m final, Nurmi injured himself in his qualifying heat for the 3000 m steeplechase. He fell on his back at the water jump, spraining his hip and foot. Lucien Duquesne stopped to help him up, and Nurmi thanked the Frenchman by pacing him past the field and offering him the heat win, which Duquesne gracefully refused. In the 5000 m, Nurmi tried to repeat his move on Ritola but had to watch his teammate pull away instead. Nurmi, looking more exhausted than ever before, only barely managed to keep Wide behind and take silver. Nurmi had little time to rest or nurse his injuries, as the 3000 m steeplechase started the next day. Struggling with the hurdles, Nurmi let Finland's steeplechase specialist Toivo Loukola escape into the distance. On the final lap, he sprinted clear of the others and finished nine seconds behind the world-record-setting Loukola; Nurmi's time also bettered the previous record. Although Ritola did not finish, Ove Andersen completed a Finnish sweep of the medals.
Nurmi stated to a Swedish newspaper that "this is absolutely my last season on the track. I am beginning to get old. I have raced for fifteen years and have had enough of it." However, Nurmi continued running, turning his attention to longer distances. In October, he broke the world records for the 15 km, the 10 miles and the one hour run in Berlin. Nurmi's one-hour record stood for 17 years, until Viljo Heino ran 129 metres further in 1945. In January 1929, Nurmi started his second U.S. tour from Brooklyn. He suffered his first-ever defeat in the mile to Ray Conger at the indoor Wanamaker Mile. Nurmi was seven seconds slower than in his world record run in 1925, and it was immediately speculated that the mile had become too short a distance for him. In 1930, he set a new world record for the 20 km. In July 1931, Nurmi showed he still had pace for the shorter distances by beating Lauri Lehtinen, Lauri Virtanen and Volmari Iso-Hollo, and breaking the world record in the now-rare two miles. He was the first runner to complete the distance in less than nine minutes. Nurmi planned to compete only in the 10,000 m and the marathon at the 1932 Summer Olympics in Los Angeles, stating that he "won't enter the 5000 metres for Finland has at least three excellent men for that event."
In April 1932, the executive council of the International Amateur Athletics Federation (IAAF) suspended Nurmi from international athletics events pending an investigation into his amateur status by the Finnish Athletics Federation. The Finnish authorities criticized the IAAF for acting without a hearing, but agreed to launch an investigation. It was customary for the IAAF to accept the final decision of its national branch, and the Associated Press wrote that "there is little doubt that if the Finnish federation clears Nurmi the international body will accept its decision without question." A week later, the Finnish Athletics Federation ruled in favor of Nurmi, finding no evidence for the allegations of professionalism. Nurmi was hopeful that his suspension would be lifted in time for the Games.
On 26 June 1932, Nurmi started his first marathon at the Olympic trials. Without drinking a drop of liquid, he ran the old-style 'short marathon' of 40.2 km (25 miles) in 2:22:03.8, on pace to finish in about 2:29:00, just under Albert Michelsen's marathon world record of 2:29:01.8. At the time, he led Armas Toivonen, the eventual Olympic bronze medalist, by six minutes. Nurmi's time was a new unofficial world record for the short marathon. Confident that he had done enough, Nurmi stopped and retired from the race owing to problems with his Achilles tendon. The Finnish Olympic Committee entered Nurmi for both the 10,000 m and the marathon. "The Guardian" reported that "some of his trial times were almost unbelievable," and Nurmi went on to train at the Olympic Village in Los Angeles despite his injury. Nurmi had set his heart on ending his career with a marathon gold medal, as Kolehmainen had done shortly after the First World War.
Less than three days before the 10,000 m, a special commission of the IAAF, consisting of the same seven members that had suspended Nurmi, rejected the Finn's entries and barred him from competing in Los Angeles. Sigfrid Edström, president of the IAAF and chairman of its executive council, stated that the full congress of the IAAF, which was scheduled to start the next day, could not reinstate Nurmi for the Olympics but merely review the phases and political angles related to the case. The AP called this "one of the slickest political maneuvers in international athletic history", and wrote that the Games would now be "like Hamlet without the celebrated Dane in the cast." Thousands protested against the action in Helsinki. Details of the case were not released to the press, but the evidence against Nurmi was believed to be the sworn statements from German race promoters that Nurmi had received $250–500 per race when running in Germany in autumn 1931. The statements were produced by Karl Ritter von Halt, after Edström had sent him increasingly threatening letters warning that if evidence against Nurmi were not provided, he would be "unfortunately obliged to take stringent action against the German Athletics Association."
On the eve of the marathon, all the entrants of the race except for the Finns, whose positions were known, filed a petition asking that Nurmi's entry be accepted. Edström's right-hand man Bo Ekelund, secretary general of the IAAF and head of the Swedish Athletics Federation, approached the Finnish officials and stated that he might be able to arrange for Nurmi to participate in the marathon outside the competition. However, Finland maintained that as long as the athlete had not been declared a professional, he must have the right to participate in the race officially. Although he had been diagnosed with a pulled Achilles tendon two weeks earlier, Nurmi stated he would have won the event by five minutes. The congress concluded without Nurmi being declared a professional, but the council's authority to disbar an athlete was upheld on a 13–12 vote. However, due to the close vote, the matter was postponed until the 1934 meet in Stockholm. Finns charged that the Swedish officials had used devious tricks in their campaign against Nurmi's amateur status, and ceased all athletic relations with Sweden. A year earlier, controversies on the track and in the press had led Finland to withdraw from the Finland-Sweden athletics international. After Nurmi's suspension, Finland did not agree to return to the event until 1939.
Nurmi refused to turn professional, and continued running as an amateur in Finland. In 1933, he ran his first 1500 m in three years and won the national title with his best time since 1926. At the IAAF meet in August 1934, Finland put forward two proposals, both of which were defeated. The council then brought forward its resolution empowering it to suspend athletes that it finds in violation of the IAAF amateur code. By a 12–5 vote, with many abstaining, Nurmi's suspension from international amateur athletics became definite. Less than three weeks later, Nurmi retired from running with a 10,000 m victory in Viipuri on 16 September 1934. Nurmi remained undefeated in the distance throughout his 14-year top-level career. In cross country running, his win streak lasted 19 years.
While active as a runner, Nurmi was known to be secretive about his training methods. Always running alone, he upped his pace and quickly exhausted anyone who was bold enough to join him. Even his club mate Harri Larva had learned little from him. After ending his career, Nurmi became a coach for the Finnish Athletics Federation and trained runners for the 1936 Summer Olympics in Berlin. In 1935, Nurmi along with the entire board of directors quit the federation after a heated 40–38 vote to resume athletic relations with Sweden. However, Nurmi returned to coaching three months later, and the Finnish distance runners went on to take three gold medals, three silvers and a bronze at the Games. In 1936, Nurmi also opened a men's clothing store (haberdashery) in Helsinki. It became a popular tourist attraction, and Emil Zátopek was among those who visited the store trying to meet Nurmi. The Finn spent his time in the back room, running another new business venture: construction. As a contractor, Nurmi built forty apartment buildings in Helsinki with about a hundred flats in each. Within five years, he was rated a millionaire. His fiercest rival Ritola ended up living in one of Nurmi's flats, at half price. Nurmi also made money on the stock market, eventually becoming one of Finland's richest people.
In February 1940, during the Winter War between Finland and the Soviet Union, Nurmi returned to the United States with his protégé Taisto Mäki, who had become the first man to run the 10,000 m under 30 minutes, to raise funds and rally support to the Finnish cause. The relief drive, directed by former president Herbert Hoover, included a coast-to-coast tour by Nurmi and Mäki. Hoover welcomed the two as "ambassadors of the greatest sporting nation in the world." While in San Francisco, Nurmi received news that one of his apprentices, 1936 Olympic champion Gunnar Höckert, had been killed in action. Nurmi left for Finland in late April, and later served in the Continuation War in a delivery company and as a trainer in the military staff. Before he was discharged in January 1942, Nurmi was promoted first to a staff sergeant ("ylikersantti") and later to a sergeant first class ("vääpeli").
In 1952, Nurmi was persuaded by Urho Kekkonen, Prime Minister of Finland and former chairman of the Finnish Athletics Federation, to carry the Olympic torch into the Olympic Stadium at the 1952 Summer Olympics in Helsinki. His appearance astonished the spectators, and "Sports Illustrated" wrote that "his celebrated stride was unmistakable to the crowd. When he came into view, waves of sound began to build throughout the stadium, rising to a roar, then to a thunder. When the national teams, assembled in formation on the infield, saw the flowing figure of Nurmi, they broke ranks like excited schoolchildren, dashing toward the edge of the track." After lighting the flame in the Olympic Cauldron, Nurmi passed the torch to his idol Kolehmainen, who lighted the beacon in the tower. In the cancelled 1940 Summer Olympics, Nurmi had been slated to lead a group of fifty Finnish gold medal winners.
Nurmi felt that he got too much credit as an athlete and too little as a businessman, but his interest in running never died. He even returned to the track himself a few times. In 1946, he faced his old rival Edvin Wide in Stockholm in a benefit for the victims of the Greek Civil War. Nurmi ran for the last time on 18 February 1966 at the Madison Square Garden, invited by the New York Athletic Club. In 1962, Nurmi predicted that welfare countries would start to struggle in the distance events: "The higher the standard of living in a country, the weaker the results often are in the events which call for work and trouble. I would like to warn this new generation: 'Do not let this comfortable life make you lazy. Do not let the new means of transport kill your instinct for physical exercise. Too many young people get used to driving in a car even for small distances.'" In 1966, he took the microphone in front of 300 sports club guests and criticised the state of distance running in Finland, reproaching the sports executives as publicity seekers and tourists, and demanding athletes sacrifice everything to accomplish something. Nurmi lived to see the renaissance of Finnish running in the 1970s, led by athletes such as the 1972 Olympic gold medalists Lasse Virén and Pekka Vasala. He had complimented the running style of Virén, and advised Vasala to concentrate on Kipchoge Keino.
Although he accepted an invitation from President Lyndon B. Johnson to revisit the White House in 1964, Nurmi lived a very secluded life until the late 1960s when he began granting some press interviews. On his 70th birthday, Nurmi agreed to an interview for Yle, Finland's national public-broadcasting company, only after learning that President Kekkonen would act as the interviewer. Suffering from health problems, with at least one heart attack, a stroke and failing eyesight, Nurmi at times spoke bitterly about sports, calling it a waste of time compared to science and art. He died in 1973 in Helsinki and was given a state funeral. Kekkonen attended the funeral and praised Nurmi: "People explore the horizons for a successor. But none comes and none will, for his class is extinguished with him." At the request of Nurmi, who enjoyed classical music and played the violin, Konsta Jylhä's "Vaiennut viulu" ("The Silenced Violin") was played during the ceremony. Nurmi's last record fell in 1996; his 1925 world record for the indoor 2000 m lasted as the Finnish national record for 71 years.
Nurmi was married to socialite Sylvi Laaksonen (1907–1968) from 1932 to 1935. Laaksonen, who was not interested in athletics, opposed Nurmi raising their newborn son Matti to be a runner and stated to the Associated Press in 1933, "[H]is concentration on athletics at last forced me to go to the judge for a divorce." Matti Nurmi did become a middle-distance runner, and later a "self-made" businessman. Nurmi's relationship with his son was termed "uneasy". Matti admired his father more as a businessman than as an athlete, and the two never discussed his running career. As a runner, Matti was at his best in the 3000 m, where he equalled his father's time. In the famous race on 11 July 1957 when the "three Olavis" (Salsola, Salonen and Vuorisalo) broke the world record for the 1500 m, Matti Nurmi finished a distant ninth with his personal best, 2.2 seconds slower than his father's world record from 1924. Hollywood actress Maila Nurmi, best known as the horror icon "Vampira", was often referred to as Paavo Nurmi's niece. However, the kinship is not supported by official documents.
Nurmi enjoyed the Finnish sports massage and sauna-bathing traditions, crediting the Finnish sauna for his performances during the Paris heat wave in 1924. He had a versatile diet, although he had practiced vegetarianism between the ages of 15 and 21. Nurmi, who identified as neurasthenic, was known to be "taciturn", "stony-faced" and "stubborn". He was not believed to have had any close friends, but he had occasionally socialized and showed his "sarcastic sense of humour" among the small circles he knew. Acclaimed the biggest sporting figure in the world at his peak, Nurmi was averse to publicity and the media, stating later on his 75th birthday, "[W]orldly fame and reputation are worth less than a rotten lingonberry." French journalist Gabriel Hanot questioned Nurmi's intensive approach to sports and wrote in 1924 that Nurmi "is ever more serious, reserved, concentrated, pessimistic, fanatic. There is such coldness in him and his self-control is so great that never for a moment does he show his feelings." Some contemporary Finns nicknamed him "Suuri vaikenija" (The Great Silent One), and Ron Clarke noted that Nurmi's persona remained a mystery even to Finnish runners and journalists: "Even to them, he was never quite real. He was enigmatic, sphinx-like, a god in a cloud. It was as if he was all the time playing a role in a drama."
Nurmi was more responsive to his fellow athletes than to the media. He exchanged ideas with sprinter Charley Paddock and even trained with his rival Otto Peltzer. Nurmi told Peltzer to forget his opponents: "Conquering yourself is the greatest challenge of an athlete." Nurmi was known to emphasize the importance of psychological strength: "Mind is everything; muscle, pieces of rubber. All that I am, I am because of my mind." Regarding Nurmi's track antics, Peltzer found that "in his impenetrability he was a Buddha gliding on the track. Stopwatch in hand, lap after lap, he ran towards the tape, subject only to the laws of a mathematical table." Marathoner Johnny Kelley, who first met his idol at the 1936 Olympics, said that while Nurmi appeared cold to him at first, the two chatted for quite a while after Nurmi had asked for his name: "He grabbed ahold of me — he was so excited. I couldn't believe it!"
Nurmi's speed and elusive personality led to nicknames such as the "Phantom Finn", the "King of Runners" and "Peerless Paavo", while his mathematical prowess and use of a stopwatch led the press to characterize him as a running machine. One newspaperman dubbed Nurmi "a mechanical Frankenstein created to annihilate time." Phil Cousineau noted that "his own innovation — the tactic of pacing himself with a stopwatch — both inspired and troubled people in an era when the robot was becoming symbolic of the modern soulless human being." Among the popular newspaper rumours about Nurmi was that he had a "freakish heart" with a very low pulse rate. During the debate over his amateur status, Nurmi was joked to have "the lowest heartbeat and the highest asking price of any athlete in the world."
Nurmi broke 22 official world records at distances from 1500 m to 20 km, a record among runners. He also set many more unofficial ones, for a total of 58. His indoor world records were all unofficial, as the IAAF did not ratify indoor records until the 1980s. Nurmi's record for most Olympic gold medals was matched by gymnast Larisa Latynina in 1964, swimmer Mark Spitz in 1972 and fellow track and field athlete Carl Lewis in 1996, and broken by swimmer Michael Phelps in 2008. Nurmi's record for most medals in the Olympic Games stood until Edoardo Mangiarotti won his 13th medal in fencing in 1960. "Time" selected Nurmi as the greatest Olympian of all time in 1996, and the IAAF named him among the first twelve athletes inducted into the IAAF Hall of Fame in 2012.
Nurmi introduced the "even pace" strategy to running, pacing himself with a stopwatch and spreading his energy uniformly over the race. He reasoned that "when you race against time, you don't have to sprint. Others can't hold the pace if it is steady and hard all through to the tape." Archie Macpherson stated that "with the stopwatch always in his hand, he elevated athletics to a new plane of intelligent application of effort and was the harbinger of the modern scientifically prepared athlete." Nurmi was considered a pioneer also in regards to training; he developed a systematic all-year-round training program that included both long-distance work and interval running. Peter Lovesey wrote in "The Kings of Distance: A Study of Five Great Runners" that Nurmi "accelerated the progress of world records; developed and actually came to personify the analytic approach to running; and he was a profound influence not only in Finland, but throughout the world of athletics. Nurmi, his style, technique and tactics were held to be infallible, and really seemed so, as successive imitators in Finland steadily improved the records." Cordner Nelson, founder of "Track & Field News", credited Nurmi for popularizing running as a spectator sport: "His imprint on the track world was greater than any man's before or after. He, more than any man, raised track to the glory of a major sport in the eyes of international fans, and they honored him as one of the truly great athletes of all sports."
Nurmi's achievements and training methods inspired future track stars of many generations. Emil Zátopek chanted "I am Nurmi! I am Nurmi!" when he trained as a child, and based his training system on what he was able to find out about Nurmi's methods. Lasse Virén idolized Nurmi and was scheduled to meet him for the first time on the day that Nurmi died. Hicham El Guerrouj was inspired to become a runner so that he could "repeat the achievements of the great man of whom his grandfather spoke." He became the first man after Nurmi to win the 1500 m and the 5000 m at the same Games. Nurmi's influence in the Olympic arena stretched beyond running. At the 1928 Olympics, Kazimierz Wierzyński won the lyric gold medal with his poem "Olympic Laurel", which included a verse on Nurmi. In 1936, Ludwig Stubbendorf and his horse "Nurmi" won the individual and team gold medals in eventing.
A bronze statue of Nurmi was sculpted by Wäinö Aaltonen in 1925. The original is held at the art museum Ateneum, but copies cast from the original mould exist in Turku, in Jyväskylä, in front of the Helsinki Olympic Stadium and at the Olympic Museum in Lausanne, Switzerland. In a widely publicized prank by the students of the Helsinki University of Technology, a miniature copy of the statue was "discovered" in the 300-year-old wreck of the Swedish warship "Vasa" when it was raised from the bottom of the sea in 1961. Statues of Nurmi were also sculpted by Renée Sintenis in 1926 and by Carl Eldh, whose 1937 work "Löpare" ("Runners") depicts a battle between Nurmi and Edvin Wide. "Boken om Nurmi" ("The Book about Nurmi"), released in Sweden in 1925, was the first biographical book on a Finnish sportsman. Finnish astronomer Yrjö Väisälä named the main belt asteroid 1740 Paavo Nurmi after Nurmi in 1939, while Finnair named its first DC-8 "Paavo Nurmi" in 1969. Nurmi's former rival Ville Ritola boarded the plane when he moved back to Finland in 1970.
The Paavo Nurmi Marathon, held annually since 1969, is the oldest marathon in Wisconsin and the second-oldest in the American Midwest. In Finland, another marathon bearing the name has been held in Nurmi's hometown of Turku since 1992, along with the athletics competition Paavo Nurmi Games, started in 1957. Finlandia University, an American college with Finnish roots, named its athletic center after Nurmi. A ten-mark bill featuring a portrait of Nurmi was issued by the Bank of Finland in 1987. The other revised bills honored architect Alvar Aalto, composer Jean Sibelius, Enlightenment thinker Anders Chydenius and author Elias Lönnrot, respectively. The Nurmi bill was replaced by a new 20-mark note featuring Väinö Linna in 1993. In 1997, a historic stadium in Turku was renamed the "Paavo Nurmi Stadium". Twenty world records have been set at the stadium, including John Landy's records in the 1500 m and the mile, Nurmi's record in the 3000 m and Zátopek's record in the 10,000 m. In fiction, Nurmi appears in William Goldman's 1974 novel "Marathon Man" as the idol of the protagonist, who aims to become a greater runner than Nurmi. The opera about Nurmi, "Paavo the Great. Great Race. Great Dream.", written by Paavo Haavikko and composed by Tuomas Kantelinen, debuted at the Helsinki Olympic Stadium in 2000. In a 2005 episode of "The Simpsons", Mr. Burns brags that he once outraced Nurmi in his antique motorcar.
The starts figure excludes heats, handicap races, relays, and events where Nurmi raced alone against relay teams.
Purple Heart
The Purple Heart is a United States military decoration awarded in the name of the President to those wounded or killed while serving, on or after April 5, 1917, with the U.S. military. Together with its forerunner, the Badge of Military Merit, which took the form of a heart made of purple cloth, the Purple Heart is the oldest military award still given to U.S. military members – the only earlier award being the now-obsolete Fidelity Medallion. The National Purple Heart Hall of Honor is located in New Windsor, New York.
The original Purple Heart, designated as the Badge of Military Merit, was established by George Washington – then the commander-in-chief of the Continental Army – by order from his Newburgh, New York headquarters on August 7, 1782. The Badge of Military Merit was awarded to only three Revolutionary War soldiers by Washington himself, though he authorized his subordinate officers to issue Badges of Merit as appropriate. Although never abolished, the award of the badge was not proposed again officially until after World War I.
On October 10, 1927, Army Chief of Staff General Charles Pelot Summerall directed that a draft bill be sent to Congress "to revive the Badge of Military Merit". The bill was withdrawn and action on the case ceased January 3, 1928, but the office of the Adjutant General was instructed to file all materials collected for possible future use. A number of private interests sought to have the medal re-instituted in the Army; this included the board of directors of the Fort Ticonderoga Museum in Ticonderoga, New York.
On January 7, 1931, Summerall's successor, General Douglas MacArthur, confidentially reopened work on a new design, involving the Washington Commission of Fine Arts. Elizabeth Will, an Army heraldic specialist in the Office of the Quartermaster General, was named to redesign the newly revived medal, which became known as the Purple Heart. Using general specifications provided to her, Will created the design sketch for the present medal of the Purple Heart. The new design, which exhibits a bust and profile of George Washington, was issued on the bicentennial of Washington's birth. Will's obituary, in the edition of February 8, 1975 of "The Washington Post" newspaper, reflects her many contributions to military heraldry.
The Commission of Fine Arts solicited plaster models from three leading sculptors for the medal, selecting that of John R. Sinnock of the Philadelphia Mint in May 1931. By Executive Order of the President of the United States, the Purple Heart was revived on the 200th anniversary of George Washington's birth, out of respect to his memory and military achievements, by War Department order dated February 22, 1932.
The criteria were announced in a War Department circular dated February 22, 1932, and authorized award to soldiers, upon their request, who had been awarded the Meritorious Service Citation Certificate, Army Wound Ribbon, or were authorized to wear Wound Chevrons subsequent to April 5, 1917, the day before the United States entered World War I. The first Purple Heart was awarded to MacArthur. During the early period of American involvement in World War II (December 8, 1941 – September 22, 1943), the Purple Heart was awarded both for wounds received in action against the enemy and for meritorious performance of duty. With the establishment of the Legion of Merit by an Act of Congress, the practice of awarding the Purple Heart for meritorious service was discontinued. By executive order dated December 3, 1942, the decoration was extended to all services; the order required reasonably uniform application of the regulations for each of the services. This executive order also authorized the award only for wounds received. For both military and civilian personnel during the World War II era, eligibility for the Purple Heart was governed by AR 600–45, dated September 22, 1943, and May 3, 1944, which required identification of the circumstances of the wound.
After the award was re-authorized in 1932 some U.S. Army wounded from conflicts prior to the first World War applied for, and were awarded, the Purple Heart: "...veterans of the Civil War and Indian Wars, as well as the Spanish–American War, China Relief Expedition (Boxer Rebellion), and Philippine Insurrection also were awarded the Purple Heart. This is because the original regulations governing the award of the Purple Heart, published by the Army in 1932, provided that any soldier who had been wounded in any conflict involving U.S. Army personnel might apply for the new medal. There were but two requirements: the applicant had to be alive at the time of application (no posthumous awards were permitted) and he had to prove that he had received a wound that necessitated treatment by a medical officer."
An executive order dated February 12, 1952, subject to approval of the Secretary of Defense, revised authorizations to include the Service Secretaries. A further executive order, dated April 25, 1962, included provisions for posthumous award of the Purple Heart. Another, dated February 23, 1984, authorized award of the Purple Heart as a result of terrorist attacks, or while serving as part of a peacekeeping force, subsequent to March 28, 1973.
On June 13, 1985, the Senate approved an amendment to the 1985 Defense Authorization Bill, which changed the precedence of the Purple Heart award, from immediately above the Good Conduct Medal to immediately above the Meritorious Service Medals. Public Law 99-145 authorized the award for wounds received as a result of friendly fire. Public Law 104-106 expanded the eligibility date, authorizing award of the Purple Heart to a former prisoner of war who was wounded after April 25, 1962. The National Defense Authorization Act for Fiscal Year 1998 (Public Law 105-85) changed the criteria to delete authorization for award of the Purple Heart to any non-military U.S. national serving under competent authority in any capacity with the Armed Forces. This change was effective May 18, 1998.
During World War II, 1,506,000 Purple Heart medals were manufactured, many in anticipation of the estimated casualties resulting from the planned Allied invasion of Japan. By the end of the war, even accounting for medals lost, stolen or wasted, nearly 500,000 remained. To the present date, total combined American military casualties of the seventy years following the end of World War II—including the Korean and Vietnam Wars—have not exceeded that number. In 2000, there remained 120,000 Purple Heart medals in stock. The existing surplus allowed combat units in Iraq and Afghanistan to keep Purple Hearts on-hand for immediate award to soldiers wounded in the field.
The "History" section of the November 2009 edition of "National Geographic" estimated the number of Purple Hearts given. Above the estimates, the text reads, "Any tally of Purple Hearts is an estimate. Awards are often given during conflict; records aren't always exact" (page 33). The estimates are as follows:
August 7 of every year is recognized as "National Purple Heart Day."
The Purple Heart is awarded in the name of the President of the United States to any member of the Armed Forces of the United States who, while serving under competent authority in any capacity with one of the U.S. Armed Services after April 5, 1917, has been wounded or killed. Specific examples of service which warrant the Purple Heart include:
The two letters c) and e) were added by executive order on April 25, 1962, as U.S. service personnel were being sent to South Vietnam during the Vietnam War as military advisors rather than combatants. As many were being killed or wounded while serving in that capacity in South Vietnam, and because the United States was not formally a participant in the war (until 1965), there was no "enemy" to satisfy the requirement of a wound or death received "in action against an enemy." In response, President John F. Kennedy signed the executive order extending the award to any person wounded or killed "while serving with friendly foreign forces" or "as a result of action by a hostile foreign force."
After March 28, 1973, it may be awarded as a result of an international terrorist attack against the United States or a foreign nation friendly to the United States, recognized as such an attack by the Secretary of the Army, or jointly by the Secretaries of the separate armed services concerned if persons from more than one service are wounded in the attack. Also, it may be awarded as a result of military operations while serving outside the territory of the United States as part of a peacekeeping force.
The Purple Heart differs from most other decorations in that an individual is not "recommended" for the decoration; rather he or she is entitled to it upon meeting specific criteria. A Purple Heart is awarded for the first wound suffered under conditions indicated above, but for each subsequent award an oak leaf cluster or 5/16 inch star is worn in lieu of another medal. Not more than one award will be made for more than one wound or injury received at the same instant.
A "wound" is defined as an injury to any part of the body from an outside force or agent sustained under one or more of the conditions listed above. A physical lesion is not required; however, the wound for which the award is made must have required treatment by a medical officer and records of medical treatment for wounds or injuries received in action must have been made a matter of official record. When contemplating an award of this decoration, the key issue that commanders must take into consideration is the degree to which the enemy caused the injury. The fact that the proposed recipient was participating in direct or indirect combat operations is a necessary prerequisite, but is not sole justification for award. The Purple Heart is not awarded for non-combat injuries.
Enemy-related injuries which "justify" the award of the Purple Heart include: injury caused by enemy bullet, shrapnel, or other projectile created by enemy action; injury caused by enemy placed land mine, naval mine, or trap; injury caused by enemy released chemical, biological, or nuclear agent; injury caused by vehicle or aircraft accident resulting from enemy fire; and, concussion injuries caused as a result of enemy generated explosions.
Injuries or wounds which "do not qualify" for award of the Purple Heart include frostbite or trench foot injuries; heat stroke; food poisoning not caused by enemy agents; chemical, biological, or nuclear agents not released by the enemy; battle fatigue; disease not directly caused by enemy agents; accidents, to include explosive, aircraft, vehicular, and other accidental wounding not related to or caused by enemy action; self-inflicted wounds (e.g., a soldier accidentally or intentionally fires their own gun and the bullet strikes his or her leg), except when in the heat of battle, and not involving gross negligence; post-traumatic stress disorders; and jump injuries not caused by enemy action.
It is not intended that the requirement for the wound or injury to be a direct result of hostile action be interpreted so strictly that it would preclude the award being made to deserving personnel. Commanders must also take into consideration the circumstances surrounding an injury, even if it appears to meet the criteria. In the case of an individual injured while making a parachute landing from an aircraft that had been brought down by enemy fire, or an individual injured as a result of a vehicle accident caused by enemy fire, the decision will be made in favor of the individual and the award will be made. As well, individuals wounded or killed as a result of "friendly fire" in the "heat of battle" will be awarded the Purple Heart as long as the "friendly" projectile or agent was released with the full intent of inflicting damage or destroying enemy troops or equipment. Individuals injured as a result of their own negligence – such as by driving or walking through an unauthorized area known to have been mined or placed off limits, or searching for or picking up unexploded munitions as war souvenirs – will not be awarded the Purple Heart, as they clearly were not injured as a result of enemy action, but rather by their own negligence.
Animals are generally not eligible for the Purple Heart; however, there have been rare instances when animals holding military rank were honored with the award. An example includes the horse Sergeant Reckless during the Korean War.
From 1942 to 1997, non-military personnel serving or closely affiliated with the armed forces—as government employees, Red Cross workers, war correspondents, and the like—were eligible to receive the Purple Heart whether in peacetime or armed conflicts. Among the earliest to receive the award were nine Honolulu Fire Department (HFD) firefighters killed or wounded in peacetime while fighting fires at Hickam Field during the attack on Pearl Harbor. About 100 men and women received the award, the most famous being newspaperman Ernie Pyle who was awarded a Purple Heart posthumously by the Army after being killed by Japanese machine gun fire in the Pacific Theater, near the end of World War II. Before his death, Pyle had seen and experienced combat in the European Theater, while accompanying and writing about infantrymen for the folks back home. Those serving in the Merchant Marine are not eligible for the award. During World War II, members of this service who met the Purple Heart criteria received a Merchant Marine Mariner's Medal instead.
The most recent Purple Hearts presented to non-military personnel occurred after the terrorist attacks at Khobar Towers, Saudi Arabia, in 1996—for their injuries, about 40 U.S. civil service employees received the award.
However, in 1997, at the urging of the Military Order of the Purple Heart, Congress passed legislation prohibiting future awards of the Purple Heart to non-military personnel. Civilian employees of the U.S. Department of Defense who are killed or wounded as a result of hostile action may receive the new Defense of Freedom Medal. This award was created shortly after the terrorist attacks of September 11, 2001.
The Purple Heart award is a heart-shaped medal within a gold border, containing a profile of General George Washington. Above the heart appears a shield of the coat of arms of George Washington (a white shield with two red bars and three red stars in chief) between sprays of green leaves. The reverse consists of a raised bronze heart with the words FOR MILITARY MERIT below the coat of arms and leaves.
The ribbon consists of the following stripes: white 67101; purple 67115; and white 67101.
Additional awards of the Purple Heart are denoted by oak leaf clusters in the Army and Air Force, and by 5/16 inch stars in the Navy, Marine Corps, and Coast Guard.
Current active duty personnel are awarded the Purple Heart upon recommendation from their chain of command, stating the injury that was received and the action in which the service member was wounded. The award authority for the Purple Heart is normally at the level of an Army Brigade, Marine Corps Division, Air Force Wing, or Navy Task Force. While the award of the Purple Heart is considered automatic for all wounds received in combat, each award presentation must still be reviewed to ensure that the wounds received were as a result of enemy action. Modern day Purple Heart presentations are recorded in both hardcopy and electronic service records. The annotation of the Purple Heart is denoted both with the service member's parent command and at the headquarters of the military service department. An original citation and award certificate are presented to the service member and filed in the field service record.
During the Vietnam War, Korean War, and World War II, the Purple Heart was often awarded on the spot, with only occasional entries made into service records. In addition, during the mass demobilizations following each of America's major wars of the 20th century, it was common for mention of a Purple Heart award to be omitted from service records. This occurred due to clerical errors, and became problematic once a service record was closed upon discharge. Record keeping was further complicated by field commanders who engaged in bedside presentations of the Purple Heart. This typically entailed a general entering a hospital with a box of Purple Hearts, pinning them on the pillows of wounded service members, then departing with no official record kept of the visit or of the award. Service members themselves complicated matters by unofficially leaving hospitals and hastily returning to their units to rejoin battle so as not to appear malingerers. In such cases, even if a service member had received actual wounds in combat, both the award of the Purple Heart and the entire visit to the hospital went unrecorded in official records.
Service members requesting retroactive awards of the Purple Heart must normally apply through the National Personnel Records Center. Following a review of service records, qualified Army members are awarded the Purple Heart by the U.S. Army Human Resources Command in Fort Knox, Kentucky. Air Force veterans are awarded the Purple Heart by the Awards Office of Randolph Air Force Base, while the Navy, Marine Corps, and Coast Guard present Purple Hearts to veterans through the Navy Liaison Officer at the National Personnel Records Center. Simple clerical errors, where a Purple Heart is denoted in military records but was omitted from a WD AGO Form 53-55 (the predecessor to the DD Form 214, Report of Separation), are corrected on site at the National Personnel Records Center through issuance of a DD-215 document.
Because the Purple Heart did not exist prior to 1932, decoration records are not annotated in the service histories of veterans wounded or killed by enemy action prior to the establishment of the medal. The Purple Heart is, however, retroactive to 1917, meaning it may be presented to veterans for service as far back as the First World War. Prior to 2006, service departments would review all available records, including older service records and service histories, to determine if a veteran warranted a retroactive Purple Heart. As of 2008, such records are listed as "Archival" by the National Archives and Records Administration, meaning they have been transferred from the custody of the military and can no longer be loaned and transferred for retroactive medals determination. In such cases, requestors asking for a Purple Heart (especially from records of the First World War) are provided with a complete copy of all available records (or reconstructed records in the case of the 1973 fire) and advised that the Purple Heart may be privately purchased if the requestor feels it is warranted.
A clause in the archival procedures was revised in mid-2008: if a veteran, or, if deceased, an immediate member of the family, requested the Purple Heart on an Army or Air Force record, the medal could still be granted by the National Archives. In such cases, where a determination was required by the military service department, photocopies of the archival record (but not the record itself) would be forwarded to the headquarters of the military branch in question. This stipulation was granted only for the Air Force and Army; Marine Corps, Navy, and Coast Guard archival medals requestors are still typically only offered a copy of the file and told to purchase the medal privately. Requests received directly from veterans are routed through a Navy Liaison Office on site at 9700 Page Avenue, St. Louis, MO 63132-5100 (the location of the Military Personnel Records Center).
Due to the 1973 National Archives Fire, many retroactive Purple Heart requests are difficult to verify because all records that could substantiate the award may have been destroyed. To deal with Purple Heart requests where service records were destroyed in the 1973 fire, the National Personnel Records Center maintains a separate office. In such cases, the NPRC searches through unit records, military pay records, and records of the Department of Veterans Affairs. If a Purple Heart is warranted, all available alternate records sources are forwarded to the military service department for final determination of issuance.
The loaning of fire-related records to the military has declined since 2006 because many such records now fall into the "archival records" category of military service records. This means the records were transferred from the military to the National Archives, and in such cases the Purple Heart may be privately purchased by the requestor (see the section on retroactive requests above for further details) but is no longer provided by the military service department.
Ten Purple Hearts:
Nine Purple Hearts:
Eight Purple Hearts:
Polyatomic ion
A polyatomic ion (also known as a molecular ion) is a covalently bonded set of two or more atoms, or of a metal complex, that can be considered to behave as a single unit and that has a net charge that is not zero. Unlike a molecule, which has a net charge of zero, this chemical species is an ion. (The prefix "poly-" carries the meaning "many" in Greek, but even ions of two atoms are commonly described as polyatomic.)
In older literature, a polyatomic ion may instead be referred to as a "radical" (or less commonly, as a "radical group"). (In contemporary usage, the term "radical" refers to various free radicals, which are species that have an unpaired electron and need not be charged.)
A simple example of a polyatomic ion is the hydroxide ion, which consists of one oxygen atom and one hydrogen atom, jointly carrying a net charge of −1; its chemical formula is OH⁻. In contrast, an ammonium ion consists of one nitrogen atom and four hydrogen atoms, with a charge of +1; its chemical formula is NH₄⁺.
Polyatomic ions are often useful in the context of acid-base chemistry, and in the formation of salts.
Often, a polyatomic ion can be considered as the conjugate acid or base of a neutral molecule. For example, the conjugate base of sulfuric acid (H₂SO₄) is the polyatomic hydrogen sulfate anion (HSO₄⁻). The removal of another hydrogen ion produces the sulfate anion (SO₄²⁻).
There are two "rules" that can be used for learning the nomenclature of polyatomic anions. First, when the prefix "bi" is added to a name, a hydrogen is added to the ion's formula and its charge is increased by 1, the latter being a consequence of the hydrogen ion's +1 charge. An alternative to the "bi-" prefix is to use the word hydrogen in its place: the anion derived from H⁺ + CO₃²⁻, namely HCO₃⁻, can be called either bicarbonate or hydrogencarbonate.
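As a minimal illustrative sketch (not part of the article), the "bi-"/"hydrogen" rule can be expressed as a simple transformation on a formula and its charge; the function name and string representation of formulas are assumptions made for the example:

```python
def bi_form(anion: str, charge: int) -> tuple[str, int]:
    """Apply the "bi-"/"hydrogen" rule: attach one H+ to an anion.

    One hydrogen is added to the front of the formula, and the charge
    rises by 1 (a consequence of the hydrogen ion's +1 charge).
    """
    return ("H" + anion, charge + 1)

# carbonate CO3(2-) -> bicarbonate / hydrogencarbonate HCO3(-)
print(bi_form("CO3", -2))  # ('HCO3', -1)
# sulfate SO4(2-) -> bisulfate / hydrogensulfate HSO4(-)
print(bi_form("SO4", -2))  # ('HSO4', -1)
```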
Most of the common polyatomic anions are oxyanions, conjugate bases of oxyacids (acids derived from the oxides of non-metallic elements). For example, the sulfate anion, SO₄²⁻, is derived from H₂SO₄, which can be regarded as SO₃ + H₂O.
The second rule looks at the number of oxygens in an ion. Consider the chlorine oxyanion family: perchlorate (ClO₄⁻), chlorate (ClO₃⁻), chlorite (ClO₂⁻) and hypochlorite (ClO⁻).
First, think of the "-ate" ion as being the "base" name, in which case the addition of a "per-" prefix adds an oxygen. Changing the "-ate" suffix to "-ite" reduces the oxygens by one, and keeping the suffix "-ite" while adding the prefix "hypo-" reduces the number of oxygens by one more. In all situations, the charge is not affected. The same naming pattern is followed within many different oxyanion series, each based on a standard root for that particular series. Note that the "-ite" form always has one less oxygen than the "-ate" form of the same series, but different "-ate" anions might have different numbers of oxygen atoms.
These rules do not work with all polyatomic anions, but they do work with the most common ones. The following table gives examples for some of these common anion groups.
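The prefix/suffix rules above can be sketched as a small function keyed on the difference between an ion's oxygen count and that of its "-ate" base form; this is an illustration only, and the function and series names are assumptions made for the example:

```python
def oxyanion_name(root: str, base_oxygens: int, n_oxygens: int) -> str:
    """Name an oxyanion relative to its "-ate" base form.

    per-...-ate  : one more oxygen than the base
    -ate         : the base name
    -ite         : one fewer oxygen
    hypo-...-ite : two fewer oxygens
    The charge is unaffected in every case.
    """
    diff = n_oxygens - base_oxygens
    if diff == 1:
        return "per" + root + "ate"
    if diff == 0:
        return root + "ate"
    if diff == -1:
        return root + "ite"
    if diff == -2:
        return "hypo" + root + "ite"
    raise ValueError("outside the per-/-ate/-ite/hypo- naming range")

# The chlorine family ("chlorate", ClO3-, is the base name):
for n in (4, 3, 2, 1):
    print(n, oxyanion_name("chlor", 3, n))
```

Running the loop prints "perchlorate", "chlorate", "chlorite" and "hypochlorite" for four, three, two and one oxygens respectively; the same scheme applies to other series (e.g. the "brom-" root for bromine oxyanions).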
The following tables give additional examples of commonly encountered polyatomic ions. Only a few representatives are given, as the number of polyatomic ions encountered in practice is very large.
Persecution of Christians
The persecution of Christians can be historically traced from the first century of the Christian era to the present day. Early Christians were persecuted for their faith at the hands of both the Jews from whose religion Christianity arose and the Romans who controlled many of the lands across which early Christianity was spread. Early in the fourth century, a form of the religion was legalized by the Edict of Milan, and it eventually became the State church of the Roman Empire.
Christian missionaries and converts to Christianity have both been targets of persecution, sometimes to the point of being martyred for their faith, ever since the emergence of Christianity.
The schisms of the Middle Ages and the later Protestant Reformation sometimes provoked severe conflicts between Christian denominations, during which members of different denominations frequently and violently persecuted each other.
In the 20th century, Christians were persecuted, sometimes to the point of genocide, by various governments, including the government of the Ottoman Empire, which committed the Armenian, Assyrian and Greek Genocides, and the governments of atheistic states such as the Soviet Union, Communist Albania and North Korea.
Early Christianity began as a sect among Second Temple Jews, and according to the New Testament account, Pharisees, including Paul of Tarsus prior to his conversion to Christianity, persecuted early Christians. The early Christians preached the second coming of a Messiah, which did not conform to Jewish religious teaching. However, feeling that their beliefs were supported by Jewish scripture, Christians had been hopeful that their countrymen would accept their faith. Despite individual conversions, the vast majority of Judean Jews did not become Christians.
Claudia Setzer asserts that, "Jews did not see Christians as clearly separate from their own community until at least the middle of the second century." Thus, acts of Jewish persecution of Christians fall within the boundaries of synagogue discipline and were so perceived by Jews acting and thinking as the established community. The Christians, on the other hand, saw themselves as persecuted rather than "disciplined."
Inter-communal dissension began almost immediately with the teachings of the outspoken Stephen at Jerusalem, who was considered an apostate by Jewish authorities. According to the Acts of the Apostles, a year after the Crucifixion of Jesus, Stephen was stoned for his alleged transgression of the faith, with Saul (who later converted and was renamed "Paul") acquiescing and looking on.
In 41 AD, when Agrippa I, who already possessed the territory of Antipas and Philip, obtained the title of "King of the Jews", in a sense re-forming the Kingdom of Herod, he was reportedly eager to endear himself to his Jewish subjects and continued the persecution, in which James the Greater lost his life, Peter narrowly escaped and the rest of the apostles took flight.
After Agrippa's death, the Roman procuratorship began (before 41 they were Prefects in Iudaea Province) and those leaders maintained a neutral peace, until the procurator Festus died and the high priest Annas II took advantage of the power vacuum to attack the Church and executed James the Just, then leader of Jerusalem's Christians. The New Testament states that Paul was himself imprisoned on several occasions by the Roman authorities, stoned by the Pharisees and left for dead on one occasion, and was eventually taken to Rome as a prisoner. Peter and other early Christians were also imprisoned, beaten and harassed. The First Jewish Rebellion, spurred by the Roman killing of 3,000 Jews, led to the destruction of Jerusalem in 70 AD, the end of Second Temple Judaism (and the subsequent slow rise of Rabbinic Judaism), and the disempowering of the Jewish persecutors. According to an old church tradition, which is mostly doubted by historians, the early Christian community had fled Jerusalem beforehand, to the already pacified region of Pella.
Luke T. Johnson nuances the harsh portrayal of the Jews in the Gospels by contextualizing the polemics within the rhetoric of contemporaneous philosophical debate, showing how rival schools of thought routinely insulted and slandered their opponents. These attacks were formulaic and stereotyped, crafted to define who was the enemy in the debates, but not used with the expectation that their insults and accusations would be taken literally, as they would be centuries later, resulting in millennia of Christian antisemitism.
By the 4th century, John Chrysostom argued that the Pharisees alone, not the Romans, were responsible for the murder of Jesus. However, according to Walter Laqueur, "Absolving Pilate from guilt may have been connected with the missionary activities of early Christianity in Rome and the desire not to antagonize those they want to convert."
The first documented case of imperially supervised persecution of Christians in the Roman Empire begins with Nero (54–68). In 64 AD, a great fire broke out in Rome, destroying portions of the city and economically devastating the Roman population. Some people suspected that Nero himself was the arsonist, as Suetonius reported, claiming that he played the lyre and sang the 'Sack of Ilium' during the fires. A passage in the "Annals" of Tacitus constitutes the only independent attestation that Nero blamed Christians for the Great Fire of Rome, and while it is generally believed to be authentic and reliable, some modern scholars have cast doubt on this view, largely because there is no further reference to Nero's blaming of Christians for the fire until the late 4th century. Suetonius, writing later, does not mention any persecution after the fire, but in a previous paragraph unrelated to the fire mentions punishments inflicted on Christians, defined as men following a new and malefic superstition. Suetonius, however, does not specify the reasons for the punishment; he simply lists the fact together with other abuses put down by Nero.
In the first two centuries Christianity was a relatively small sect which was not a significant concern of the Emperor. The Church was not in a struggle for its existence during its first centuries, before its adoption by the Roman Empire as its national religion. Persecutions of Christians were sporadic and locally inspired.
One traditional account of killing is the Persecution in Lyon, in which Christians were purportedly mass-slaughtered by being thrown to wild beasts under the decree of Roman officials for reportedly refusing to renounce their faith, according to St. Irenaeus. The sole source for this event is early Christian historian Eusebius of Caesarea's "Church History", an account written in the 4th century. Tertullian's "Apologeticus" of 197 was ostensibly written in defense of persecuted Christians and was addressed to Roman governors.
Trajan's policy towards Christians was no different from the treatment of other sects, that is, they would only be punished if they refused to worship the emperor and the gods, but they were not to be sought out. The "edict of Septimius Severus" touted in the Augustan History is considered unreliable by historians. According to Eusebius, the Imperial household of Maximinus' predecessor, Alexander, had contained many Christians. Eusebius states that, hating his predecessor's household, Maximinus ordered that the leaders of the churches should be put to death. According to Eusebius, this persecution of 235 sent Hippolytus of Rome and Pope Pontian into exile but other evidence suggests that the persecutions of 235 were local to the provinces where they occurred rather than happening under the direction of the Emperor.
Under the reign of Emperor Decius, a decree was issued requiring public sacrifice, a formality equivalent to a testimonial of allegiance to the Emperor and the established order. Decius authorized roving commissions visiting the cities and villages to supervise the execution of the sacrifices and to deliver written certificates to all citizens who performed them. Christians were often given opportunities to avoid further punishment by publicly offering sacrifices or by burning incense to Roman gods, and were accused by the Romans of impiety when they refused. Refusal was punished by arrest, imprisonment, torture, and executions. Christians fled to safe havens in the countryside and some purchased their certificates, called "libelli." Several councils held at Carthage debated the extent to which the community should accept these lapsed Christians.
Despite there being no indication in the surviving texts that the edict targeted any specific group, the Christian church never forgot the reign of Decius, whom it labelled that "fierce tyrant".
Some early Christians sought out and welcomed martyrdom. Roman authorities tried hard to avoid Christians because they "goaded, chided, belittled and insulted the crowds until they demanded their death."
According to Droge and Tabor, "in 185 the proconsul of Asia, Arrius Antoninus, was approached by a group of Christians demanding to be executed. The proconsul obliged some of them and then sent the rest away, saying that if they wanted to kill themselves there was plenty of rope available or cliffs they could jump off." Such seeking after death is found in Tertullian's "Scorpiace" and in the letters of Saint Ignatius of Antioch but was not the only view of martyrdom in the early Christian church. The 2nd-century text "Martyrdom of Polycarp" relates the story of Polycarp, bishop of Smyrna, who did not desire death, but died a martyr, bound and burned at the stake, then stabbed when the fire miraculously failed to touch him. The "Martyrdom of Polycarp" advances an argument for a particular understanding of martyrdom, with Polycarp's death as its prized example. The example of the Phrygian Quintus, who actively sought out martyrdom, is repudiated.
According to two different Christian traditions, Simon bar Kokhba, the leader of the second Jewish revolt against Rome (132–136 AD) who was proclaimed Messiah, persecuted the Christians: Justin Martyr claims that Christians were punished if they did not deny and blaspheme Jesus Christ, while Eusebius asserts that Bar Kokhba harassed them because they refused to join his revolt against the Romans. The latter is likely true, and Christians' refusal to take part in the revolt against the Roman Empire was a key event in the schism of Early Christianity and Judaism.
These persecutions culminated with the reign of Diocletian and Galerius at the end of the third century and the beginning of the 4th century. The Great Persecution is considered the largest. Beginning with a series of four edicts banning Christian practices and ordering the imprisonment of Christian clergy, the persecution intensified until all Christians in the empire were commanded to sacrifice to the Roman gods or face immediate execution. According to legend, one of the martyrs during the Diocletian persecution was Saint George, a Roman soldier who loudly renounced the Emperor's edict and, in front of his fellow soldiers and tribunes, claimed to be a Christian by declaring his worship of Jesus Christ. Though Diocletian zealously persecuted Christians in the Eastern part of the empire, his co-emperors in the West did not follow the edicts, so Christians in Gaul, Spain, and Britannia were virtually unmolested.
This persecution lasted until Constantine I came to power in 313 and legalized Christianity. It was not until Theodosius I in the later 4th century that Christianity would become the official religion of the Empire. Between these two events, Julian II temporarily restored the traditional Roman religion and established broad religious tolerance, renewing Pagan and Christian hostilities.
Martyrs were considered uniquely exemplary of the Christian faith, and few early saints were not also martyrs.
The "New Catholic Encyclopedia" states that "Ancient, medieval and early modern hagiographers were inclined to exaggerate the number of martyrs. Since the title of martyr is the highest title to which a Christian can aspire, this tendency is natural". Attempts at estimating the numbers involved are inevitably based on inadequate sources, but one historian of the persecutions estimates the overall numbers as between 5,500 and 6,500, a number also adopted by later writers including Yuval Noah Harari: In the 300 years from the crucifixion of Christ to the conversion of Emperor Constantine, polytheistic Roman emperors initiated no more than four general persecutions of Christians. Local administrators and governors incited some anti-Christian violence of their own. Still, if we combine all the victims of all these persecutions, it turns out that in these three centuries, the polytheistic Romans killed no more than a few thousand Christians.
The Sassanian policy shifted from tolerance of other religions under Shapur I to intolerance under the Vahrans, and apparently a return to the policy of Shapur until the reign of Shapur II. The persecution at that time was prompted by Constantine's conversion to Christianity, which followed that of the Armenian king Tiridates in about 301 A.D. The Christians were thus viewed with suspicion of secretly being partisans of the Roman Empire. This did not change until the fifth century, when the Nestorian Church broke off from the Church of Antioch.
Zoroastrian elites continued viewing the Christians with enmity and distrust throughout the fifth century with threat of persecution remaining significant, especially during war against the Romans.
Kartir, in his "Kaba'yi Zartust" inscription dated about 280, refers to the persecution ("zatan", "to beat, kill") of Christians (Nazareans, "n'zl'y", and Christians, "klstyd'n"). Kartir regarded Christianity as a serious opponent. The use of the double expression may be indicative of the Greek-speaking Christians deported by Shapur I from Antioch and other cities during his war against the Romans. Constantine's efforts to protect the Persian Christians made them a target of accusations of disloyalty to the Sasanians. With the resumption of Roman-Sasanian conflict under Constantius II, the Christian position became untenable. Zoroastrian priests targeted the clergy and ascetics of local Christian communities in order to eliminate the leaders of the church. A Syriac manuscript from Edessa in 411 documents dozens executed in various parts of the western Sasanian Empire.
In 341, Shapur II ordered the persecution of all Christians. In response to their subversive attitude and support of the Romans, Shapur II doubled the tax on Christians. Shemon Bar Sabbae informed him that he could not pay the taxes demanded from him and his community. He was martyred, and a forty-year-long period of persecution of Christians began. The Council of Seleucia-Ctesiphon gave up choosing bishops, since doing so would result in death. The local mobads, with the help of satraps, organized slaughters of Christians in Adiabene, Beth Garmae, Khuzistan and many other provinces.
Yazdegerd I showed tolerance towards Jews and Christians for much of his rule. He allowed Christians to practice their religion freely, previously demolished monasteries and churches were rebuilt, and missionaries were allowed to operate freely. During the later part of his reign, however, he reversed his policies and suppressed missionary activities. Bahram V continued and intensified the persecution, resulting in many Christians fleeing to the Byzantine Empire. Bahram demanded their return, sparking a war between the two empires. The war ended in 422 with an agreement granting freedom of religion for Christians in Iran and for Mazdaism in Byzantium. Meanwhile, Christians suffered the destruction of churches, were forced to renounce the faith, had their private property confiscated, and many were expelled.
Shah Yazdegerd II (439–457) ordered all his subjects to embrace Mazdeism in an attempt to unite his empire ideologically. The Caucasus rebelled to defend Christianity, which had become integrated into their local culture, with Armenian aristocrats turning to the Romans for help. The rebels were, however, defeated in a battle on the Avarayr Plain. Yeghishe, in his "The History of Vardan and the Armenian War", pays tribute to the battles waged to defend Christianity. Another revolt, waged from 481–483, was also suppressed. However, the Armenians succeeded in gaining freedom of religion among other improvements.
Accounts of executions for apostasy of Zoroastrians who converted to Christianity during Sasanian rule proliferated from the fifth to the early seventh century, and continued to be produced even after the collapse of the Sasanians. The punishment of apostates increased under Yazdegerd I and continued under successive kings. It was normative for apostates who were brought to the notice of authorities to be executed, although the prosecution of apostasy depended on political circumstances and Zoroastrian jurisprudence. Per Richard E. Payne, the executions were meant to create a mutually recognised boundary between the interactions of the people of the two religions and to prevent one religion from challenging the other's viability. Although the violence against Christians was selective and especially carried out on elites, it served to keep Christian communities in a subordinate and yet viable position in relation to Zoroastrianism. Christians were allowed to build religious buildings and serve in the government as long as they did not expand their institutions and population at the expense of Zoroastrianism.
Khosrow I was generally regarded as tolerant of Christians and interested in the philosophical and theological disputes of his reign. Sebeos claimed he had converted to Christianity on his deathbed. John of Ephesus describes an Armenian revolt in which he claims that Khosrow had attempted to impose Zoroastrianism in Armenia. The account, however, is very similar to that of the Armenian revolt of 451, and Sebeos does not mention any religious persecution in his account of the revolt of 571. A story about Hormizd IV's tolerance is preserved by the historian al-Tabari. Upon being asked why he tolerated Christians, he replied, "Just as our royal throne cannot stand upon its front legs without its two back ones, our kingdom cannot stand or endure firmly if we cause the Christians and adherents of other faiths, who differ in belief from ourselves, to become hostile to us."
In AD 516, tribal unrest broke out in Yemen and several tribal elites fought for power. One of those elites was Joseph Dhu Nuwas or "Yousef Asa'ar", a Jewish warlord mentioned in ancient south Arabian inscriptions. Syriac and Byzantine sources claim that he fought his war because Christians in Yemen refused to renounce Christianity. In 2009, a documentary aired on the BBC defended the claim that the villagers had been offered the choice between conversion to Judaism or death and that 20,000 Christians were then massacred, stating that "The production team spoke to many historians over 18 months, among them Nigel Groom, who was our consultant, and Professor Abdul Rahman Al-Ansary, a former professor of archaeology at the King Saud University in Riyadh." Inscriptions documented by Yousef himself show the great pride that he expressed after killing more than 22,000 Christians in Zafar and Najran. Historian Glen Bowersock described this as a "savage pogrom that the Jewish king of the Arabs launched against the Christians in the city of Najran. The king himself reported in excruciating detail to his Arab and Persian allies about the massacres that he had inflicted on all Christians who refused to convert to Judaism."
In the 4th century, around 375, the Terving king Athanaric ordered a persecution of Christians.
The Protestant Reformation provoked a number of persecutions of Christians by other Christians, including false allegations of witchcraft.
Several months after the Persian conquest in AD 614, a riot occurred in Jerusalem, and the Jewish governor of Jerusalem, Nehemiah, was killed by a band of young Christians along with his "council of the righteous" while he was making plans for the building of the Third Temple. At this time the Christians had allied themselves with the Eastern Roman Empire. Shortly afterward, the events escalated into a full-scale Christian rebellion, resulting in a battle between the Jews and the Christians living in Jerusalem. In the battle's aftermath, many Jews were killed and the survivors fled to Caesarea, which was still being held by the Persian Army.
The Judeo-Persian reaction was ruthless: the Persian Sasanian general Xorheam assembled Judeo-Persian troops, encamped around Jerusalem and besieged it for 19 days. Eventually, digging beneath the foundations of Jerusalem's walls, they destroyed the wall, and on the 19th day of the siege the Judeo-Persian forces took Jerusalem.
According to the account of Sebeos, the siege resulted in a total Christian death toll of 17,000, the earliest and thus most commonly accepted figure. Per Antiochus, 4,518 prisoners alone were massacred near the Mamilla reservoir. A cave containing hundreds of skeletons near the Jaffa Gate, 200 metres east of the large Roman-era pool in Mamilla, correlates with the massacre of Christians at the hands of the Persians mentioned by Antiochus Strategos. While reinforcing the evidence of a massacre of Christians, the archaeological evidence seems less conclusive on the destruction of Christian churches and monasteries in Jerusalem.
According to the later account of Antiochus Strategos, whose perspective appears to be that of a Byzantine Greek and shows an antipathy towards the Jews, thousands of Christians were massacred during the conquest of the city. Estimates based on varying copies of Strategos's manuscripts range from 4,518 to 66,509 killed. Strategos wrote that the Jews offered to help the Christian captives escape death if they would "become Jews and deny Christ", and the captives refused. In anger the Jews allegedly purchased Christians in order to kill them. In 1989, a mass burial grave at the Mamilla cave was discovered by Israeli archeologist Ronny Reich, near the site where Antiochus recorded the massacre took place. The human remains were in poor condition, comprising a minimum of 526 individuals.
From the many excavations carried out in the Galilee, it is clear that all churches had been destroyed during the period between the Persian invasion and the Arab conquest in 637. The church at Shave Ziyyon was destroyed and burnt in 614. A similar fate befell the churches at Evron, Nahariya and 'Arabe and the monastery of Shelomi. The monastery at Kursi was damaged in the invasion.
At the time of the Arab Islamic conquest of the mid 7th century AD the populations of Mesopotamia and Assyria (modern-day Iraq, north east Syria, south east Turkey and Kuwait), Syria, Phoenicia (modern-day Lebanon and coastal Syria), Egypt, Jordan, North Africa (modern-day Sudan, Tunisia, Morocco, Libya and Algeria), Asia Minor (modern-day Turkey) and Armenia were predominantly Christian and non-Arab.
As People of the Book, Christians were given dhimmi status (along with Jews, Samaritans, Gnostics and Mandeans), which was inferior to the status of Muslims. Christians thus faced religious discrimination and religious persecution: they were banned, on pain of death, from proselytising (spreading or promoting Christianity) in lands conquered by the Muslims, and they were banned from bearing arms and from undertaking certain professions. Under sharia, non-Muslims were obligated to pay jizya and kharaj taxes, together with periodic heavy ransoms levied upon Christian communities by Muslim rulers in order to fund military campaigns, all of which contributed a significant proportion of income to the Islamic states while conversely reducing many Christians to poverty; these financial and social hardships forced many Christians to convert to Islam. Christians unable to pay these taxes were forced to surrender their children to the Muslim rulers as payment, and the rulers would sell them as slaves to Muslim households, where they were forced into Islam. According to the Hanafi school of sharia, the testimony of a non-Muslim (such as a Christian) was not considered valid against the testimony of a Muslim in legal or civil matters. Islamic law forbade Muslim women from marrying Christian men, but Muslim men were permitted to marry Christian women.
Christians under Islamic rule had the right to convert to Islam or any other religion, while conversely a murtad, or an apostate from Islam, faced severe penalties or even hadd, which could include the death penalty. In general, Christians subject to Islamic rule were allowed to practice their religion with some notable limitations stemming from the Pact of Umar. This treaty, enacted in 717 AD, forbade Christians from publicly displaying the cross on church buildings, from summoning congregants to prayer with a bell, from re-building or repairing churches and monasteries after they had been destroyed or damaged, and imposed other restrictions relating to occupations, clothing and weapons. The Umayyad Caliphate persecuted many Berber Christians in the seventh and eighth centuries, and these communities slowly converted to Islam.
Native Christian communities are subject to persecution in several Muslim-majority countries, such as Egypt and Pakistan.
Tamerlane instigated large scale massacres of Christians in Mesopotamia, Persia, Asia Minor and Syria in the 14th century AD. Most of the victims were indigenous Assyrians and Armenians, members of the Assyrian Church of the East and Orthodox Churches, which led to the decimation of the hitherto majority Assyrian population in northern Mesopotamia and the abandonment of the ancient Assyrian city of Assur. Other massacres were perpetrated by Hulagu Khan against the Assyrians, particularly in and around the ancient Assyrian city of Arbela (modern Erbil).
Before the late 16th century, Albania, despite being under Ottoman rule, had remained overwhelmingly Christian, unlike other regions such as Bosnia, Bulgaria and Northern Greece, and mountainous Albania was a frequent site of revolts against the Ottoman Empire, often at enormous human cost, including the decimation of entire villages. To handle this problem, the Ottomans abandoned their usual policy of tolerating Christians as second-class citizens in favor of one aimed at reducing the Christian population through Islamization, beginning in the restive Christian regions of Reka and Elbasan in 1570. The pressures of this campaign included particularly harsh economic conditions imposed on the Christian population: while earlier taxes on the Christians were around 45 "akçes" a year, by the middle of the 17th century the rate had risen more than seventeen-fold, to 780 "akçes" a year. Albanian elders often opted to save their clans and villages from hunger and economic ruin by advocating village-wide and region-wide conversions to Islam, with many individuals continuing to practice Christianity in private. A failed Catholic rebellion in 1596, the Albanian population's support of Austria during the Great Turkish War and of the Venetians in the 1644 Venetian-Ottoman War, and the Orlov Revolt were all factors that led to punitive measures in which outright force was accompanied by economic incentives, depending on the region, and ended up forcing the conversion of large Christian populations to Islam in Albania. In the aftermath of the Great Turkish War, massive punitive measures were imposed on Kosovo's Catholic Albanian population, and as a result most of its members fled to Hungary and settled around Budapest, where most of them died of disease and starvation.
After the Orthodox Serbian population subsequently also fled from Kosovo, the pasha of Ipek (Peja/Pec) forced Albanian Catholic mountaineers to repopulate Kosovo by deporting them there, and also forced them to adopt Islam. In the 17th and 18th centuries, South Albania also saw numerous instances of violence directed against those who remained Christian by newly converted local Muslims, ultimately resulting in many more conversions out of fear, as well as flight to faraway lands by the Christian population.
The Dechristianisation of France during the French Revolution is a conventional description of a campaign, conducted by various Robespierre-era governments of France beginning with the start of the French Revolution in 1789, to eliminate any symbol that might be associated with the past, especially the monarchy.
The program encompassed a range of such policies.
The climax was reached with the celebration of the Goddess "Reason" in Notre Dame Cathedral on 10 November.
Under threat of death, imprisonment, military conscription or loss of income, about 20,000 constitutional priests were forced to abdicate or hand over their letters of ordination, and 6,000–9,000 were coerced to marry, many ceasing their ministerial duties. Some of those who abdicated covertly ministered to the people. By the end of the decade, approximately 30,000 priests were forced to leave France, and thousands who did not leave were executed. Most of France was left without the services of a priest, deprived of the sacraments, and any nonjuring priest faced the guillotine or deportation to French Guiana.
The March 1793 conscription requiring Vendeans to fill their district's quota of 300,000 enraged the populace, who took up arms as "The Catholic Army", "Royal" being added later, and fought for "above all the reopening of their parish churches with their former priests."
With these massacres came formal orders for forced evacuation; also, a 'scorched earth' policy was initiated: farms were destroyed, crops and forests burned and villages razed. There were many reported atrocities and a campaign of mass killing universally targeted at residents of the Vendée regardless of combatant status, political affiliation, age or gender. By July 1796, the estimated Vendean dead numbered between 117,000 and 500,000, out of a population of around 800,000. Some historians call these mass killings the first modern genocide, specifically because intent to exterminate the Catholic Vendeans was clearly stated, though others have rejected these claims.
Beginning in the late 17th century, Christianity was banned for at least a century in China by the Kangxi Emperor of the Qing dynasty after Pope Clement XI forbade Chinese Catholics from venerating their relatives or Confucius.
During the Boxer Rebellion, Muslim unit Kansu Braves serving in the Chinese army attacked Christians.
During the Northern Expedition, the Kuomintang incited anti-foreign, anti-Western sentiment. Portraits of Sun Yat-sen replaced the crucifix in several churches, KMT posters proclaimed "Jesus Christ is dead. Why not worship something alive such as Nationalism?". Foreign missionaries were attacked and anti-foreign riots broke out. In 1926, Muslim General Bai Chongxi attempted to drive out foreigners in Guangxi, attacking American, European, and other foreigners and missionaries, and generally making the province unsafe for foreigners. Westerners fled from the province, and some Chinese Christians were also attacked as imperialist agents.
From 1894 to 1938, many Uighur Muslims who converted to Christianity were killed, tortured and jailed, and Christian missionaries were expelled.
Relations between Muslims and Christians have occasionally been turbulent. With the advent of European colonialism in India throughout the 16th, 17th and 18th centuries, Christians were systematically persecuted in a few Muslim-ruled kingdoms in India. Modern-day persecution also exists and is carried out by Hindu nationalists. A report by Human Rights Watch stated that there is a rise in anti-Christian violence due to Hindu nationalism, and Smita Narula, a researcher in the Asia Division of Human Rights Watch, stated: "Christians are the new scapegoat in India's political battles. Without immediate and decisive action by the government, communal tensions will continue to be exploited for political and economic ends."
Muslim Tipu Sultan, the ruler of the Kingdom of Mysore, took action against the Mangalorean Catholic community from Mangalore and the South Canara district on the southwestern coast of India. Tipu was widely reputed to be anti-Christian. He took Mangalorean Catholics into captivity at Seringapatam on 24 February 1784 and released them on 4 May 1799.
Soon after the Treaty of Mangalore in 1784, Tipu gained control of Canara. He issued orders to seize the Christians in Canara, confiscate their estates, and deport them to Seringapatam, the capital of his empire, through the Jamalabad fort route. There were no priests among the captives. Together with Fr. Miranda, all the 21 arrested priests were issued orders of expulsion to Goa, fined Rs 2 lakhs, and threatened death by hanging if they ever returned. Tipu ordered the destruction of 27 Catholic churches.
According to Thomas Munro, a Scottish soldier and the first collector of Canara, around 60,000 of them, nearly 92 percent of the entire Mangalorean Catholic community, were captured; 7,000 escaped. Observer Francis Buchanan reports that 70,000 were captured from a population of 80,000, with 10,000 escaping. They were forced to climb through the jungles of the Western Ghat mountain ranges on the march from Mangalore to Seringapatam, a journey that took six weeks. According to British Government records, 20,000 of them died on the march to Seringapatam. According to James Scurry, a British officer who was held captive along with the Mangalorean Catholics, 30,000 of them were forcibly converted to Islam. The young women and girls were forcibly made wives of the Muslims living there and were later distributed and sold into prostitution. The young men who offered resistance were disfigured by the cutting of their noses, upper lips and ears. According to Mr. Silva of Gangolim, a survivor of the captivity, if a person who had escaped from Seringapatam was found, the punishment under the orders of Tipu was the cutting off of the ears, nose, feet and one hand.
The Archbishop of Goa wrote in 1800, "It is notoriously known in all Asia and all other parts of the globe of the oppression and sufferings experienced by the Christians in the Dominion of the King of Kanara, during the usurpation of that country by Tipu Sultan from an implacable hatred he had against them who professed Christianity."
Tipu Sultan's invasion of the Malabar Coast had an adverse impact on the Saint Thomas Christian community of the Malabar coast. Many churches in Malabar and Cochin were damaged. The old Syrian Nasrani seminary at Angamaly, which had been the center of Catholic religious education for several centuries, was razed to the ground by Tipu's soldiers. Many centuries-old religious manuscripts were lost forever. The church was later relocated to Kottayam, where it still exists to this date. The Mor Sabor church at Akaparambu and the Martha Mariam Church attached to the seminary were destroyed as well. Tipu's army set fire to the church at Palayoor and attacked the Ollur Church in 1790. Furthermore, the Arthat church and the Ambazhakkad seminary were also destroyed. Over the course of this invasion, many Saint Thomas Christians were killed or forcibly converted to Islam. Most of the coconut, arecanut, pepper and cashew plantations held by the Saint Thomas Christian farmers were also indiscriminately destroyed by the invading army. As a result, when Tipu's army invaded Guruvayur and adjacent areas, the Syrian Christian community fled Calicut and small towns like Arthat to new centres like Kunnamkulam, Chalakudi, Ennakadu, Cheppadu, Kannankode, Mavelikkara, etc., where there were already Christians. They were given refuge by Sakthan Tamburan, the ruler of Cochin, and Karthika Thirunal, the ruler of Travancore, who gave them lands and plantations and encouraged their businesses. Colonel Macqulay, the British resident of Travancore, also helped them.
Tipu's persecution of Christians also extended to captured British soldiers. For instance, there were a significant number of forced conversions of British captives between 1780 and 1784. Following their disastrous defeat at the Battle of Pollilur, 7,000 British men, along with an unknown number of women, were held captive by Tipu in the fortress of Seringapatnam. Of these, over 300 were circumcised and given Muslim names and clothes, and several British regimental drummer boys were made to wear "ghagra cholis" and entertain the court as "nautch" girls, or dancing girls. After the 10-year-long captivity ended, James Scurry, one of those prisoners, recounted that he had forgotten how to sit in a chair and use a knife and fork. His English was broken and stilted, having lost all his vernacular idiom; his skin had darkened to a swarthy complexion; and, moreover, he had developed an aversion to wearing European clothes.
During the surrender of the Mangalore fort which was delivered in an armistice by the British and their subsequent withdrawal, all the Mesticos (Luso-Indians and Anglo-Indians) and remaining non-British foreigners were killed, together with 5,600 Mangalorean Catholics. Those condemned by Tipu Sultan for treachery were hanged instantly, the gibbets being weighed down by the number of bodies they carried. The Netravati River was so putrid with the stench of dying bodies, that the local residents were forced to leave their riverside homes.
Tokugawa Ieyasu assumed control over Japan in 1600. Like Toyotomi Hideyoshi, he disliked Christian activities in Japan. The Tokugawa shogunate finally decided to ban Catholicism in 1614, and in the mid-17th century it demanded the expulsion of all European missionaries and the execution of all converts. This marked the end of open Christianity in Japan. The Shimabara Rebellion, led by a young Japanese Christian boy named Amakusa Shirō Tokisada, took place in 1637. After the Hara Castle fell, the shogunate's forces beheaded an estimated 37,000 rebels and sympathizers. Amakusa Shirō's severed head was taken to Nagasaki for public display, and the entire complex at Hara Castle was burned to the ground and buried together with the bodies of all the dead.
Many of the Christians in Japan continued for two centuries to maintain their religion as Kakure Kirishitan, or hidden Christians, without any priests or pastors. Some of those who were killed for their Faith are venerated as the Martyrs of Japan.
Christianity was later allowed during the Meiji era. The Meiji Constitution of 1890 introduced separation of church and state and permitted freedom of religion.
Relations between Muslims and Christians in the Ottoman Empire during the modern era were shaped in no small part by broader dynamics related to European colonial and neo-imperialist activity in the region, dynamics that frequently (though by no means always) generated tensions between the two. Too often, growing European influence in the region during the nineteenth century seemed to disproportionately benefit Christians, thus producing resentment on the part of many Muslims, likewise a suspicion that Christians were colluding with the European powers in order to weaken the Islamic world. Further exacerbating relations was the fact that Christians seemed to benefit disproportionately from efforts at reform (one aspect of which generally sought to elevate the political status of non-Muslims), likewise, the various Christian nationalist uprisings in the Empire's European territories, which often had the support of the European powers.
Since the time of the Austro-Turkish War (1683–1699), relations between Muslims and Christians in the European provinces of the Ottoman Empire gradually took more extreme forms and resulted in occasional calls by some Muslim religious leaders for the expulsion or extermination of local Christians. As a result of Ottoman oppression, the destruction of churches and monasteries, and violence against the non-Muslim civilian population, Serbian Christians and their church leaders, headed by Serbian Patriarch Arsenije III, sided with the Austrians in 1689 and again in 1737 under Serbian Patriarch Arsenije IV. In the subsequent punitive campaigns, Ottoman forces conducted systematic atrocities against the Christian population in the Serbian regions, resulting in the Great Migrations of the Serbs.
Similar persecutions and forced migrations of Christian populations were induced by Ottoman forces during the 18th and 19th centuries in the European and Asian provinces of the Ottoman Empire. The Massacres of Badr Khan were conducted by Kurdish and Ottoman forces against the Assyrian Christian population of the Ottoman Empire between 1843 and 1847, resulting in the slaughter of more than 10,000 indigenous Assyrian civilians of the Hakkari region, with many thousands more being sold into slavery.
During the Bulgarian Uprising (1876) against Ottoman rule, and the Russo-Turkish War (1877–1878), the persecution of the Bulgarian Christian population was conducted by Ottoman soldiers. The principal locations were Panagurishte, Perushtitza, and Bratzigovo. Over 15,000 non-combatant Bulgarian civilians were killed by the Ottoman army between 1876 and 1878, with the worst single instance being the Batak massacre.
During the war, whole cities including the largest Bulgarian one (Stara Zagora) were destroyed and most of their inhabitants were killed, the rest being expelled or enslaved. The atrocities included impaling and grilling people alive. Similar attacks were undertaken by Ottoman troops against Serbian Christians during the Serbian-Turkish War (1876–1878).
Between 1894 and 1896, a series of ethno-religiously motivated anti-Christian pogroms known as the Hamidian massacres were conducted against the ancient Armenian and Assyrian Christian populations by the forces of the Ottoman Empire. The motives for these massacres were an attempt to reassert Pan-Islamism in the Ottoman Empire, resentment of the comparative wealth of the ancient indigenous Christian communities, and a fear that they would attempt to secede from the tottering Ottoman Empire. The massacres mainly took place in what is today southeastern Turkey, northeastern Syria and northern Iraq. Assyrians and Armenians were massacred in Diyarbakir, Hasankeyef, Sivas and other parts of Anatolia and northern Mesopotamia, by order of Sultan Abdul Hamid II. The death toll is estimated to have been as high as 325,000 people, with a further 546,000 Armenians and Assyrians made destitute by forced deportations of survivors from cities and the destruction or theft of almost 2,500 of their farmsteads, towns and villages. Hundreds of churches and monasteries were also destroyed or forcibly converted into mosques. These attacks caused the deaths of thousands of Assyrians and the forced "Ottomanisation" of the inhabitants of 245 villages. The Ottoman troops looted the remains of the Assyrian settlements, which were later stolen and occupied by south-east Anatolian tribes. Unarmed Assyrian women and children were raped, tortured and murdered. According to H. Aboona, the independence of the Assyrians was destroyed not directly by the Turks but by their neighbours acting under Ottoman auspices.
The Adana massacre occurred in the Adana Vilayet of the Ottoman Empire in April 1909. A massacre of Armenian and Assyrian Christians in the city of Adana and its surrounds amidst the Ottoman countercoup of 1909 led to a series of anti-Christian pogroms throughout the province. Reports estimated that the Adana Province massacres resulted in the death of as many as 30,000 Armenians and 1,500 Assyrians.
Between 1915 and 1921, the Young Turk government of the collapsing Ottoman Empire persecuted Eastern Christian populations in Anatolia, Persia, Northern Mesopotamia and the Levant. The onslaught by the Ottoman army, which included Kurdish, Arab and Circassian irregulars, resulted in an estimated 3.4 million deaths, divided between roughly 1.5 million Armenian Christians, 0.75 million Assyrian Christians, 0.90 million Greek Orthodox Christians and 0.25 million Maronite Christians (see Great Famine of Mount Lebanon); groups of Georgian Christians were also killed. This massive ethnoreligious cleansing expelled from the empire or killed the Armenians and Bulgarians who had not converted to Islam, and it came to be known as the Armenian Genocide, the Assyrian Genocide, the Greek Genocide and the Great Famine of Mount Lebanon, which together accounted for the deaths of Armenian, Assyrian, Greek and Maronite Christians and the deportation and destitution of many more. The genocide led to the devastation of ancient indigenous Christian populations who had existed in the region for thousands of years.
The Assyrians suffered a further series of persecutions during the Simele massacre in 1933, with the death of approximately 3000 Assyrian civilians at the hands of the Iraqi Army.
After the Russian Revolution of 1917, the Bolsheviks undertook a massive program to remove the influence of the Russian Orthodox Church from the government while outlawing antisemitism in Russian society, and promoting atheism. Tens of thousands of churches were destroyed or converted to other uses, and many members of the clergy were murdered, publicly executed and imprisoned for what the government termed "anti-government activities." An extensive educational and propaganda campaign was launched in order to convince people, especially children and youths, to abandon their religious beliefs. This persecution resulted in the intentional murder of 500,000 Orthodox followers by the government of the Soviet Union during the 20th century.
Under the doctrine of state atheism in the Soviet Union, a "government-sponsored program of forced conversion to atheism" was conducted by the Communists. The Communist Party destroyed churches, mosques and temples, ridiculed, harassed, incarcerated and executed religious leaders, flooded the schools and media with anti-religious teachings, and it introduced a belief system called "scientific atheism," with its own rituals, promises and proselytizers. Many priests were killed and imprisoned; thousands of churches were closed. In 1925 the government founded the League of Militant Atheists in order to intensify the persecution. The League of Militant Atheists was also a "nominally independent organization established by the Communist Party to promote atheism".
The state established atheism as the only scientific truth. Soviet authorities forbade criticism of atheism and agnosticism until 1936, as well as criticism of the state's anti-religious policies; such criticism could lead to forced retirement. Militant atheism became central to the ideology of the Communist Party of the Soviet Union and a high-priority policy of all Soviet leaders. Christopher Marsh, a professor at Baylor University, writes that "Tracing the social nature of religion from Schleiermacher and Feuerbach to Marx, Engels, and Lenin...the idea of religion as a social product evolved to the point of policies aimed at the forced conversion of believers to atheism."
Before and after the October Revolution of 7 November 1917 (25 October Old Calendar), there was a movement within the Soviet Union to unite all of the people of the world under Communist rule (see Communist International). This included the Eastern European bloc countries as well as the Balkan states. Since some of these Slavic states tied their ethnic heritage to their ethnic churches, both the people and their churches were targeted for ethnic and political genocide by the Soviets and their form of state atheism. The Soviets' official religious stance was one of "religious freedom or tolerance", though the state established atheism as the only scientific truth (see also the All-Union Society for the Dissemination of Scientific and Political Knowledge, or Znanie, known until 1947 as The League of the Militant Godless, and various Intelligentsia groups). Criticism of atheism was strictly forbidden and sometimes resulted in imprisonment. Some of the more high-profile individuals who were executed include Metropolitan Benjamin of Petrograd, the priest and scientist Pavel Florensky, and Bishop Gorazd Pavlik.
Across Eastern Europe following World War II, the parts of the Nazi Empire conquered by the Soviet Red Army and Yugoslavia became one-party Communist states, and the project of coercive conversion to atheism continued. The Soviet Union ended its wartime truce with the Russian Orthodox Church and extended its persecutions to the newly Communist Eastern bloc: "In Poland, Hungary, Lithuania and other Eastern European countries, Catholic leaders who were unwilling to be silent were denounced, publicly humiliated or imprisoned by the Communists. Leaders of the national Orthodox Churches in Romania and Bulgaria had to be cautious and submissive", wrote Geoffrey Blainey. While the churches were generally not treated as severely as they had been in the USSR, nearly all of their schools and many of their churches were closed, and they lost their formerly prominent roles in public life. Children were taught atheism, and clergy were imprisoned by the thousands. In the Eastern Bloc, Christian churches, along with Jewish synagogues and Islamic mosques, were forcibly "converted into museums of atheism." According to James M. Nelson, a psychology professor at East Carolina University, the total number of Christian victims under the Soviet regime may have been around 12 million, while Todd Johnson and Gina Zurlo of Gordon-Conwell Theological Seminary at Boston University estimate a figure of 15–20 million.
The Communist regime confiscated church property, ridiculed religion, harassed believers, and propagated atheism in the schools. Actions towards particular religions, however, were determined by State interests, and most organized religions were never outlawed. It is estimated that 500,000 Russian Orthodox Christians were martyred in the gulags by the Soviet government, excluding the members of other Christian denominations who were also tortured or killed.
Along with execution, some other actions against Orthodox priests and believers included torture, being sent to prison camps, labour camps or mental hospitals. In the first five years after the Bolshevik revolution, 28 bishops and 1,200 priests were executed.
The main target of the anti-religious campaign in the 1920s and 1930s was the Russian Orthodox Church, which had the largest number of faithful worshippers. A very large segment of its clergy, and many of its believers, were shot or sent to labor camps. Theological schools were closed, and church publications were prohibited. In the period between 1927 and 1940, the number of Orthodox Churches in the Russian Republic fell from 29,584 to less than 500. Between 1917 and 1940, 130,000 Orthodox priests were arrested.
The widespread persecution and internecine disputes within the church hierarchy led to the seat of Patriarch of Moscow being vacant from 1925 to 1943.
After Nazi Germany's attack on the Soviet Union in 1941, Joseph Stalin revived the Russian Orthodox Church in order to intensify patriotic support for the war effort. By 1957, about 22,000 Russian Orthodox churches had become active. But in 1959, Nikita Khrushchev initiated his own campaign against the Russian Orthodox Church and forced the closure of about 12,000 churches. By 1985, fewer than 7,000 churches remained active.
In the Soviet Union, in addition to the methodical closure and destruction of churches, the charitable and social work formerly done by ecclesiastical authorities was taken over by the state. As with all private property, Church owned property was confiscated by the state and converted to public use. The few places of worship left to the Church were legally viewed as state property which the government permitted the church to use. After the advent of state funded universal education, the Church was not permitted to carry on educational, instructional activity for children. For adults, only training for church-related occupations was allowed. With the exception of sermons during the celebration of the divine liturgy, it could not instruct the faithful or evangelise the youth. Catechism classes, religious schools, study groups, Sunday schools and religious publications were all declared illegal and banned. This caused many religious tracts to be circulated as illegal literature or samizdat. This persecution continued, even after the death of Stalin until the dissolution of the Soviet Union in 1991. Since the fall of the Soviet Union, the Russian Orthodox Church has recognized a number of New Martyrs as saints, some of whom were executed during the Mass operations of the NKVD under directives like NKVD Order No. 00447.
In the 19th century, Mexican President Benito Juárez confiscated church lands. The Mexican government's campaign against the Catholic Church after the Mexican Revolution culminated in the 1917 constitution, which contained numerous articles that Catholics perceived as violating their civil rights: outlawing monastic religious orders, forbidding public worship outside of church buildings, restricting religious organizations' rights to own property, and taking away basic civil rights of members of the clergy (priests and religious leaders were prevented from wearing their habits, were denied the right to vote, were not permitted to comment on public affairs in the press, and were denied the right to trial for violation of anticlerical laws). When the Soviet Union opened its first embassy in Mexico, the Soviet ambassador remarked that "no other two countries show more similarities than the Soviet Union and Mexico".
When the Church publicly condemned the anticlerical measures which had not been strongly enforced, the atheist President Plutarco Calles sought to vigorously enforce the provisions and enacted additional anti-Catholic legislation which was known as the Calles Law. At this time, some members of the United States government started to refer to Mexico as "Soviet Mexico" because they considered Calles' regime Bolshevik.
Weary of the persecution, a popular rebellion called the Cristero War broke out in many parts of the country, so named because the rebels believed they were fighting for Christ himself. The persecution profoundly affected the Church. Between 1926 and 1934, at least 40 priests were killed. Whereas 4,500 priests had served the people before the rebellion, in 1934 there were only 334 priests licensed by the government to serve fifteen million people, the rest having been eliminated by emigration, expulsion and assassination. By 1935, 17 states had no priest at all. In the second Cristero rebellion (1932), the Cristeros took particular exception to the socialist education which Calles had implemented and which President Cárdenas had added to the 1917 Mexican Constitution.
The Latter Day Saint movement (Mormons) has been persecuted since its founding in the 1830s. This persecution drove its members from New York and Ohio to Missouri, where they continued to suffer violent attacks. In 1838, Gov. Lilburn Boggs declared that the Mormons had made war on the state of Missouri and "must be treated as enemies, and must be exterminated or driven from the state". At least 10,000 were expelled from the state. In the most violent of the altercations of this period, the Haun's Mill Massacre, 17 were murdered by an anti-Mormon mob and 13 were wounded. The Extermination Order signed by Governor Boggs was not formally invalidated until 25 June 1976, 137 years after being signed.
The Mormons subsequently fled to Nauvoo, Illinois, where hostilities again escalated. In Carthage, Ill., where Joseph Smith was being held on the charge of treason, a mob stormed the jail and killed him. Smith's brother, Hyrum, was also killed. After a succession crisis, most united under Brigham Young, who organized an evacuation from the United States after the federal government refused to protect them. 70,000 Mormon pioneers crossed the Great Plains to settle in the Salt Lake Valley and surrounding areas. After the Mexican–American War, the area became the US territory of Utah. Over the next 63 years, several actions by the federal government were directed against Mormons in the Mormon Corridor, including the Utah War, the Morrill Anti-Bigamy Act, the Poland Act, "Reynolds v. United States", the Edmunds Act, the Edmunds–Tucker Act, and the Reed Smoot hearings.
Queen Ranavalona I (reigned 1828–1861) issued a royal edict prohibiting the practice of Christianity in Madagascar, expelled British missionaries from the island, and sought to stem the growth of conversion to Christianity within her realm. Far more, however, were punished in other ways: many were required to undergo the "tangena" ordeal, while others were condemned to hard labor or the confiscation of their land and property, and many of these consequently died. The tangena ordeal was commonly administered to determine the guilt or innocence of an accused person for any crime, including the practice of Christianity, and involved ingestion of the poison contained within the nut of the tangena tree ("Cerbera odollam"). Survivors were deemed innocent, while those who perished were assumed guilty.
In 1838, it was estimated that as many as 100,000 people in Imerina died as a result of the "tangena" ordeal, constituting roughly 20% of the population, contributing to a strongly unfavorable view of Ranavalona's rule in historical accounts. Malagasy Christians would remember this period as "ny tany maizina", or "the time when the land was dark". Persecution of Christians intensified in 1840, 1849 and 1857; in 1849, deemed the worst of these years by W.E. Cummins (1878), a British missionary to Madagascar, 1,900 people were fined, jailed or otherwise punished in relation to their Christian faith, including 18 executions.
The Second Republic, proclaimed in 1931, attempted to establish a regime with a separation between State and Church, as had happened in France (1905). Once established, the Republic passed legislation barring the Church from educational activities. A process of political polarisation characterised the Spanish Second Republic: party divisions became increasingly embittered, and questions of religious identity came to assume a major political significance. Various Church institutions presented the situation resulting from the proclamation of the Second Republic as an anti-Catholic, Masonic, Jewish, and Communist international conspiracy that heralded a clash between God and atheism, chaos and harmony, Good and Evil. High-ranking Church officials such as Isidro Goma, bishop of Tudela, reminded their Christian subjects of their obligation to vote "for the righteous", and their priests to "educate the consciences."
A similar approach is attested in 1912, when the bishop of Almería, José Ignacio de Urbina (founder of the National Anti-Masonic and Anti-Semitic League), announced 'a decisive battle that must be unleashed' between the "light" and "darkness." From the early stages of the Second Spanish Republic, far-right forces imbued with an ultra-Catholic spirit attempted to overthrow the Republic. Carlists, Africanistas, and Catholic theologians fostered an atmosphere of social and racial hatred in their speeches and writings.
Stanley Payne suggested that the persecution of right-wingers and people associated with the Catholic Church before and at the beginning of the Spanish Civil War involved the murder of priests and other clergy, as well as thousands of lay people, by sections of nearly all the leftist groups, while a killing spree was also unleashed across the Nationalist zone. During the Spanish Civil War of 1936–1939, and especially in the early months of the conflict, individual clergymen and entire religious communities were executed by leftists, including communists and anarchists. The death toll of the clergy alone included 13 bishops, 4,172 diocesan priests and seminarians, 2,364 monks and friars and 283 nuns, for a total of 6,832 clerical victims.
In addition to murders of clergy and the faithful, destruction of churches and desecration of sacred sites and objects were widespread. On the night of 19 July 1936 alone, some fifty churches were burned. In Barcelona, out of the 58 churches, only the Cathedral was spared, and similar desecrations occurred almost everywhere in Republican Spain.
Exceptions were Biscay and Gipuzkoa, where the Christian Democratic Basque Nationalist Party, after some hesitation, supported the Republic and halted persecution in the areas held by the Basque Government. All Catholic churches in the Republican zone were closed. The desecration was not limited to Catholic churches: synagogues and Protestant churches were also pillaged and closed, though some small Protestant churches were spared. Franco's rising regime would keep Protestant churches and synagogues closed, as it permitted only the Catholic Church.
Payne called the terror the "most extensive and violent persecution of Catholicism in Western History, in some way even more intense than that of the French Revolution."
The persecution drove Catholics to the Nationalists, even more than would otherwise have been expected, as the Nationalists defended their religious interests and survival.
Hitler and the Nazis received some support from Christian communities, mainly due to their common cause against the anti-religious Communists, as well as their mutual Judeophobia and anti-Semitism. Once in power, the Nazis moved to consolidate their power over the German churches and bring them in line with Nazi ideals. Some historians say that Hitler had a general covert plan, which some say existed even before the Nazis' rise to power, to destroy Christianity within the Reich, which was to be accomplished through control and subversion of the churches and which would be completed after the war. The Third Reich founded its own version of Christianity, called Positive Christianity, which made major changes in the interpretation of the Bible: it held that Jesus Christ was the son of God but not a Jew, argued that Jesus despised Jews, and blamed the Jews solely for Jesus's death. The Nazi government thus consolidated religious power, using its allies to merge the Protestant churches into the Protestant Reich Church. The syncretist project of Positive Christianity was abandoned in 1940.
Like other intelligentsia, Christian leaders were sometimes persecuted for their anti-Nazi political activities. Between 1939 and 1945, an estimated 3,000 members, 18% of the Polish clergy, were murdered for their suspected ties to the Polish Resistance or left-wing groups, or for sheltering Jews (punishable by death).
Outside mainstream Christianity, the Jehovah's Witnesses were targets of Nazi persecution for their refusal to swear allegiance to the Nazi government. In Nazi Germany in the 1930s and early 1940s, Jehovah's Witnesses refused to renounce their political neutrality and were placed in concentration camps as a result. The Nazi government gave detained Jehovah's Witnesses the option of release if they signed a document renouncing their faith, submitting to state authority, and supporting the German military. Historian Hans Hesse said, "Some five thousand Jehovah's Witnesses were sent to concentration camps where they alone were 'voluntary prisoners', so termed because the moment they recanted their views, they could be freed. Some lost their lives in the camps, but few renounced their faith".
The Bruderhof was also dissolved by the Nazi government because its members refused to pledge allegiance to Hitler. In 1937 their property was confiscated and the group fled to England.
At times, political and religious animosity against Jehovah's Witnesses has led to mob action and government oppression in various countries, including Cuba, the United States, Canada and Singapore. The religion's doctrine of political neutrality has led to the imprisonment of members who refused conscription (for example in Britain during World War II and afterwards during the period of compulsory national service).
Religion in Albania was subordinated to the interests of Marxism during the rule of the country's communist party, when all religions were suppressed; this subordination was used to justify the communist stance of state atheism from 1967 to 1991. The Agrarian Reform Law of August 1945 nationalized most of the property belonging to religious institutions, including the estates of mosques, monasteries, orders, and dioceses. Many clergy and believers were tried and some of them were executed. All foreign Roman Catholic priests, monks, and nuns were expelled in 1946. Churches, cathedrals and mosques were seized by the military and converted into basketball courts, movie theaters, dance halls, and the like, with members of the clergy stripped of their titles and imprisoned. Around 6,000 Albanians were disappeared by agents of the communist government, their bodies never found or identified. Albanians continued to be imprisoned, tortured and killed for their religious practices well into 1991.
Religious communities or branches that had their headquarters outside the country, such as the Jesuit and Franciscan orders, were henceforth ordered to terminate their activities in Albania. Religious institutions were forbidden to have anything to do with the education of the young, because that had been made the exclusive province of the state. All religious communities were prohibited from owning real estate and they were also prohibited from operating philanthropic and welfare institutions and hospitals. Enver Hoxha's overarching goal was the eventual destruction of all organized religion in Albania, despite some variance in approach.
According to Pope Emeritus Benedict XVI, Christians are the most persecuted group in the contemporary world. The Holy See has reported that over 100,000 Christians are violently killed annually because of some relation to their faith. According to the World Evangelical Alliance, over 200 million Christians are denied fundamental human rights solely because of their faith. Of the 100–200 million Christians alleged to be under assault, the majority are persecuted in Muslim-dominated nations. Paul Vallely has said that Christians suffer numerically more than any other faith group, or any group without faith, in the world. Of the world's three largest religions, Christians are allegedly the most persecuted, with 80% of all acts of religious discrimination being directed at Christians, who make up only 33% of the world's population.
Every year, the Christian non-profit organization Open Doors publishes the World Watch List – a list of the top 50 countries which it designates as the most dangerous for Christians. The 2018 World Watch List has the following countries as its top ten: North Korea, Afghanistan, Somalia, Sudan, Pakistan, Eritrea, Libya, Iraq, Yemen, Iran.
Christians have faced increasing levels of persecution in the Muslim world. Muslim-majority nations in which Christian populations have suffered acute discrimination, persecution, repression, violence and in some cases death, mass murder or ethnic cleansing include Iraq, Iran, Syria, Pakistan, Afghanistan, Saudi Arabia, Yemen, Somalia, Qatar, Kuwait, Indonesia, Malaysia, and the Maldives.
Furthermore, any Muslim—whether born into a Muslim family or converted at some point in his or her life—who converts or re-converts to Christianity is considered an apostate. Apostasy, the conscious abandonment of Islam by a Muslim in word or deed, including conversion to Christianity, is punishable as a crime under applications of the Sharia. There are, however, cases in which a Muslim adopts the Christian faith secretly, without declaring his or her apostasy. As a result, they are practising Christians but still legally Muslims, and can face the death penalty according to the Sharia. Meriam Ibrahim, a Sudanese woman, was sentenced to death for apostasy in 2014 because the government of Sudan classified her as a Muslim, even though she was raised as a Christian.
A report by the international Catholic charity organisation Aid to the Church in Need said that the religiously motivated ethnic cleansing of Christians is so severe that they are set to disappear completely from parts of the Middle East within a decade.
A report commissioned by the British foreign secretary Jeremy Hunt and published in May 2019 stated that the level and nature of persecution of Christians in the Middle East "is arguably coming close to meeting the international definition of genocide, according to that adopted by the UN." The report cited Algeria, Egypt, Iran, Iraq, Syria and Saudi Arabia as countries where "the situation of Christians and other minorities has reached an alarming stage." The report attributed the sources of persecution to both extremist groups and the failure of state institutions.
In Afghanistan, Abdul Rahman, a 41-year-old citizen, was charged in 2006 with rejecting Islam, a crime punishable by death under Sharia law. He has since been released into exile in the West under intense pressure from Western governments.
In 2008, the Taliban killed a British charity worker, Gayle Williams, "because she was working for an organization which was preaching Christianity in Afghanistan" even though she was extremely careful not to try to convert Afghans.
On the night of 26–27 March 1996, seven monks from the monastery of Tibhirine in Algeria, belonging to the Roman Catholic Trappist Order of Cistercians of the Strict Observance (O.C.S.O.), were kidnapped during the Algerian Civil War. They were held for two months and were found dead on 21 May 1996. The circumstances of their kidnapping and death remain controversial; the Armed Islamic Group (GIA) allegedly took responsibility for both, but the then French military attaché, retired General François Buchwalter, reported that they were accidentally killed by the Algerian army in a rescue attempt, and claims have been made that the GIA itself was a cat's paw of Algeria's secret services (DRS).
A Muslim gang allegedly looted and burned to the ground, a Pentecostal church in Tizi Ouzou on 9 January 2010. The pastor was quoted as saying that worshipers fled when local police supposedly left a group of local protestors unchecked. Many Bibles were burnt.
There has been large-scale persecution of Christians in Bangladesh over decades, including forced conversions, the destruction of churches, the usurpation of Christians' land, and killings.
This has included abductions, attacks, and forced conversions targeting Rohingya Christians in refugee camps in Bangladesh.
In Chad, Christians form a minority, at 41% of the population. They have faced an increasing level of persecution from local officials as well as Islamist groups like Boko Haram and tribal herdsmen. Persecution includes burning of Christian villages, closing of markets and killings.
Foreign missionaries are allowed in the country if they restrict their activities to social improvements and refrain from proselytizing. Particularly in Upper Egypt, the rise in extremist Islamist groups such as the Gama'at Islamiya during the 1980s was accompanied by increased attacks on Copts and on Coptic Orthodox churches; these have since declined with the decline of those organizations, but still continue. The police have been accused of siding with the attackers in some of these cases.
There have been periodic acts of violence against Christians since, including attacks on Coptic Orthodox churches in Alexandria in April 2006, and sectarian violence in Dahshur in July 2012. From 2011 to 2013, more than 150 kidnappings, for ransom, of Christians had been reported in the Minya governorate. Christians have been convicted for "contempt of religion", such as poet Fatima Naoot in 2016.
Although Christians are a minority in Indonesia, Christianity is one of the six officially recognized religions of Indonesia and religious freedom is permitted. There are nonetheless religious tensions and persecutions in the country, most of which are civil rather than carried out by the state.
In January 1999 tens of thousands died when Muslim gunmen terrorized Christians who had voted for independence in East Timor. These events came toward the end of the East Timor genocide, which began around 1975.
In Indonesia, religious conflicts have typically occurred in Western New Guinea, Maluku (particularly Ambon), and Sulawesi. The presence of Muslims in these traditionally Christian regions is in part a result of the "transmigrasi" program of population re-distribution. Conflicts have often occurred because of the aims of radical Islamist organizations such as Jemaah Islamiah or Laskar Jihad to impose Sharia, with such groups attacking Christians and destroying over 600 churches. In 2005, three Christian girls were beheaded in retaliation for earlier Muslim deaths in Christian-Muslim rioting. The perpetrators were imprisoned for the murders, including Jemaah Islamiyah's district ringleader Hasanuddin. On going to jail, Hasanuddin said, "It's not a problem (if I am being sentenced to prison), because this is a part of our struggle." In November 2011, another clash between Christians and Muslims occurred in Ambon, where Muslims allegedly set fire to several Christian houses, forcing the occupants to flee.
In December 2011, a second church in Bogor, West Java was ordered to halt its activities by the local mayor. Another Catholic church had been built there in 2005. Previously a Christian church, GKI Taman Yasmin, had been sealed. Local authorities refused to lift a ban on the activities of the church, despite an order from the Supreme Court of Indonesia. Local authorities have persecuted the Christian church for three years. While the state has ordered religious toleration, it has not enforced these orders.
In Aceh Province, the only province in Indonesia with autonomous Islamic Shari'a law, 20 churches in Singkil Regency face the threat of demolition: a gubernatorial decree requires the approval of 150 worshippers, while a ministerial decree also requires the approval of 60 local residents of different faiths. On 30 April 2012, all 20 churches (17 Protestant churches, 2 Catholic churches and one place of worship belonging to followers of a local nondenominational faith) were closed down by order of the Acting Regent, who also ordered members of the congregations to tear down the churches themselves. Most of the churches slated for demolition were built in the 1930s and 1940s. The regency has two churches still open, both built after 2000.
On 9 May 2017, the Christian governor of Jakarta, Basuki Tjahaja Purnama, was sentenced to two years in prison by the North Jakarta District Court after being found guilty of committing a criminal act of blasphemy.
Though Iran recognizes Assyrian and Armenian Christians as ethnic and religious minorities (along with Jews and Zoroastrians) and they have representatives in the Parliament, they are nonetheless forced to adhere to Iran's strict interpretation of Islamic law. After the 1979 Revolution, Muslim converts to Christianity (typically to Protestant Christianity) have been arrested and sometimes executed. Youcef Nadarkhani is an Iranian Christian pastor who was arrested on charges of apostasy in October 2009 and was subsequently sentenced to death. In June 2011 the Iranian Supreme Court overruled his death sentence on condition that he recant, which he refused to do. In a reversal on 8 September 2012 he was acquitted of the charges of apostasy and extortion, and sentenced to time served for the charge of "propaganda against the regime," and immediately released.
According to UNHCR, although Christians (almost exclusively ethnic Assyrians and Armenians) now represent less than 5% of the total Iraqi population, they make up 40% of the refugees now living in nearby countries.
In 1987, the last Iraqi census counted 1.4 million Christians. They were tolerated under the secular regime of Saddam Hussein, who even made one of them, Tariq Aziz, his deputy. However, persecution by Saddam Hussein continued against the Christians on an ethnic, cultural and racial level, as the vast majority are Mesopotamian Eastern Aramaic-speaking ethnic Assyrians (also known as Chaldo-Assyrians). The Assyro-Aramaic language and script were repressed, the giving of Hebraic/Aramaic Christian names or Akkadian/Assyro-Babylonian names was forbidden (Tariq Aziz's real name was Michael Youhanna, for example), and Saddam exploited religious differences between Assyrian denominations such as the Chaldean Catholics, the Assyrian Church of the East, the Syriac Orthodox Church, the Assyrian Pentecostal Church and the Ancient Church of the East in an attempt to divide them. Many Assyrians and Armenians were ethnically cleansed from their towns and villages under the al-Anfal Campaign in 1988, despite this campaign being aimed primarily at Kurds.
In 2004, five churches were destroyed by bombing, and Christians were targeted by kidnappers and Islamic extremists, leading to tens of thousands of Christians fleeing to Assyrian regions in the north or leaving the country altogether.
In 2006, the number of Assyrian Christians dropped to between 500,000 and 800,000, of whom 250,000 lived in Baghdad. An exodus to the Assyrian homeland in northern Iraq, and to neighboring countries of Syria, Jordan, Lebanon and Turkey left behind closed parishes, seminaries and convents. As a small minority, who until recently were without a militia of their own, Assyrian Christians were persecuted by both Shi'a and Sunni Muslim militias, Kurdish Nationalists, and also by criminal gangs.
As of 21 June 2007, the UNHCR estimated that 2.2 million Iraqis had been displaced to neighbouring countries, and 2 million were displaced internally, with nearly 100,000 Iraqis fleeing to Syria and Jordan each month. A 25 May 2007 article notes that in the past seven months 69 people from Iraq have been granted refugee status in the United States.
In 2007, Chaldean Catholic Church priest Fr. Ragheed Aziz Ganni and subdeacons Basman Yousef Dawid, Wahid Hanna Esho, and Gassan Isam Bidawed were killed in the ancient city of Mosul. Ganni was driving with his three deacons when they were stopped and ordered to convert to Islam; when they refused, they were shot. Ganni was the pastor of the Chaldean Church of the Holy Spirit in Mosul and a 2003 graduate of the Pontifical University of Saint Thomas Aquinas ("Angelicum") in Rome, with a licentiate in ecumenical theology. Six months later, the body of Paulos Faraj Rahho, archbishop of Mosul, was found buried near Mosul. He had been kidnapped on 29 February 2008, when his bodyguards and driver were killed. See 2008 attacks on Christians in Mosul for more details.
In 2010 there was an attack on the Our Lady of Salvation Syriac Catholic cathedral of Baghdad, Iraq, that took place during Sunday evening Mass on 31 October 2010. The attack left at least 58 people dead, after more than 100 had been taken hostage. The al-Qaeda-linked Sunni insurgent group The Islamic State of Iraq claimed responsibility for the attack; though Shia cleric Ayatollah Ali al-Sistani, amongst others condemned the attack.
In 2013, Assyrian Christians were departing for their ancestral heartlands in the Nineveh plains, around Mosul, Erbil and Kirkuk. Assyrian militias were established to protect villages and towns.
During the 2014 Northern Iraq offensive, the Islamic State of Iraq issued a decree in July that all indigenous Assyrian Christians in the area of its control must leave the lands they have occupied for 5,000 years, be subject to extortion in the form of a special tax of approximately $470 per family, convert to Islam, or be murdered. Many of them took refuge in nearby Kurdish-controlled regions of Iraq. Christian homes were painted with the Arabic letter ن ("nūn") for "Nassarah" (an Arabic word for Christians) and a declaration that they are the "property of the Islamic State". On 18 July, ISIS militants appeared to have changed their minds and announced that all Christians would need to leave or be killed. Most of those who left had their valuable possessions stolen by the Islamic terrorists. According to Patriarch Louis Sako, there were no Christians remaining in the once Christian-dominated city of Mosul for the first time in the nation's history, although this claim has not been verified.
During an attack on the Assyrian Christian town of Qaraqosh, a 5-year-old boy, the son of a founding member of St. George's Anglican Church in Baghdad, was killed by Islamic State (ISIS) militants, who cut the boy in half.
In Malaysia, although Islam is the official religion, Christianity is tolerated under Articles 3 and 11 of the Malaysian constitution. The spread of Christianity is nonetheless a particular sore point for the Muslim majority, and the Malaysian government has persecuted Christian groups perceived to be attempting to proselytize Muslim audiences. Those showing interest in the Christian faith, or in other faith practices not considered orthodox by state religious authorities, are usually sent either by the police or by their family members to state-funded "Faith Rehabilitation Centres", where they are counseled to remain faithful to Islam; some states have provisions in their respective Shariah legislation for penalties for apostasy from Islam.
It has been the practice of the church in Malaysia not to actively proselytize to the Muslim community. Christian literature is required by law to carry a caption "for non-Muslims only". Article 11(4) of the Federal Constitution of Malaysia allows the states to prohibit the propagation of other religions to Muslims, and most (with the exception of Penang, Sabah, Sarawak and the Federal Territories) have done so. There is no well-researched agreement on the actual number of Malaysian Muslim converts to Christianity. According to the latest population census released by the Malaysian Statistics Department there are none; according to Ustaz Ridhuan Tee, there are 135; and according to Tan Sri Dr Harussani Zakaria, there are 260,000. See also Status of religious freedom in Malaysia.
There are, however, cases in which a Muslim will adopt the Christian faith without openly declaring his or her apostasy. In effect, they are practicing Christians, but legally Muslims.
In the 11 Northern states of Nigeria that have introduced the Islamic system of law, the Sharia, sectarian clashes between Muslims and Christians have resulted in many deaths, and some churches have been burned. More than 30,000 Christians were displaced from their homes in Kano, the largest city in northern Nigeria.
The Boko Haram Islamist group has bombed churches and killed numerous Christians who they regard as kafirs (infidels). Some Muslim aid organisations in Nigeria reportedly reserve aid for Muslims displaced by Boko Haram. Christian Bishop William Naga reported to Open Doors UK that, "They will give food to the refugees, but if you are a Christian they will not give you food. They will openly tell you that the relief is not for Christians."
In Pakistan, 1.5% of the population are Christian. Pakistani law mandates that "blasphemies" of the Qur'an are to be met with punishment. At least a dozen Christians have been given death sentences, and half a dozen murdered after being accused of violating blasphemy laws. In 2005, 80 Christians were behind bars due to these laws. The Pakistani-American author Farahnaz Ispahani has called treatment of Christians in Pakistan a "drip-drip genocide."
Ayub , a Christian, was convicted of blasphemy and sentenced to death in 1998. He was accused by a neighbor of stating that he supported British writer Salman Rushdie, author of "The Satanic Verses". Lower appeals courts upheld the conviction. However, before the Pakistan Supreme Court, his lawyer was able to prove that the accuser had used the conviction to force his family off their land and had then acquired control of the property. He has since been released.
In October 2001, gunmen on motorcycles opened fire on a Protestant congregation in the Punjab, killing 18 people. The identities of the gunmen are unknown. Officials think it might be a banned Islamic group.
In March 2002, five people were killed in an attack on a church in Islamabad, including an American schoolgirl and her mother.
In August 2002, masked gunmen stormed a Christian missionary school for foreigners in Islamabad; six people were killed and three injured. None of those killed were children of foreign missionaries.
In August 2002, grenades were thrown at a church in the grounds of a Christian hospital in north-west Pakistan, near Islamabad, killing three nurses.
On 25 September 2002, two terrorists entered the "Peace and Justice Institute", Karachi, where they separated Muslims from the Christians, and then murdered seven Christians by shooting them in the head. All of the victims were Pakistani Christians. Karachi police chief Tariq Jamil said the victims had their hands tied and their mouths had been covered with tape.
In December 2002, three young girls were killed when a hand grenade was thrown into a church near Lahore on Christmas Day.
In November 2005, 3,000 Muslims attacked Christians in Sangla Hill in Pakistan and destroyed Roman Catholic, Salvation Army and United Presbyterian churches. The attack was over allegations of violation of blasphemy laws by a Pakistani Christian named Yousaf . The attack was condemned by some political parties in Pakistan.
On 5 June 2006, a Pakistani Christian, Nasir Ashraf, was assaulted for the "sin" of using public drinking water facilities near Lahore.
One year later, in August 2007, a Christian missionary couple, Rev. Arif and Kathleen Khan, were gunned down by Muslim terrorists in Islamabad. Pakistani police believed that the murders were committed by a member of Khan's parish over alleged sexual harassment by Khan. This assertion is widely doubted by Khan's family as well as by Pakistani Christians.
In August 2009, six Christians, including four women and a child, were burnt alive by Muslim militants and a church set ablaze in Gojra, Pakistan when violence broke out after alleged desecration of a Qur'an in a wedding ceremony by Christians.
On 8 November 2010, a Christian woman from Punjab Province, Asia Noreen Bibi, was sentenced to death by hanging for violating Pakistan's blasphemy law. The accusation stemmed from a 2009 incident in which Bibi became involved in a religious argument after offering water to thirsty Muslim farm workers. The workers later claimed that she had blasphemed against Muhammad. Until 2019, Bibi was held in solitary confinement. A cleric had offered $5,800 to anyone who killed her. As of May 2019, Bibi and her family have left Pakistan and now reside in Canada.
On 2 March 2011, the only Christian minister in the Pakistan government was shot dead. Shahbaz Bhatti, Minister for Minorities, was in his car along with his niece. Around 50 bullets struck the car. Over 10 bullets hit Bhatti. Before his death, he had publicly stated that he was not afraid of the Taliban's threats and was willing to die for his faith and beliefs. He was targeted for opposing the anti-free speech "blasphemy" law, which punishes insulting Islam or its Prophet. A fundamentalist Muslim group claimed responsibility.
On 22 September 2013, at least 78 people, including 34 women and 7 children, were killed and over 100 wounded in a suicide attack on the historic All Saints Church in Peshawar after a Sunday morning service.
On 4 November 2014, a Christian couple were burnt alive in the Punjab province of Pakistan over a false rumor of blasphemy against the Quran.
On 15 March 2015, 10 people were killed in suicide bombings on Christian Churches in the city of Lahore.
On 27 March 2016, a suicide bomber from a Pakistani Taliban faction killed at least 60 people and injured 300 others in an attack at Gulshan-e-Iqbal Park in Lahore, Pakistan, and the group claimed responsibility for the attack, saying it intentionally targeted Christians celebrating Easter Sunday.
On 18 December 2017, 6 people were killed and dozens injured in a suicide bombing on a Methodist Church in the city of Quetta, Balochistan province.
On 3 April 2018, 4 members of a Christian family were shot to death and a young girl injured in the city of Quetta where they had arrived from Punjab province to celebrate Easter.
On 5 March 2018, an armed mob of over two dozen attacked the Gospel Assembly church in Punjab province and beat Christian worshippers, including women and children.
Saudi Arabia is an Islamic state that practices Wahhabism and restricts all other religions, including the possession of religious items such as the Bible, crucifixes, and Stars of David. Strict sharia is enforced. Muslims are forbidden to convert to another religion. If one does so and does not recant, they can be executed.
Christians in Somalia face persecution associated with the ongoing civil war in that country.
In September 2011 militants sworn to eradicate Christianity from Somalia beheaded two Christian converts. A third Christian convert was beheaded in Mogadishu in early 2012.
In 1992 there were mass arrests and torture of local priests. Prior to partition, southern Sudan had a number of Christian villages. These were subsequently wiped out by Janjaweed militias.
Syria has been home to Christianity from the 1st to 3rd centuries CE onwards. The majority of Syrian Christians are Arameans-Syriacs, once Western Aramaic-speaking but now largely Arabic-speaking, with smaller minorities of Eastern Aramaic-speaking Assyrians and Armenians also extant. While religious persecution has been relatively low-level compared to other Middle Eastern nations, many of the Christians have been pressured into identifying as Arab Christians, with the Assyrian and Armenian groups retaining their native languages.
On 17 October 1850 the Muslim majority began rioting against the Uniate Catholics – a minority that lived in the communities of Judayda, in the city of Aleppo.
Christians make up approximately 10% of Syria's population of 17.2 million people.
In FY 2016, when the US dramatically increased the number of refugees admitted from Syria, the US let in 12,587 refugees from the country. Less than 1% were Christian according to the Pew Research Center analysis of State Department Refugee Processing Center data.
The Ecumenical Patriarchate of Constantinople is still in a difficult position. Turkish law requires the Ecumenical Patriarch to be an ethnic Greek who has held Turkish citizenship since birth, although most members of Turkey's Greek minority have been expelled. The state's expropriation of church property is an additional difficulty faced by the Church of Constantinople. In November 2007, a 17th-century chapel of "Our Lord's Transfiguration" at the Halki seminary was almost totally demolished by the Turkish forestry authority. No advance warning was given for the demolition work, and it was stopped only after appeals were filed by the Ecumenical Patriarch.
The difficulties currently experienced by the Assyrians and Armenian Orthodox minorities in Turkey are the result of an anti-Armenian and anti-Christian attitude which is espoused by ultra-nationalist groups such as the Grey Wolves. According to the Minority Rights Group, the Turkish government recognizes Armenians and Assyrians as minorities but in Turkey, this term is used to denote second-class status. In the aftermath of the Sheikh Said rebellion, the Syriac Orthodox Church and the Assyrian Church of the East were subjected to harassment by Turkish authorities, on the grounds that some Assyrians allegedly collaborated with the rebelling Kurds. Consequently, mass deportations took place and Assyrian Patriarch Mar Ignatius Elias III was expelled from the Mor Hananyo Monastery which was turned into a Turkish barrack. The patriarchal seat was then temporarily transferred to Homs.
In February 2006, Father Andrea Santoro was murdered in Trabzon. On 18 April 2007, three employees of the Zirve Publishing House, a Bible publishing house in Malatya, Turkey, were attacked, tortured and murdered by five Sunni Muslim assailants.
The Christian presence in Yemen dates back to the fourth century AD, when a number of Himyarites embraced Christianity through the efforts of Theophilos the Indian. There are currently no official statistics on their numbers, but they are estimated at between 3,000 and 25,000 people, most of them refugees or temporary residents. Freedom of worship, conversion from Islam, and the establishment of facilities dedicated to worship are not recognized as rights in the country's constitution and laws. At the same time, Wahhabi activities linked to Al-Islah have been facilitated, financed and encouraged on multiple fronts, including by the Ministry of Endowments and Guidance, which says that its task is "to contribute to the development of Islamic awareness and circulation of the publication Education and Islamic morals and consolidation in the life of public and private citizens."
The Missionaries of Charity, founded by Mother Teresa, have worked in Aden since 1992 and have three other centers in Sana'a, Taiz and Hodeidah. In 1998, three Catholic nuns, two from India and one from the Philippines, were killed in Hodeidah at the hands of a member of Al-Islah named Abdullah al-Nashiri, who argued that they were calling Muslims to convert to Christianity. In 2002, three Americans were killed at a Baptist hospital at the hands of another Al-Islah member named Abed Abdul Razak Kamel. Survivors say the hospital had become "a political football" for Islamists, who talked about it often in mosques and described hospital workers as "spies", but they emphasized that these views are held by only a minority of Yemenis. In December 2015, an old Catholic church in Aden was destroyed.
Since the escalation of the Yemeni crisis in March 2015, six Salesian priests of Don Bosco and twenty workers for charitable missions have remained in the country; Pope Francis described them as showing courage and fortitude amid war and conflict. He called on the Apostolic Vicar of Southern Arabia to pray for all the oppressed and tortured, those expelled from their homes, and those killed unjustly. In any case, regardless of the warring forces' positions on religious freedom, the testimonies of beneficiaries of its services indicate that the Missionaries of Charity were not active in the field of evangelization.
On 4 March 2016, gunmen attacked a Missionaries of Charity home in Aden in what became known as the Mother Teresa Massacre, killing 16 people, including four Catholic nuns (two from Rwanda and the others from India and Kenya), a Yemeni cook, two guards and five Ethiopian women, all of them volunteers. An Indian priest, Tom Uzhunnalil, was kidnapped. The identities of the attackers are unknown; media outlets published a statement attributed to Ansar al-Sharia, one of the many jihadist organizations currently active in the country, but the group denied involvement in the incident.
Bhutan is a conservative Buddhist country. Article 7 of the 2008 constitution guarantees religious freedom, but also forbids conversion "by means of coercion or inducement". According to Open Doors, this hinders the ability of Christians to proselytize to many Bhutanese.
During the Cultural Revolution, Christian churches, monasteries, and cemeteries were closed down and sometimes converted to other uses, looted, and destroyed.
The Chinese Communist Party and government try to maintain tight control over all religions, so the only legal Christian churches (the Three-Self Patriotic Movement and the Chinese Patriotic Catholic Association) are those under Communist Party control. Churches that are not controlled by the government are shut down, and their members are imprisoned. Gong Shengliang, head of the South China Church, was sentenced to death in 2001. Although his sentence was commuted to a jail term, Amnesty International reports that he has been tortured. A Christian lobby group says that about 300 Christians caught attending unregistered house churches were in jail in 2004.
In January 2016, Rev. Gu Yuese, a prominent Christian church leader who criticised the mass removal of church crucifixes by the government, was arrested for "embezzling funds". Chinese authorities have taken down hundreds of crosses in Zhejiang Province, known as "China's bible belt". Gu led China's largest authorised church, with a capacity of 5,000, in Hangzhou, the capital of Zhejiang.
The Associated Press reported in 2018 that China's leader and Communist Party general secretary Xi Jinping "is waging the most severe systematic suppression of Christianity in the country since religious freedom was written into the Chinese constitution in 1982," a campaign which has involved "destroying crosses, burning bibles, shutting churches and ordering followers to sign papers renouncing their faith".
Muslims in India who convert to Christianity have been subjected to harassment, intimidation, and attacks by Muslims. In Jammu and Kashmir, a Christian convert and missionary, Bashir Tantray, was killed, allegedly by Islamic militants in 2006.
A Christian priest, K.K. Alavi, a 1970 convert from Islam, raised the ire of his former Muslim community and received many death threats. An Islamic terrorist group named "The National Development Front" actively campaigned against him. In the southern Indian state of Kerala, which has an ancient pre-Islamic community of Eastern Rite Christians, Islamic terrorists chopped off the hand of Professor T.J. Joseph over an allegation of blasphemy against Muhammad.
The organisations involved in the persecution of Christians have stated that the violence is an expression of the "spontaneous anger" of "vanvasis" against "forcible conversion" activities undertaken by missionaries. Christians have disputed these claims, describing the belief as a myth and as Sangh Parivar propaganda; the opposing organisations object in any case to all conversions, which they see as a "threat to national unity". Religious scholar Cyril Veliath of Sophia University stated that the attacks by Hindus on Christians were the work of individuals motivated by "disgruntled politicians or phony religious leaders", and that where religion is concerned the typical Hindu is an "exceptionally amicable and tolerant person (...) Hinduism as a religion could well be one of the most accommodating in the world. Rather than confront and destroy, it has a tendency to welcome and assimilate." According to Rudolf C. Heredia, religious conversion was a critical issue even before the creation of the modern state. Mohandas K. Gandhi opposed the Christian missionaries, calling them remnants of colonial Western culture. He claimed that by converting to Christianity, Hindus had changed their nationality.
In its controversial annual human rights report for 1999, the United States Department of State criticised India for "increasing societal violence against Christians." The report listed over 90 incidents of anti-Christian violence, ranging from damage to religious property to violence against Christian pilgrims. In 1997, twenty-four such incidents were reported. Recent waves of anti-conversion laws passed by some Indian states, such as Chhattisgarh, Gujarat and Madhya Pradesh, are claimed by the Bureau of Democracy, Human Rights and Labor of the US State Department to represent a gradual and continuous institutionalization of Hindutva.
North Korea is an atheist state where the public practice of religion is discouraged. "The Oxford Handbook of Atheism" states that "North Korea maintains a state-sanctioned and enforced atheism".
North Korea heads the Open Doors watchlist of the 50 countries in which Christians are persecuted the most, and has held the top spot for 12 years in a row. It is currently estimated that more than 50,000 Christians, about 20% of North Korea's Christian community, are held in concentration camps because of their faith, where they are systematically subjected to imprisonment, unrestrained torture, mass starvation, and even death by asphyxiation in gas chambers. The number of Christians murdered for their faith appears to be increasing: the death toll was 1,200 in 2013 and roughly doubled in 2014, to close to 2,400.
The establishment of French Indochina once led to a high Christian population. Regime changes throughout the 19th and 20th centuries led to increased persecutions of minority religious groups. The Center for Public Policy Analysis has claimed that killings, torture or imprisonment and forced starvation of local groups are common in parts of Vietnam and Laos. In more recent years they have said there is growing persecution of Christians.
Pet
A pet, or companion animal, is an animal kept primarily for a person's company or entertainment rather than as a working animal, livestock or a laboratory animal. Popular pets are often considered to have attractive appearances, intelligence and relatable personalities, but some pets may be taken in on an altruistic basis (such as a stray animal) and accepted by the owner regardless of these characteristics.
Two of the most popular pets are dogs and cats; the technical term for a cat lover is an ailurophile and a dog lover a cynophile. Other animals commonly kept include: rabbits; ferrets; pigs; rodents, such as gerbils, hamsters, chinchillas, rats, mice, and guinea pigs; avian pets, such as parrots, passerines and fowls; reptile pets, such as turtles, alligators, crocodiles, lizards, and snakes; aquatic pets, such as fish, freshwater and saltwater snails, amphibians like frogs and salamanders; and arthropod pets, such as tarantulas and hermit crabs. Small pets may be grouped together as pocket pets, while the equine and bovine group include the largest companion animals.
Pets provide their owners (or "guardians") both physical and emotional benefits. Walking a dog can provide both the human and the dog with exercise, fresh air and social interaction. Pets can give companionship to people who are living alone or elderly adults who do not have adequate social interaction with other people. There is a medically approved class of therapy animals, mostly dogs or cats, that are brought to visit confined humans, such as children in hospitals or elders in nursing homes. Pet therapy utilizes trained animals and handlers to achieve specific physical, social, cognitive or emotional goals with patients.
People most commonly get pets for companionship, to protect a home or property or because of the perceived beauty or attractiveness of the animals. A 1994 Canadian study found that the most common reasons for not owning a pet were lack of ability to care for the pet when traveling (34.6%), lack of time (28.6%) and lack of suitable housing (28.3%), with dislike of pets being less common (19.6%). Some scholars, ethicists and animal rights organizations have raised concerns over keeping pets because of the lack of autonomy and the objectification of non-human animals.
In China, spending on domestic animals has grown from an estimated $3.12 billion in 2010 to $25 billion in 2018. The Chinese people own 51 million dogs and 41 million cats, with pet owners often preferring to source pet food internationally. There are a total of 755 million pets, increased from 389 million in 2013.
According to a survey promoted by Italian family associations in 2009, it is estimated that there are approximately 45 million pets in Italy. This includes 7 million dogs, 7.5 million cats, 16 million fish, 12 million birds, and 10 million snakes.
A 2007 survey by the University of Bristol found that 26% of UK households owned cats and 31% owned dogs, estimating total domestic populations of approximately 10.3 million cats and 10.5 million dogs in 2006. The survey also found that 47.2% of households with a cat had at least one person educated to degree level, compared with 38.4% of homes with dogs.
Sixty-eight percent of U.S. households, or about 85 million families, own a pet, according to the 2017–2018 National Pet Owners Survey conducted by the American Pet Products Association (APPA). This is up from 56 percent of U.S. households in 1988, the first year the survey was conducted. There are approximately 86.4 million pet cats and approximately 78.2 million pet dogs in the United States, and a United States 2007–2008 survey showed that dog-owning households outnumbered those owning cats, but that the total number of pet cats was higher than that of dogs. The same was true for 2011. In 2013, pets outnumbered children four to one in the United States.
Keeping animals as pets may be detrimental to their health if certain requirements are not met. An important issue is inappropriate feeding, which may produce clinical effects. The consumption of chocolate or grapes by dogs, for example, may prove fatal.
Certain species of houseplants can also prove toxic if consumed by pets. Examples include philodendrons and Easter lilies (which can cause severe kidney damage to cats) and poinsettias, begonia, and aloe vera (which are mildly toxic to dogs).
Housepets, particularly dogs and cats in industrialized societies, are also highly susceptible to obesity. Overweight pets have been shown to be at a higher risk of developing diabetes, liver problems, joint pain, kidney failure, and cancer. Lack of exercise and high-caloric diets are considered to be the primary contributors to pet obesity.
It is widely believed among the public, and among many scientists, that pets probably bring mental and physical health benefits to their owners; a 1987 NIH statement cautiously argued that existing data was "suggestive" of a significant benefit. A recent dissent comes from a 2017 RAND study, which found that at least in the case of children, having a pet "per se" failed to improve physical or mental health by a statistically significant amount; instead, the study found children who were already prone to being healthy were more likely to get pets in the first place. Unfortunately, conducting long-term randomized trials to settle the issue would be costly or infeasible.
Pets might have the ability to stimulate their caregivers, in particular the elderly, giving people someone to take care of, someone to exercise with, and someone to help them heal from a physically or psychologically troubled past. Animal company can also help people to preserve acceptable levels of happiness despite the presence of mood symptoms like anxiety or depression. Having a pet may also help people achieve health goals, such as lowered blood pressure, or mental goals, such as decreased stress. There is evidence that having a pet can help a person lead a longer, healthier life. In a 1986 study of 92 people hospitalized for coronary ailments, within a year 11 of the 29 patients without pets had died, compared to only 3 of the 52 patients who had pets. Pet ownership was shown to significantly reduce triglycerides, and thus heart disease risk, in the elderly. A study by the National Institutes of Health found that people who owned dogs were less likely to die as a result of a heart attack than those who did not own one. There is some evidence that pets may have a therapeutic effect in dementia cases. Other studies have shown that for the elderly, good health may be a requirement for having a pet, and not a result. Dogs trained to be guide dogs can help people with vision impairment. Dogs trained in the field of Animal-Assisted Therapy (AAT) can also benefit people with other disabilities.
People residing in a long-term care facility, such as a hospice or nursing home, may experience health benefits from pets. Pets help them to cope with the emotional issues related to their illness. They also offer physical contact with another living creature, something that is often missing in an elder's life. Pets for nursing homes are chosen based on the size of the pet, the amount of care that the breed needs, and the population and size of the care institution. Appropriate pets go through a screening process and, if it is a dog, additional training programs to become a therapy dog. There are three types of therapy dogs: facility therapy dogs, animal-assisted therapy dogs, and therapeutic visitation dogs. The most common therapy dogs are therapeutic visitation dogs. These dogs are household pets whose handlers take time to visit hospitals, nursing homes, detention facilities, and rehabilitation facilities. Different pets require varying amounts of attention and care; for example, cats may have lower maintenance requirements than dogs.
In addition to providing health benefits for their owners, pets also impact the social lives of their owners and their connection to their community. There is some evidence that pets can facilitate social interaction. Leslie Irvine, Assistant Professor of Sociology at the University of Colorado at Boulder, has focused her attention on pets of the homeless population. Her studies of pet ownership among the homeless found that many modify their life activities for fear of losing their pets. Pet ownership prompts them to act responsibly, with many making a deliberate choice not to drink or use drugs, and to avoid contact with substance abusers or those involved in any criminal activity for fear of being separated from their pet. Additionally, many refuse to stay in shelters if their pet is not allowed to stay with them.
Health risks that are associated with pets include:
The European Convention for the Protection of Pet Animals is a 1987 treaty of the Council of Europe – but accession to the treaty is open to all states in the world – to promote the welfare of pet animals and ensure minimum standards for their treatment and protection. It went into effect on 1 May 1992, and as of June 2020, it has been ratified by 24 states.
States, cities, and towns in Western nations commonly enact local ordinances to limit the number or kind of pets a person may keep personally or for business purposes. Prohibited pets may be specific to certain breeds (such as pit bulls or Rottweilers), they may apply to general categories of animals (such as livestock, exotic animals, wild animals, and canid or felid hybrids), or they may simply be based on the animal's size. Additional or different maintenance rules and regulations may also apply. Condominium associations and owners of rental properties also commonly limit or forbid tenants' keeping of pets.
The keeping of animals as pets can cause concerns with regard to animal rights and welfare. Pets have commonly been considered private property, owned by individual persons. However, many legal protections have existed (historically and today) with the intention of safeguarding pets' (and other animals') well-being. Since the year 2000, a small but increasing number of jurisdictions in North America have enacted laws redefining pets' "owners" as "guardians". Intentions have been characterized as ranging from simply changing attitudes and perceptions (but not legal consequences) to working toward legal personhood for pets themselves. Some veterinarians and breeders have opposed these moves. The question of pets' legal status can arise with regard to purchase or adoption, custody, divorce, estate and inheritance, injury, damage, and veterinary malpractice.
In Belgium and the Netherlands, the government publishes white lists and black lists (called 'positive' and 'negative' lists) of animal species designated as appropriate (positive) or inappropriate (negative) to keep as pets. The Dutch Ministry of Economic Affairs and Climate Policy established its first positive list ("positieflijst") on 1 February 2015, covering a set of 100 mammals (including cats, dogs and production animals) deemed appropriate as pets on the recommendations of Wageningen University. Parliamentary debates about such a pet list date back to the 1980s, with continuous disagreements about which species should be included and how the law should be enforced. In January 2017, the white list was expanded to 123 species, while the black list was expanded (with animals like the brown bear and two great kangaroo species) to contain 153 species deemed unfit for keeping as pets, such as the armadillo, the sloth, the European hare and the wild boar.
Pets have a considerable environmental impact, especially in countries where they are common or held in high densities. For instance, the 163 million dogs and cats kept in the United States consume about 20% of the amount of dietary energy that humans do and an estimated 33% of the animal-derived energy. They produce about 30% ± 13%, by mass, as much feces as Americans, and through their diet, constitute about 25–30% of the environmental impacts from animal production in terms of the use of land, water, fossil fuel, phosphate, and biocides. Dog and cat animal product consumption is responsible for the release of up to 64 ± 16 million tons CO2-equivalent methane and nitrous oxide, two powerful greenhouse gasses. Americans are the largest pet owners in the world, but pet ownership in the US has considerable environmental costs.
While many people have kept many different species of animals in captivity over the course of human history, only a relative few have been kept long enough to be considered domesticated. Other types of animals, notably monkeys, have never been domesticated but are still sold and kept as pets. There are also inanimate objects that have been kept as "pets", either as a form of a game or humorously (e.g. the Pet Rock or Chia Pet). Some wild animals are kept as pets, such as tigers, even though this is illegal. There is a market for illegal pets.
Domesticated pets are most common. A "domesticated animal" is a species that has been made fit for a human environment by being consistently kept in captivity and selectively bred over a long enough period of time that it exhibits marked differences in behavior and appearance from its wild relatives. Domestication contrasts with taming, which is simply when an un-domesticated, wild animal has become tolerant of human presence, and perhaps, even enjoys it.
Wild animals are kept as pets. The term “wild” in this context specifically applies to any species of animal which has not undergone a fundamental change in behavior to facilitate a close co-existence with humans. Some species may have been bred in captivity for a considerable length of time, but are still not recognized as domesticated.
Generally, wild animals are recognized as unsuitable to keep as pets, and this practice is completely banned in many places. In other areas, certain species may be kept, and the owner is usually required to obtain a permit. Some consider the practice animal cruelty, as wild animals most often require precise and constant care that is very difficult to provide in captivity. Many large and instinctively aggressive animals are extremely dangerous and have killed their handlers on numerous occasions.
Archaeology suggests that human ownership of dogs as pets may date back to at least 12,000 years ago.
Ancient Greeks and Romans would openly grieve for the loss of a dog, evidenced by inscriptions left on tombstones commemorating their loss. The surviving epitaphs dedicated to horses are more likely to reference a gratitude for the companionship that had come from war horses rather than race horses. The latter may have chiefly been commemorated as a way to further the owner's fame and glory. In Ancient Egypt, dogs and baboons were kept as pets and buried with their owners. Dogs were given names, which is significant as Egyptians considered names to have magical properties.
Throughout the seventeenth and eighteenth centuries, pet keeping in the modern sense gradually became accepted throughout Britain. Initially, aristocrats kept dogs for both companionship and hunting; pet keeping was thus a sign of elitism within society. By the nineteenth century, the rise of the middle class stimulated the development of pet keeping, and it became inscribed within bourgeois culture.
As the popularity of pet-keeping in the modern sense rose during the Victorian era, animals became a fixture within urban culture as commodities and decorative objects. Pet keeping generated a commercial opportunity for entrepreneurs. By the mid-nineteenth century, nearly twenty thousand street vendors in London dealt with live animals. Also, the popularity of animals developed a demand for animal goods such as accessories and guides for pet keeping. Pet care developed into a big business by the end of the nineteenth century.
Profiteers also turned to pet stealing as a means of economic gain. Exploiting the affection that owners had for their pets, professional dog stealers would capture animals and hold them for ransom. The development of dog stealing reflects the increased value of pets. Pets gradually became defined as the property of their owners, and laws were created to punish offenders for such thefts.
Pets and animals also had social and cultural implications throughout the nineteenth century. The categorization of dogs by their breeds reflected the hierarchical, social order of the Victorian era. The pedigree of a dog represented the high status and lineage of their owners and reinforced social stratification. Middle-class owners, however, valued the ability to associate with the upper-class through ownership of their pets. The ability to care for a pet signified respectability and the capability to be self-sufficient. According to Harriet Ritvo, the identification of “elite animal and elite owner was not a confirmation of the owner’s status but a way of redefining it.”
The popularity of dog and pet keeping generated animal fancy. Dog fanciers showed enthusiasm for owning pets, breeding dogs, and showing dogs in various shows. The first dog show took place on 28 June 1859 in Newcastle and focused mostly on sporting and hunting dogs. However, pet owners produced an eagerness to demonstrate their pets as well as have an outlet to compete. Thus, pet animals gradually were included within dog shows. The first large show, which would host one thousand entries, took place in Chelsea in 1863. The Kennel Club was created in 1873 to ensure fairness and organization within dog shows. The development of the "Stud Book" by the Kennel Club defined policies, presented a national registry system of purebred dogs, and essentially institutionalized dog shows.
Pet ownership by animals in the wild, as an analogue to the human phenomenon, has not been observed and is likely non-existent in nature. One group of capuchin monkeys was observed appearing to care for a marmoset, a fellow New World monkey species; however, observations of chimpanzees apparently "playing" with small animals like hyraxes have ended with the chimpanzees killing the animals and tossing the corpses around.
A 2010 study states that human relationships with animals have an exclusive human cognitive component and that pet-keeping is a fundamental and ancient attribute of the human species. Anthropomorphism, or the projection of human feelings, thoughts and attributes on to animals, is a defining feature of human pet-keeping. The study identifies it as the same trait in evolution responsible for domestication and concern for animal welfare. It is estimated to have arisen at least 100,000 years before present (ybp) in "Homo sapiens sapiens".
It is debated whether this redirection of human nurturing behaviour towards non-human animals, in the form of pet-keeping, was maladaptive, due to being biologically costly, or whether it was positively selected for. Two studies suggest that the human ability to domesticate and keep pets came from the same fundamental evolutionary trait and that this trait provided a material benefit in the form of domestication that was sufficiently adaptive to be positively selected for. A 2011 study suggests that the practical functions that some pets provide, such as assisting hunting or removing pests, could've resulted in enough evolutionary advantage to allow for the persistence of this behaviour in humans and outweigh the economic burden held by pets kept as playthings for immediate emotional rewards. Two other studies suggest that the behaviour constitutes an error, side effect or misapplication of the evolved mechanisms responsible for human empathy and theory of mind to cover non-human animals which has not sufficiently impacted its evolutionary advantage in the long run.
Animals in captivity, with the help of caretakers, have been considered to have owned "pets". Examples of this include Koko the gorilla and several pet cats, Tonda the orangutan and a pet cat, and Tarra the elephant and a dog named Bella.
Photograph
A photograph (also known as a photo) is an image created by light falling on a photosensitive surface, usually photographic film or an electronic image sensor, such as a CCD or a CMOS chip. Most photographs are created using a camera, which uses a lens to focus the scene's visible wavelengths of light into a reproduction of what the human eye would see. The process and practice of creating such images is called photography.
The word "photograph" was coined in 1839 by Sir John Herschel and is based on the Greek φῶς (""), meaning "light," and γραφή ("graphê"), meaning "drawing, writing," together meaning "drawing with light."
The first permanent photograph, a contact-exposed copy of an engraving, was made in 1822 using the bitumen-based "heliography" process developed by Nicéphore Niépce. The first photographs of a real-world scene, made using a camera obscura, followed a few years later at Le Gras, France, in 1826, but Niépce's process was not sensitive enough to be practical for that application: a camera exposure lasting for hours or days was required. In 1829 Niépce entered into a partnership with Louis Daguerre and the two collaborated to work out a similar but more sensitive and otherwise improved process.
After Niépce's death in 1833 Daguerre concentrated on silver halide-based alternatives. He exposed a silver-plated copper sheet to iodine vapor, creating a layer of light-sensitive silver iodide; exposed it in the camera for a few minutes; developed the resulting invisible latent image to visibility with mercury fumes; then bathed the plate in a hot salt solution to remove the remaining silver iodide, making the results light-fast. He named this first practical process for making photographs with a camera the daguerreotype, after himself. Its existence was announced to the world on 7 January 1839 but working details were not made public until 19 August. Other inventors soon made improvements which reduced the required exposure time from a few minutes to a few seconds, making portrait photography truly practical and widely popular.
The daguerreotype had shortcomings, notably the fragility of the mirror-like image surface and the particular viewing conditions required to see the image properly. Each was a unique opaque positive that could only be duplicated by copying it with a camera. Inventors set about working out improved processes that would be more practical. By the end of the 1850s the daguerreotype had been replaced by the less expensive and more easily viewed ambrotype and tintype, which made use of the recently introduced collodion process. Glass plate collodion negatives used to make prints on albumen paper soon became the preferred photographic method and held that position for many years, even after the introduction of the more convenient gelatin process in 1871. Refinements of the gelatin process have remained the primary black-and-white photographic process to this day, differing primarily in the sensitivity of the emulsion and the support material used, which was originally glass, then a variety of flexible plastic films, along with various types of paper for the final prints.
Color photography is almost as old as black-and-white, with early experiments including John Herschel's Anthotype prints in 1842, the pioneering work of Louis Ducos du Hauron in the 1860s, and the Lippmann process unveiled in 1891, but for many years color photography remained little more than a laboratory curiosity. It first became a widespread commercial reality with the introduction of Autochrome plates in 1907, but the plates were very expensive and not suitable for casual snapshot-taking with hand-held cameras. The mid-1930s saw the introduction of Kodachrome and Agfacolor Neu, the first easy-to-use color films of the modern multi-layer chromogenic type. These early processes produced transparencies for use in slide projectors and viewing devices, but color prints became increasingly popular after the introduction of chromogenic color print paper in the 1940s. The needs of the motion picture industry generated a number of special processes and systems, perhaps the best-known being the now-obsolete three-strip Technicolor process.
Non-digital photographs are produced with a two-step chemical process. In the two-step process the light-sensitive film captures a "negative" image (colors and lights/darks are inverted). To produce a "positive" image, the negative is most commonly transferred ('printed') onto photographic paper. Printing the negative onto transparent film stock is used to manufacture motion picture films.
Alternatively, the film is processed to invert the "negative" image, yielding positive transparencies. Such positive images are usually mounted in frames, called slides. Before recent advances in digital photography, transparencies were widely used by professionals because of their sharpness and accuracy of color rendition. Most photographs published in magazines were taken on color transparency film.
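The negative/positive relationship described above can be illustrated digitally: each pixel's brightness in the negative is the complement of the scene's, and inverting twice recovers the original. This is only an illustrative sketch of the inversion concept, not the chemistry itself; the 8-bit grayscale representation is an assumption for the example.

```python
def invert(image):
    """Return the photographic 'negative' of an 8-bit grayscale image,
    represented as a list of rows of pixel values (0 = black, 255 = white)."""
    return [[255 - px for px in row] for row in image]

scene = [
    [0, 128, 255],   # black, mid-gray, white
    [64, 192, 32],
]

negative = invert(scene)      # lights and darks swap, as on exposed film
positive = invert(negative)   # 'printing' the negative restores the scene

print(negative[0])            # [255, 127, 0]
assert positive == scene      # the two-step process recovers the original
```

Reversal (slide) film corresponds to performing this inversion during processing, so the film itself ends up holding the positive image.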
Originally, all photographs were monochromatic or hand-painted in color. Although methods for developing color photos were available as early as 1861, they did not become widely available until the 1940s or 1950s, and even so, until the 1960s most photographs were taken in black and white. Since then, color photography has dominated popular photography, although black and white is still used, being easier to develop than color.
Panoramic format images can be taken with cameras like the Hasselblad Xpan on standard film. Since the 1990s, panoramic photos have been available on the Advanced Photo System (APS) film. APS was developed by several of the major film manufacturers to provide a film with different formats and computerized options available. APS panoramas, however, were created using a mask in panorama-capable cameras, a far less satisfactory approach than a true panoramic camera, which achieves its effect through a wider film format. APS has become less popular and has been discontinued.
The advent of the microcomputer and digital photography has led to the rise of digital prints. These prints are created from stored graphic formats such as JPEG, TIFF, and RAW. The types of printers used include inkjet printers, dye-sublimation printers, laser printers, and thermal printers. Inkjet prints are sometimes given the coined name "Giclée".
The Web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today popular sites such as Flickr, PhotoBucket and 500px are used by millions of people to share their pictures.
Ideal photograph storage involves placing each photo in an individual folder constructed from buffered, or acid-free, paper. Buffered paper folders are especially recommended in cases when a photograph was previously mounted onto poor quality material or using an adhesive that will lead to even more acid creation. Store photographs measuring 8x10 inches or smaller vertically along the longer edge of the photo in the buffered paper folder, within a larger archival box, and label each folder with relevant information to identify it. The rigid nature of the folder protects the photo from slumping or creasing, as long as the box is not packed too tightly or underfilled. Store larger or brittle photos flat within archival boxes, stacked with other materials of comparable size.
The most stable of the plastics used in photo preservation, polyester, does not generate any harmful chemical elements, but neither does it have any capability to absorb acids generated by the photograph itself. Polyester sleeves and encapsulation have been praised for their ability to protect the photograph from humidity and environmental pollution, slowing the reaction between the item and the atmosphere. This is true; however, the polyester just as frequently traps these elements next to the material it is intended to protect. This is especially risky in a storage environment that experiences drastic fluctuations in humidity or temperature, leading to ferrotyping, or sticking of the photograph to the plastic. Photographs sleeved or encapsulated in polyester cannot be stored vertically in boxes because they will slide down next to each other within the box, bending and folding, nor can the archivist write directly onto the polyester to identify the photograph. Therefore, it is necessary either to stack polyester-protected photographs horizontally within a box, or to bind them in a three-ring binder. Stacking the photos horizontally within a flat box will greatly reduce ease of access, and binders leave three sides of the photo exposed to the effects of light and do not support the photograph evenly on both sides, leading to slumping and bending within the binder. The plastic used for enclosures has been manufactured to be as frictionless as possible to prevent scratching photos during insertion into the sleeves. Unfortunately, the slippery nature of the enclosure generates a build-up of static electricity, which attracts dust and lint particles. The static can attract the dust to the inside of the sleeve as well, where it can scratch the photograph. Likewise, the components that aid in insertion of the photo, referred to as slip agents, can break down and transfer from the plastic to the photograph, where they deposit as an oily film, attracting further lint and dust.
At this time, there is no test to evaluate the long-term effects of these components on photographs. In addition, the plastic sleeves can develop kinks or creases in the surface, which will scratch away at the emulsion during handling.
It is best to leave photographs lying flat on the table when viewing them. Do not pick it up from a corner, or even from two sides and hold it at eye level. Every time the photograph bends, even a little, this can break down the emulsion. The very nature of enclosing a photograph in plastic encourages users to pick it up; users tend to handle plastic enclosed photographs less gently than non-enclosed photographs, simply because they feel the plastic enclosure makes the photo impervious to all mishandling. As long as a photo is in its folder, there is no need to touch it; simply remove the folder from the box, lay it flat on the table, and open the folder. If for some reason the researcher or archivist does need to handle the actual photo, perhaps to examine the verso for writing, he or she can use gloves if there appears to be a risk from oils or dirt on the hands.
Because daguerreotypes were rendered on a mirrored surface, many spiritualists also became practitioners of the new art form. Spiritualists would claim that the human image on the mirrored surface was akin to looking into one's soul. The spiritualists also believed that it would open their souls and let demons in. Among Muslims, it is makruh (disliked) to perform salah (worship) in a place decorated with photographs. Photography and darkroom anomalies and artifacts sometimes lead viewers to believe that spirits or demons have been captured in photos.
The production or distribution of certain types of photograph has been forbidden under modern laws, such as those of government buildings, highly classified regions, private property, copyrighted works, children's genitalia, child pornography and less commonly pornography overall. These laws vary greatly between jurisdictions. | https://en.wikipedia.org/wiki?curid=25080 |
Paradigm shift
A paradigm shift, a concept identified by the American physicist and philosopher Thomas Kuhn, is a fundamental change in the basic concepts and experimental practices of a scientific discipline. Kuhn presented his notion of a paradigm shift in his influential book "The Structure of Scientific Revolutions" (1962).
Kuhn contrasts paradigm shifts, which characterize a scientific revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.
Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his "Critique of Pure Reason" (1787). Kant used the phrase "revolution of the way of thinking" to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars.
In his 1962 book "The Structure of Scientific Revolutions", Kuhn explains the development of paradigm shifts in science as a process of four stages.
A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is "always better", not just different.
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are "incommensurable". This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published the highly regarded essay "On the Very Idea of a Conceptual Scheme" ("Proceedings and Addresses of the American Philosophical Association", Vol. 47, (1973–1974), pp. 5–20) in 1974 arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour (see for example John Hassard, "Sociology and Organization Theory: Positivism, Paradigm and Postmodernity", Cambridge University Press, 1993).
Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system.
In "The Structure of Scientific Revolutions", Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.
Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.
Some of the "classical cases" of Kuhnian paradigm shifts in science are:
In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences.
More recently, paradigm shifts have also become recognisable in the applied sciences.
The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
The term has been criticized in several articles and books as abused and overused to the point of becoming meaningless, for example by Robert Fulford in the "Globe and Mail" (June 5, 1999; http://www.robertfulford.com/Paradigm.html). In his book "Mind The Gaffe", author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase.
In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the American philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive.
Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally is not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which much of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before.
He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares. | https://en.wikipedia.org/wiki?curid=25081 |
Pliocene
The Pliocene (also Pleiocene) Epoch is the epoch in the geologic timescale that extends from 5.333 million to 2.58 million years BP. It is the second and youngest epoch of the Neogene Period in the Cenozoic Era. The Pliocene follows the Miocene Epoch and is followed by the Pleistocene Epoch. Prior to the 2009 revision of the geologic time scale, which placed the four most recent major glaciations entirely within the Pleistocene, the Pliocene also included the Gelasian stage, which lasted from 2.588 to 1.806 million years ago, and is now included in the Pleistocene.
As with other older geologic periods, the geological strata that define the start and end are well identified but the exact dates of the start and end of the epoch are slightly uncertain. The boundaries defining the Pliocene are not set at an easily identified worldwide event but rather at regional boundaries between the warmer Miocene and the relatively cooler Pliocene. The upper boundary was set at the start of the Pleistocene glaciations.
Charles Lyell (later Sir Charles) gave the Pliocene its name in "Principles of Geology" (volume 3, 1833). | https://en.wikipedia.org/wiki?curid=23291 |
Pharaoh
Pharaoh ("Pǝrro") is the common title of the monarchs of ancient Egypt from the First Dynasty (c. 3150 BCE) until the annexation of Egypt by the Roman Empire in 30 BCE, although the actual term "Pharaoh" was not used contemporaneously for a ruler until Merneptah, c. 1200 BCE. In the early dynastic period, ancient Egyptian kings had up to three titles: the Horus name, the Sedge and Bee ("nswt-bjtj") name, and the Two Ladies ("nbtj") name. The Golden Horus name and the nomen and prenomen titles were added later.
In Egyptian society, religion was central to everyday life. One of the roles of the pharaoh was as an intermediary between the gods and the people. The pharaoh thus deputised for the gods; his role was both as civil and religious administrator. He owned all of the land in Egypt, enacted laws, collected taxes, and defended Egypt from invaders as the commander-in-chief of the army. Religiously, the pharaoh officiated over religious ceremonies and chose the sites of new temples. He was responsible for maintaining Maat (mꜣꜥt), or cosmic order, balance, and justice, and part of this included going to war when necessary to defend the country or attacking others when it was believed that this would contribute to Maat, such as to obtain resources.
During the early days prior to the unification of Upper and Lower Egypt, the Deshret or the "Red Crown", was a representation of the kingdom of Lower Egypt, while the Hedjet, the "White Crown", was worn by the kings of the kingdom of Upper Egypt. After the unification of both kingdoms into one united Egypt, the Pschent, the combination of both the red and white crowns was the official crown of kings. With time new headdresses were introduced during different dynasties like the Khat, Nemes, Atef, Hemhem crown, and Khepresh. At times, it was depicted that a combination of these headdresses or crowns would be worn together.
The word "pharaoh" ultimately derives from the Egyptian compound ', * "great house", written with the two biliteral hieroglyphs ' "house" and "" "column", here meaning "great" or "high". It was used only in larger phrases such as "smr pr-ꜥꜣ" "Courtier of the High House", with specific reference to the buildings of the court or palace. From the Twelfth Dynasty onward, the word appears in a wish formula "Great House, May it Live, Prosper, and be in Health", but again only with reference to the royal palace and not the person.
Sometime during the New Kingdom, the era following the Second Intermediate Period, "pharaoh" became the form of address for a person who was king. The earliest confirmed instance where "pr ꜥꜣ" is used specifically to address the ruler is in a letter to Akhenaten (reigned c. 1353–1336 BCE) which is addressed to "Great House, L, W, H, the Lord". However, there is a possibility that the title "pr ꜥꜣ" was applied to Thutmose III (c. 1479–1425 BCE), depending on whether an inscription on the Temple of Armant can be confirmed to refer to that king. During the Eighteenth Dynasty (16th to 14th centuries BCE) the title pharaoh was employed as a reverential designation of the ruler. About the late Twenty-first Dynasty (10th century BCE), however, instead of being used alone as before, it began to be added to the other titles before the ruler's name, and from the Twenty-Fifth Dynasty (eighth to seventh centuries BCE) it was, at least in ordinary usage, the only epithet prefixed to the royal appellative.
From the nineteenth dynasty onward "pr-ꜥꜣ" on its own was used as regularly as ḥm, "Majesty". The term, therefore, evolved from a word specifically referring to a building to a respectful designation for the ruler, particularly by the Twenty-Second Dynasty and Twenty-third Dynasty.
For instance, the first dated appearance of the title pharaoh being attached to a ruler's name occurs in Year 17 of Siamun on a fragment from the Karnak Priestly Annals. Here, an induction of an individual to the Amun priesthood is dated specifically to the reign of Pharaoh Siamun. This new practice was continued under his successor Psusennes II and the Twenty-second Dynasty kings. For instance, the Large Dakhla stela is specifically dated to Year 5 of king "Pharaoh Shoshenq, beloved of Amun", whom all Egyptologists concur was Shoshenq I—the founder of the Twenty-second Dynasty—including Alan Gardiner in his original 1933 publication of this stela. Shoshenq I was the second successor of Siamun. Meanwhile, the old custom of referring to the sovereign simply as "pr-ˤ3" continued in traditional Egyptian narratives.
By this time, the pronunciation of the Late Egyptian word had shifted, and it was from this later form that Herodotus derived the name of one of the Egyptian kings. The title also occurs in the Hebrew Bible; from the Hebrew form came the Septuagint rendering and then Late Latin "pharaō", both "-n" stem nouns. The Qur'an likewise spells it "firʿawn" with "n" (here, always referring to the one evil king in the Book of Exodus story, by contrast to the good king in surah Yusuf's story). The Arabic form combines the original ayin from Egyptian with the "-n" ending from Greek.
In English, it was at first spelled "Pharao", but the translators of the King James Bible revived "Pharaoh" with "h" from the Hebrew. Meanwhile, in Egypt itself, the word evolved into Sahidic Coptic "pərro" and then "ərro" by mistaking "p-" as the definite article "the" (from ancient Egyptian "pꜣ").
Other notable epithets are "nswt", translated to "king"; ḥm, "Majesty"; "jty" for "monarch or sovereign"; "nb" for "lord"; and "ḥqꜣ" for "ruler".
Sceptres and staves were a general sign of authority in ancient Egypt. One of the earliest royal scepters was discovered in the tomb of Khasekhemwy in Abydos. Kings were also known to carry a staff, and Pharaoh Anedjib is shown on stone vessels carrying a so-called "mks"-staff. The scepter with the longest history seems to be the "heqa"-sceptre, sometimes described as the shepherd's crook. The earliest examples of this piece of regalia dates to prehistoric Egypt. A scepter was found in a tomb at Abydos that dates to Naqada III.
Another scepter associated with the king is the "was"-sceptre. This is a long staff mounted with an animal head. The earliest known depictions of the "was"-scepter date to the First Dynasty. The "was"-scepter is shown in the hands of both kings and deities.
The flail later was closely related to the "heqa"-scepter (the crook and flail), but in early representations the king was also depicted solely with the flail, as shown on a late pre-dynastic knife handle which is now in the Metropolitan Museum, and on the Narmer Macehead.
The earliest evidence known of the Uraeus—a rearing cobra—is from the reign of Den from the First Dynasty. The cobra supposedly protected the pharaoh by spitting fire at its enemies.
The red crown of Lower Egypt, the Deshret crown, dates back to pre-dynastic times and symbolised chief ruler. A red crown has been found on a pottery shard from Naqada, and later, Narmer is shown wearing the red crown on both the Narmer Macehead and the Narmer Palette.
The white crown of Upper Egypt, the Hedjet, was worn in the Predynastic Period by Scorpion II, and, later, by Narmer.
This is the combination of the Deshret and Hedjet crowns into a double crown, called the Pschent crown. It is first documented in the middle of the first dynasty. The earliest depiction may date to the reign of Djet, and is otherwise surely attested during the reign of Den.
The "khat" headdress consists of a kind of "kerchief" whose end is tied similarly to a ponytail. The earliest depictions of the "khat" headdress comes from the reign of Den, but is not found again until the reign of Djoser.
The Nemes headdress dates from the time of Djoser. It is the most common type of crown that has been depicted throughout Pharaonic Egypt. Any other type of crown, apart from the Khat headdress, has been commonly depicted on top of the Nemes. The statue from his Serdab in Saqqara shows the king wearing the "nemes" headdress.
Osiris is shown to wear the Atef crown, which is an elaborate Hedjet with feathers and disks. Depictions of Pharaohs wearing the Atef crown originate from the Old Kingdom.
The Hemhem crown is usually depicted on top of the Nemes, Pschent, or Deshret crowns. It is an ornate triple Atef with corkscrew sheep horns and usually two uraei. Depictions of this crown begin during the early Eighteenth Dynasty.
Also called the blue crown, the Khepresh crown has been depicted in art since the New Kingdom. It is often depicted being worn in battle, but it was also frequently worn during ceremonies. It used to be called a war crown by many, but modern historians refrain from defining it thus.
Egyptologist Bob Brier has noted that despite their widespread depiction in royal portraits, no ancient Egyptian crown has ever been discovered. Tutankhamun's tomb, discovered largely intact, did contain such regalia as his crook and flail, but no crown was found among the funerary equipment. Diadems have been discovered.
It is presumed that crowns would have been believed to have magical properties. Brier's speculation is that crowns were religious or state items, so a dead pharaoh likely could not retain a crown as a personal possession. The crowns may have been passed along to the successor.
During the early dynastic period kings had three titles. The Horus name is the oldest and dates to the late pre-dynastic period. The "Nesu Bity" name was added during the first dynasty. The "Nebty" name was first introduced toward the end of the first dynasty. The Golden falcon ("bik-nbw") name is not well understood. The prenomen and nomen were introduced later and are traditionally enclosed in a cartouche. By the Middle Kingdom, the official titulary of the ruler consisted of five names: Horus, nebty, golden Horus, nomen, and prenomen; for some rulers, only one or two of them may be known.
The "Nesu Bity" name, also known as Prenomen, was one of the new developments from the reign of Den. The name would follow the glyphs for the "Sedge and the Bee". The title is usually translated as king of Upper and Lower Egypt. The "nsw bity" name may have been the birth name of the king. It was often the name by which kings were recorded in the later annals and king lists.
The Horus name was adopted by the king, when taking the throne. The name was written within a square frame representing the palace, named a serekh. The earliest known example of a serekh dates to the reign of king Ka, before the first dynasty. The Horus name of several early kings expresses a relationship with Horus. Aha refers to "Horus the fighter", Djer refers to "Horus the strong", etc. Later kings express ideals of kingship in their Horus names. Khasekhemwy refers to "Horus: the two powers are at peace", while Nebra refers to "Horus, Lord of the Sun".
The earliest example of a "nebty" name comes from the reign of king Aha from the first dynasty. The title links the king with the goddesses of Upper and Lower Egypt Nekhbet and Wadjet. The title is preceded by the vulture (Nekhbet) and the cobra (Wadjet) standing on a basket (the neb sign).
The Golden Horus or Golden Falcon name was preceded by a falcon on a gold or "nbw" sign. The title may have represented the divine status of the king. The Horus associated with gold may be referring to the idea that the bodies of the deities were made of gold and the pyramids and obelisks are representations of (golden) sun-rays. The gold sign may also be a reference to Nubt, the city of Set. This would suggest that the iconography represents Horus conquering Set.
The prenomen and nomen were contained in a cartouche. The prenomen often followed the King of Upper and Lower Egypt ("nsw bity") or Lord of the Two Lands ("nebtawy") title. The prenomen often incorporated the name of Re. The nomen often followed the title Son of Re ("sa-ra") or the title Lord of Appearances ("neb-kha"). | https://en.wikipedia.org/wiki?curid=23294 |
Printing press
A printing press is a mechanical device for applying pressure to an inked surface resting upon a print medium (such as paper or cloth), thereby transferring the ink. It marked a dramatic improvement on earlier printing methods in which the cloth, paper or other medium was brushed or rubbed repeatedly to achieve the transfer of ink, and accelerated the process. Typically used for texts, the invention and global spread of the printing press was one of the most influential events in the second millennium.
In Germany, around 1440, goldsmith Johannes Gutenberg invented the printing press, which started the Printing Revolution. Modelled on the design of existing screw presses, a single Renaissance printing press could produce up to 3,600 pages per workday, compared to forty by hand-printing and a few by hand-copying. Gutenberg's newly devised hand mould made possible the precise and rapid creation of metal movable type in large quantities. His two inventions, the hand mould and the printing press, together drastically reduced the cost of printing books and other documents in Europe, particularly for shorter print runs.
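The productivity gap quoted above can be made concrete with simple arithmetic. This sketch uses the figures as given in the text; the value for hand-copying is an assumption, since the source says only "a few" pages per workday.

```python
# Pages per workday, per the figures quoted above.
press_pages = 3600        # one Renaissance printing press
hand_printing_pages = 40  # earlier hand-printing methods
hand_copying_pages = 3    # hand-copying by a scribe; "a few" assumed as 3

# One press did the daily work of roughly this many hand workers:
print(press_pages // hand_printing_pages)  # 90 hand-printers
print(press_pages // hand_copying_pages)   # 1200 scribes
```

The two-orders-of-magnitude gap over copying by hand is what made the drastic reduction in book prices, described below, possible.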
From Mainz the printing press spread within several decades to over two hundred cities in a dozen European countries. By 1500, printing presses in operation throughout Western Europe had already produced more than twenty million volumes. In the 16th century, with presses spreading further afield, their output rose tenfold to an estimated 150 to 200 million copies. The operation of a press became synonymous with the enterprise of printing, and lent its name to a new medium of expression and communication, "the press".
In Renaissance Europe, the arrival of mechanical movable type printing introduced the era of mass communication, which permanently altered the structure of society. The relatively unrestricted circulation of information and (revolutionary) ideas transcended borders, captured the masses in the Reformation and threatened the power of political and religious authorities. The sharp increase in literacy broke the monopoly of the literate elite on education and learning and bolstered the emerging middle class. Across Europe, the increasing cultural self-awareness of its peoples led to the rise of proto-nationalism, and accelerated by the development of European vernacular languages, to the detriment of Latin's status as lingua franca. In the 19th century, the replacement of the hand-operated Gutenberg-style press by steam-powered rotary presses allowed printing on an industrial scale.
The rapid economic and socio-cultural development of late medieval society in Europe created favorable intellectual and technological conditions for Gutenberg's improved version of the printing press: the entrepreneurial spirit of emerging capitalism increasingly made its impact on medieval modes of production, fostering economic thinking and improving the efficiency of traditional work-processes. The sharp rise of medieval learning and literacy amongst the middle class led to an increased demand for books which the time-consuming hand-copying method fell far short of accommodating.
Technologies preceding the press that led to the press's invention included: manufacturing of paper, development of ink, woodblock printing, and distribution of eyeglasses. At the same time, a number of medieval products and technological processes had reached a level of maturity which allowed their potential use for printing purposes. Gutenberg took up these far-flung strands, combined them into one complete and functioning system, and perfected the printing process through all its stages by adding a number of inventions and innovations of his own:
The screw press, which allowed direct pressure to be applied on a flat plane, was already of great antiquity in Gutenberg's time and was used for a wide range of tasks. Introduced in the 1st century AD by the Romans, it was commonly employed in agricultural production for pressing wine grapes and olives (for olive oil), both of which formed an integral part of the Mediterranean and medieval diet. The device was also used from very early on in urban contexts as a cloth press for printing patterns. Gutenberg may have also been inspired by the paper presses which had spread through the German lands since the late 14th century and which worked on the same mechanical principles.
During the Islamic Golden Age, Arab Muslims printed texts, including passages from the Qur'an. They embraced the Chinese craft of papermaking, developed it, and adopted it widely in the Muslim world, which led to a major increase in the production of manuscript texts. In Egypt during the Fatimid era, the printing technique was adopted to reproduce texts on paper strips, which were supplied in multiple copies to meet demand.
Gutenberg adopted the basic design, thereby mechanizing the printing process. Printing, however, put a demand on the machine quite different from pressing. Gutenberg adapted the construction so that the pressing power exerted by the platen on the paper was now applied both evenly and with the required sudden elasticity. To speed up the printing process, he introduced a movable undertable with a plane surface on which the sheets could be swiftly changed.
The concept of movable type existed prior to 15th-century Europe; sporadic evidence that the typographical principle, the idea of creating a text by reusing individual characters, was known had been cropping up since the 12th century and possibly before (the oldest known application dating back as far as the Phaistos disc). The known examples range from movable type printing in China during the Song dynasty and in Korea during the Goryeo dynasty, where metal movable-type printing technology was developed in 1234, to Germany (Prüfening inscription), England (letter tiles) and Italy (Altarpiece of Pellegrino II). However, the various techniques employed (imprinting, punching and assembling individual letters) did not have the refinement and efficiency needed to become widely accepted. Tsuen-Hsuin and Needham, and Briggs and Burke, suggest that movable type printing in China and Korea was rarely employed. Ibrahim Muteferrika of the Ottoman Empire ran a printing press with movable Arabic type.
Gutenberg greatly improved the process by treating typesetting and printing as two separate work steps. A goldsmith by profession, he created his type pieces from a lead-based alloy which suited printing purposes so well that it is still used today. The mass production of metal letters was achieved by his key invention of a special hand mould, the matrix. The Latin alphabet proved to be an enormous advantage in the process because, in contrast to logographic writing systems, it allowed the type-setter to represent any text with a theoretical minimum of only around two dozen different letters.
Another factor conducive to printing arose from the book existing in the format of the codex, which had originated in the Roman period. Considered the most important advance in the history of the book prior to printing itself, the codex had completely replaced the ancient scroll by the onset of the Middle Ages (around AD 500). The codex holds considerable practical advantages over the scroll format; it is more convenient to read (by turning pages), more compact, and less costly, and both recto and verso sides could be used for writing or printing, unlike the scroll.
A fourth development was the early success of medieval papermakers at mechanizing paper manufacture. The introduction of water-powered paper mills, the first certain evidence of which dates to 1282, allowed for a massive expansion of production and replaced the laborious handcraft characteristic of both Chinese and Muslim papermaking. Papermaking centres began to multiply in the late 13th century in Italy, reducing the price of paper to one sixth that of parchment, with further falls to follow; papermaking centres reached Germany a century later.
Despite this it appears that the final breakthrough of paper depended just as much on the rapid spread of movable-type printing. It is notable that codices of parchment, which in terms of quality is superior to any other writing material, still had a substantial share in Gutenberg's edition of the 42-line Bible. After much experimentation, Gutenberg managed to overcome the difficulties which traditional water-based inks caused by soaking the paper, and found the formula for an oil-based ink suitable for high-quality printing with metal type.
A printing press, in its classical form, is a standing mechanism. The small individual metal letters known as type would be set up by a compositor into the desired lines of text. Several lines of text would be arranged at once and placed in a wooden frame known as a galley. Once the correct number of pages were composed, the galleys would be laid face up in a frame known as a forme, which itself was placed onto a flat stone, 'bed', or 'coffin'. The text was inked using two balls, pads mounted on handles. The balls were made of dog-skin leather, which has no pores, and stuffed with sheep's wool; once inked, they were used to apply the ink to the text evenly. One damp piece of paper was then taken from a heap of paper and placed on the tympan; the paper was damp because this lets the type 'bite' into the paper better. Small pins held the paper in place. The paper was then held between a frisket and tympan (two frames covered with paper or parchment).
These are folded down so that the paper lies on the surface of the inked type. The bed is rolled under the platen using a windlass mechanism, turned by a small rotating handle called the 'rounce', and the impression is made with a screw that transmits pressure through the platen. The screw is turned by a long handle attached to it, known as the bar or 'Devil's Tail'. In a well-set-up press, the springiness of the paper, frisket, and tympan caused the bar to spring back and raise the platen; the windlass was then turned again to move the bed back to its original position, the tympan and frisket raised and opened, and the printed sheet removed. Such presses were always worked by hand. After around 1800, iron presses were developed, some of which could be operated by steam power.
The function of the press was described by William Skeen in 1872.
Johannes Gutenberg's work on the printing press began in approximately 1436 when he partnered with Andreas Dritzehn, a man he had previously instructed in gem-cutting, and Andreas Heilmann, owner of a paper mill. However, it was not until a 1439 lawsuit against Gutenberg that an official record existed; witnesses' testimony discussed Gutenberg's types, an inventory of metals (including lead), and his type moulds.
Having previously worked as a professional goldsmith, Gutenberg made skillful use of the knowledge of metals he had learned as a craftsman. He was the first to make type from an alloy of lead, tin, and antimony, which was critical for producing durable type that produced high-quality printed books and proved to be much better suited for printing than all other known materials. To create these lead types, Gutenberg used what is considered one of his most ingenious inventions, a special matrix enabling the quick and precise molding of new type blocks from a uniform template. His type case is estimated to have contained around 290 separate letter boxes, most of which were required for special characters, ligatures, punctuation marks, and so forth.
Gutenberg is also credited with the introduction of an oil-based ink which was more durable than the previously used water-based inks. As printing material he used both paper and vellum (high-quality parchment). In the Gutenberg Bible, Gutenberg made a trial of colour printing for a few of the page headings, present only in some copies. A later work, the Mainz Psalter of 1457, presumably designed by Gutenberg but published under the imprint of his successors Johann Fust and Peter Schöffer, had elaborate red and blue printed initials.
The Printing Revolution occurred when the spread of the printing press facilitated the wide circulation of information and ideas, acting as an "agent of change" through the societies that it reached (Eisenstein 1980).
The invention of mechanical movable type printing led to a huge increase of printing activities across Europe within only a few decades. From a single print shop in Mainz, Germany, printing had spread to no less than around 270 cities in Central, Western and Eastern Europe by the end of the 15th century. As early as 1480, there were printers active in 110 different places in Germany, Italy, France, Spain, the Netherlands, Belgium, Switzerland, England, Bohemia and Poland. From that time on, it is assumed that "the printed book was in universal use in Europe".
In Italy, a center of early printing, print shops had been established in 77 cities and towns by 1500. At the end of the following century, 151 locations in Italy had seen at one time printing activities, with a total of nearly three thousand printers known to be active. Despite this proliferation, printing centres soon emerged; thus, one third of the Italian printers published in Venice.
By 1500, the printing presses in operation throughout Western Europe had already produced more than twenty million copies. In the following century, their output rose tenfold to an estimated 150 to 200 million copies.
European printing presses of around 1600 were capable of producing between 1,500 and 3,600 impressions per workday. By comparison, Far Eastern printing, where the back of the paper was manually rubbed to the page, did not exceed an output of forty pages per day.
Of Erasmus's work, at least 750,000 copies were sold during his lifetime alone (1469–1536). In the early days of the Reformation, the revolutionary potential of bulk printing took princes and papacy alike by surprise. In the period from 1518 to 1524, the publication of books in Germany alone skyrocketed sevenfold; between 1518 and 1520, Luther's tracts were distributed in 300,000 printed copies.
The rapidity of typographical text production, as well as the sharp fall in unit costs, led to the issuing of the first newspapers (see "Relation") which opened up an entirely new field for conveying up-to-date information to the public.
Incunabula are surviving print works produced before the 16th century; they are collected by many libraries in Europe and North America.
The printing press was also a factor in the establishment of a community of scientists who could easily communicate their discoveries through the establishment of widely disseminated scholarly journals, helping to bring on the scientific revolution. Because of the printing press, authorship became more meaningful and profitable. It was suddenly important who had said or written what, and what the precise formulation and time of composition was. This allowed the exact citing of references, producing the rule, "One Author, one work (title), one piece of information" (Giesecke, 1989; 325). Before, the author was less important, since a copy of Aristotle made in Paris would not be exactly identical to one made in Bologna. For many works prior to the printing press, the name of the author has been entirely lost.
Because the printing process ensured that the same information fell on the same pages, page numbering, tables of contents, and indices became common, though they previously had not been unknown. The process of reading also changed, gradually moving over several centuries from oral readings to silent, private reading. Over the next 200 years, the wider availability of printed materials led to a dramatic rise in the adult literacy rate throughout Europe.
The printing press was an important step towards the democratization of knowledge. Within 50 or 60 years of the invention of the printing press, the entire classical canon had been reprinted and widely promulgated throughout Europe (Eisenstein, 1969; 52). More people had access to knowledge both new and old, more people could discuss these works. Book production became more commercialised, and the first copyright laws were passed. On the other hand, the printing press was criticized for allowing the dissemination of information which may have been incorrect.
A second outgrowth of this popularization of knowledge was the decline of Latin as the language of most published works, to be replaced by the vernacular language of each area, increasing the variety of published works. The printed word also helped to unify and standardize the spelling and syntax of these vernaculars, in effect 'decreasing' their variability. This rise in importance of national languages as opposed to pan-European Latin is cited as one of the causes of the rise of nationalism in Europe.
A third consequence of popularization of printing was on the economy. The printing press was associated with higher levels of city growth. The publication of trade related manuals and books teaching techniques like double-entry bookkeeping increased the reliability of trade and led to the decline of merchant guilds and the rise of individual traders.
At the dawn of the Industrial Revolution, the mechanics of the hand-operated Gutenberg-style press were still essentially unchanged, although new materials in its construction, amongst other innovations, had gradually improved its printing efficiency. By 1800, Lord Stanhope had built a press completely from cast iron which reduced the force required by 90%, while doubling the size of the printed area. With a capacity of 480 pages per hour, the Stanhope press doubled the output of the old style press. Nonetheless, the limitations inherent to the traditional method of printing became obvious.
Two ideas altered the design of the printing press radically: first, the use of steam power to run the machinery, and second, the replacement of the printing flatbed with the rotary motion of cylinders. Both elements were first successfully implemented by the German printer Friedrich Koenig in a series of press designs devised between 1802 and 1818. Having moved to London in 1804, Koenig soon met Thomas Bensley and secured financial support for his project in 1807. Koenig's steam press, patented in 1810, was "much like a hand press connected to a steam engine." The first production trial of this model occurred in April 1811. He produced his machine with assistance from the German engineer Andreas Friedrich Bauer.
Koenig and Bauer sold two of their first models to "The Times" in London in 1814, capable of 1,100 impressions per hour. The first edition so printed was on 28 November 1814. They went on to perfect the early model so that it could print on both sides of a sheet at once. This began the long process of making newspapers available to a mass audience (which in turn helped spread literacy), and from the 1820s changed the nature of book production, forcing a greater standardization in titles and other metadata. Their company Koenig & Bauer AG is still one of the world's largest manufacturers of printing presses today.
The steam-powered rotary printing press, invented in 1843 in the United States by Richard M. Hoe, ultimately allowed millions of copies of a page to be printed in a single day. Mass production of printed works flourished after the transition to rolled paper, as continuous feed allowed the presses to run at a much faster pace. Hoe's original design operated at up to 2,000 revolutions per hour, with each revolution depositing four page images, giving the press a throughput of 8,000 pages per hour. By 1891, "The New York World" and "Philadelphia Item" were operating presses producing either 90,000 four-page sheets per hour or 48,000 eight-page sheets.
Also, in the middle of the 19th century, there was a separate development of jobbing presses, small presses capable of printing small-format pieces such as billheads, letterheads, business cards, and envelopes. Jobbing presses were capable of quick set-up (average setup time for a small job was under 15 minutes) and quick production (even on treadle-powered jobbing presses it was considered normal to get 1,000 impressions per hour [iph] with one pressman, with speeds of 1,500 iph often attained on simple envelope work). Job printing emerged as a reasonably cost-effective duplicating solution for commerce at this time.
The table lists the maximum number of pages which the various press designs could print "per hour".
Pat Rafter
Patrick Michael Rafter (born 28 December 1972) is an Australian former professional tennis player. He reached the Association of Tennis Professionals (ATP) world No. 1 singles ranking on 26 July 1999. His career highlights include consecutive US Open titles in 1997 and 1998, consecutive runner-up appearances at Wimbledon in 2000 and 2001, winning the 1999 Australian Open men's doubles tournament alongside Jonas Björkman, and winning two singles and two doubles ATP Masters titles.
He became the first man in the Open Era to win the Canada Masters, Cincinnati Masters and the US Open in the same year, which he achieved in 1998; this achievement has been dubbed the American Summer Slam. To date, only two players have matched this feat: Andy Roddick in 2003 and Rafael Nadal in 2013. Rafter is the third man in the Open Era to reach the semifinals or better of every Grand Slam tournament in both singles and doubles, after Rod Laver and Stefan Edberg, and remains the last man to date to accomplish this. Rafter is also the only player to remain undefeated against Roger Federer with at least three meetings, and the only player with a winning record against the 20-time Grand Slam winner on all three main tennis surfaces: hard, clay and grass.
Rafter turned professional in 1991. During the course of his career, he twice won the men's singles title at the US Open and was twice the runner-up at Wimbledon. He was known for his serve-and-volley style of play.
Rafter won his first tour-level match in 1993, at Wimbledon, where he reached the third round before losing to Andre Agassi. He also reached the semifinals in Indianapolis, defeating Pete Sampras in the quarterfinals in three tight sets before losing to Boris Becker. Rafter finished 1993 ranked No. 66.
Rafter won his first career singles title in 1994 in Manchester. Prior to 1997, this was the only ATP singles title he had won.
Rafter's breakthrough came in 1997. At that year's French Open, he reached the semifinals, falling in four sets to two-time former champion Sergi Bruguera. Then, he surprised many by winning the US Open, defeating Andriy Medvedev, Magnus Norman, Lionel Roux, Andre Agassi, Magnus Larsson, and Michael Chang before beating Greg Rusedski in a four-set final; he was the first non-American to win the title since Stefan Edberg in 1992. This was his first Grand Slam title, and it catapulted him ahead of Chang to finish the year ranked No. 2 in the world, behind only Pete Sampras. The unexpected nature of his US Open title led many, including Hall of Famer and four-time US Open champion John McEnroe, to criticise Rafter as a "one-slam wonder".
Rafter had a particularly strong year in 1998, winning the Canadian Open and the Cincinnati Masters (Andre Agassi (1995), Andy Roddick (2003), and Rafael Nadal (2013) are the only other players to have won both of these tournaments in the same year). Rafter defeated ninth-ranked Richard Krajicek in the Toronto final and second-ranked Pete Sampras in the Cincinnati final. When asked about the difference between himself and Rafter following their titles, Sampras responded, "10 grand slams". He added that a tennis player must come back and win a Grand Slam again in order to be considered great.
Entering the U.S. Open as the defending champion, Rafter reached the final by defeating Hicham Arazi, Hernán Gumy, David Nainkin, Goran Ivanišević and Jonas Björkman before besting Sampras in a five-set semifinal. Rafter then defended his US Open title by defeating fellow Australian Mark Philippoussis in four sets, committing only five unforced errors throughout the match. Altogether, Rafter won six tournaments in 1998, finishing the year No. 4 in the world.
Rafter won the Australian Open men's doubles title in 1999 (partnering Jonas Björkman), making him one of the few players in the modern era to win both a singles and a doubles Grand Slam title during their career (fellow countryman Lleyton Hewitt would later achieve this feat in 2001). He and Björkman also won a doubles title at the ATP Masters Series event in Canada in 1999. At the 1999 French Open, Rafter drew future world No. 1 and 20-time Grand Slam champion Roger Federer in the first round, making him Federer's first-ever opponent in the main draw of a Grand Slam tournament; Rafter defeated him in four sets. Rafter then reached the Wimbledon semifinals for the first time in 1999, losing in straight sets to Agassi in the first of three consecutive years that the two met in the Wimbledon semifinals. July 1999 saw Rafter hold the world No. 1 men's singles ranking for one week, making him the shortest-reigning world No. 1 in ATP Tour history. As the two-time defending US Open champion, Rafter lost in the first round of the tournament, retiring in the fifth set against Cédric Pioline after succumbing to shoulder tendinitis. The shoulder injury proved serious enough to necessitate surgery.
Due to injury, Rafter was unable to play in the 1999 Davis Cup final won by Australia; however, he won important matches in the earlier rounds to help the team qualify.
Rafter's ranking had fallen to No. 21 by the time he reached the Wimbledon final in July 2000. In the semifinals, he defeated Agassi 7–5, 4–6, 7–5, 4–6, 6–3. The match was hailed as a classic, particularly because of their contrasting playing styles, with Agassi playing primarily from the baseline and Rafter attacking the net. Rafter faced Sampras in the final, who was gunning for a record-breaking seventh Wimbledon title overall (and seven in the past eight years). While Rafter made a strong start to the match and took the first set, after the match he would claim that he had "choked" part way through the second set, and was then not able to get back into his game. Sampras won in four sets.
Rafter played on the Australian Davis Cup Team that lost in the final in 2000 (to Spain) and 2001 (to France). Rafter played on the Australian teams that won the World Team Cup in 1999 and 2001.
In 2001, Rafter reached the semifinals of the Australian Open. Despite holding a two sets to one lead and having the support of the home crowd, Rafter lost the match to Agassi in five sets. Later in the year, Rafter again reached the Wimbledon final. For the third straight year, he faced Agassi in the semifinals and won in yet another five-setter, 2–6, 6–3, 3–6, 6–2, 8–6. Much like the previous year's semifinal, this match also received praise for the quality of play that the two men displayed. In the final, he squared off against Goran Ivanišević, who had reached the Wimbledon final three times before but had slid down the rankings to World No. 125 following injury problems. In a five-set struggle that lasted just over three hours, Ivanišević prevailed. He played his last match at the Davis Cup final, winning the singles rubber but losing the doubles rubber.
Rafter did not play any tour matches in 2002. He spent the year recovering from injuries. In January 2003, he announced his retirement from professional tennis, stating that he had lost all motivation to compete at the top level.
The 5,500-seat centre court of the Queensland Tennis Centre in Brisbane, Australia, was named "Pat Rafter Arena" in Rafter's honour. In 2002, he won the Australian of the Year award. This created some controversy, as he had spent much of his career residing in Bermuda for tax purposes.
Rafter did return at the beginning of the 2004 season to play doubles at two tournaments only: the 2004 Australian Open and the 2004 AAPT Championships in Adelaide. However, he lost in the first round of both events, playing alongside Joshua Eagle.
He was elected to the International Tennis Hall of Fame and inducted into the Sport Australia Hall of Fame in 2006. On Australia Day 2008, Rafter was inducted into the Australian Tennis Hall of Fame.
In 2009, as part of the Q150 celebrations, Rafter was announced as one of the Q150 Icons of Queensland for his role as a "sports legend".
In October 2010, Rafter was announced as Australia's Davis Cup captain. Rafter stood down as Australia's Davis Cup captain on 29 January 2015. He was succeeded by Wally Masur.
On 12 January 2014, Rafter, then aged 41, announced that he would partner current Australian number one Lleyton Hewitt in the doubles draw of the 2014 Australian Open. The comeback, however, was short-lived, as the pair went down in straight sets to eventual runners-up Eric Butorac and Raven Klaasen in the first round.
At the 2009 AEGON Masters Tennis, Rafter lost his opening round-robin match against the 1987 Wimbledon champion Pat Cash 2–6, 6–2, 10–6. In a much-anticipated match and replay of the 2001 Wimbledon final, Rafter faced Goran Ivanišević, winning when Ivanišević retired while serving for the opening set at 3–5. The win was enough to push Rafter into the final against Stefan Edberg. In what was described as a spell-binding serve-and-volley showdown, Rafter won the match 6–7, 6–4, 11–9, the first time he was able to defeat Edberg.
Proportional representation
Proportional representation (PR) characterizes electoral systems in which divisions in an electorate are reflected proportionately in the elected body. If "n"% of the electorate support a particular political party or set of candidates as their favorite, then roughly "n"% of seats will be won by that party or those candidates. The essence of such systems is that all votes contribute to the result—not just a plurality, or a bare majority. The most prevalent forms of proportional representation all require the use of multiple-member voting districts (also called super-districts), as it is not possible to fill a single seat in a proportional manner. In fact, PR systems that achieve the highest levels of proportionality tend to include districts with large numbers of seats.
The most widely used families of PR electoral systems are party-list PR, the single transferable vote (STV), and mixed-member proportional representation (MMP).
With party list PR, political parties define candidate lists and voters vote for a list. The relative vote for each list determines how many candidates from each list are actually elected. Lists can be "closed" or "open"; open lists allow voters to indicate individual candidate preferences and vote for independent candidates. Voting districts can be small (as few as three seats in some districts in Chile or Ireland) or as large as a province or an entire nation.
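One widely used rule for turning list votes into seats is the D'Hondt highest-averages method; the sketch below uses hypothetical vote totals, and real systems add thresholds and formal tie-breaking rules on top of this core loop:

```python
def dhondt(votes, seats):
    """Allocate seats with the D'Hondt highest-averages method."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # Award the next seat to the party with the largest quotient
        # votes / (seats already won + 1)
        best = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[best] += 1
    return alloc

# Hypothetical vote totals in a 10-seat district
print(dhondt({"A": 41000, "B": 29000, "C": 17000, "D": 13000}, 10))
# {'A': 4, 'B': 3, 'C': 2, 'D': 1}
```

Because each seat is awarded to the current largest quotient, larger districts (more seats per district) let the allocation track vote shares more closely, which matches the observation above that highly proportional systems use districts with many seats.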
The single transferable vote uses multiple-member districts, with voters casting only one vote each but ranking individual candidates in order of preference (by providing back-up preferences). During the count, as candidates are elected or eliminated, surplus or discarded votes that would otherwise be wasted are transferred to other candidates according to the preferences, forming consensus groups that elect surviving candidates. STV enables voters to vote across party lines, to choose the most preferred of a party's candidates and vote for independent candidates, knowing that if the candidate is not elected his/her vote will likely not be wasted if the voter marks back-up preferences on the ballot.
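The quota-and-transfer mechanics described above can be sketched in a much-simplified STV count. This uses the Droop quota and fractional (Gregory-style) surplus transfers; real counting rules (for example the Irish or Australian Senate rules) differ in how surpluses, ties, and exhausted ballots are handled, and the ballots below are hypothetical:

```python
from fractions import Fraction

def stv(ballots, seats):
    """Simplified single transferable vote count: Droop quota,
    fractional surplus transfers. Each ballot is a ranked list."""
    weights = [Fraction(1)] * len(ballots)         # each ballot's remaining value
    hopeful = {c for b in ballots for c in b}
    quota = Fraction(len(ballots), seats + 1) + 1  # Droop quota
    elected = []

    def current(prefs):
        # Highest remaining preference still in the running
        return next((c for c in prefs if c in hopeful), None)

    while len(elected) < seats and hopeful:
        tally = {c: Fraction(0) for c in hopeful}
        for b, w in zip(ballots, weights):
            c = current(b)
            if c is not None:
                tally[c] += w
        if len(hopeful) + len(elected) <= seats:
            # Just enough candidates left to fill the remaining seats
            elected.extend(sorted(hopeful, key=tally.get, reverse=True))
            break
        over = [c for c in hopeful if tally[c] >= quota]
        if over:
            winner = max(over, key=tally.get)
            factor = (tally[winner] - quota) / tally[winner]
            for i, b in enumerate(ballots):
                if current(b) == winner:   # these ballots elected the winner
                    weights[i] *= factor   # only the surplus value moves on
            elected.append(winner)
            hopeful.remove(winner)
        else:
            # No one reaches the quota: eliminate the weakest candidate
            hopeful.remove(min(hopeful, key=tally.get))
    return elected

# Hypothetical 7-ballot, 2-seat example
print(stv([["A", "B"]] * 4 + [["B"]] * 2 + [["C"]], 2))  # ['A', 'B']
```

In the example, A exceeds the quota immediately; the surplus portion of A's ballots then flows to B, illustrating how back-up preferences keep votes from being wasted.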
Mixed member proportional representation (MMP), also called the additional member system (AMS), is a two-tier mixed electoral system combining local non-proportional plurality/majoritarian elections and a compensatory regional or national party list PR election. Voters typically have two votes, one for their single-member district and one for the party list, the party list vote determining the balance of the parties in the elected body.
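The compensatory logic of MMP can be sketched as follows, using a largest-remainder entitlement over the whole chamber and hypothetical figures; real systems such as Germany's use highest-averages formulas and have explicit rules for overhang seats, which this sketch ignores:

```python
def mmp_top_up(list_votes, district_seats, total_seats):
    """Sketch of MMP compensation: each party's overall entitlement is
    proportional to its list vote (largest-remainder rule); list seats
    fill the gap between that entitlement and district seats won.
    Overhang (districts won above entitlement) is not handled."""
    total_votes = sum(list_votes.values())
    quotas = {p: total_seats * v / total_votes for p, v in list_votes.items()}
    ent = {p: int(q) for p, q in quotas.items()}     # whole-seat entitlements
    leftover = total_seats - sum(ent.values())
    # Hand remaining seats to the largest fractional remainders
    for p in sorted(quotas, key=lambda p: quotas[p] - ent[p], reverse=True)[:leftover]:
        ent[p] += 1
    return {p: max(ent[p] - district_seats.get(p, 0), 0) for p in ent}

# Hypothetical 100-seat chamber: list vote shares and district seats won
print(mmp_top_up({"A": 50, "B": 30, "C": 20}, {"A": 45, "B": 15}, 100))
# {'A': 5, 'B': 15, 'C': 20}
```

The example shows why the party-list vote determines the balance of the chamber: party A's many district wins reduce its list seats, while party C, with no districts, is made whole entirely from the list.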
According to the ACE Electoral Knowledge Network, some form of proportional representation is used for national lower house elections in 94 countries. Party-list PR, used in 85 countries, is the most widespread. MMP is used in seven lower houses. STV, despite long being advocated by political scientists, is used in only two: Ireland, since independence in 1922, and Malta, since 1921. STV is also used for the Australian Senate, and can be used for nonpartisan elections such as the city council of Cambridge, Massachusetts.
As with all electoral systems, both widely accepted and sharply opposing claims are made about the advantages and disadvantages of PR.
The case for proportional representation was made by John Stuart Mill in his 1861 essay "Considerations on Representative Government":
Many academic political theorists agree with Mill, that in a representative democracy the representatives should represent all substantial segments of society, a goal impossible to achieve under First past the post.
PR tries to resolve the unfairness of majoritarian and plurality voting systems, in which the largest parties receive an "unfair" seat bonus while smaller parties are disadvantaged, always under-represented, and on occasion win no representation at all (Duverger's law). An established party in UK elections can win majority control of the House of Commons with as little as 35% of votes (2005 UK general election). In certain Canadian elections, majority governments have been formed by parties with the support of under 40% of votes cast (2011 Canadian election, 2015 Canadian election). If turnout levels in the electorate are less than 60%, such outcomes allow a party to form a majority government by convincing as few as one quarter of the electorate to vote for it. In the 2005 UK election, for example, the Labour Party under Tony Blair won a comfortable parliamentary majority with the votes of only 21.6% of the total electorate. Such misrepresentation has been criticized as "no longer a question of 'fairness' but of elementary rights of citizens". Note that intermediate PR systems with a high electoral threshold, or other features that reduce proportionality, are not necessarily much fairer: in the 2002 Turkish general election, using an open list system with a 10% threshold, 46% of votes were wasted.
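The 21.6% figure follows directly from the 2005 election's turnout (about 61.4%) and Labour's share of votes cast (about 35.2%); a quick check:

```python
# 2005 UK general election (approximate figures)
turnout = 0.614      # share of the registered electorate that voted
vote_share = 0.352   # Labour's share of votes cast
electorate_share = turnout * vote_share
print(f"{electorate_share:.1%}")  # 21.6% of the registered electorate
```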
Plurality/majoritarian systems also benefit regional parties that win many seats in the region where they have a strong following but have little support nationally, while other parties with national support that is not concentrated in specific districts, like the Greens, win few or no seats. An example is the Bloc Québécois in Canada, which won 52 seats in the 1993 federal election, all in Quebec, on 13.5% of the national vote, while the Progressive Conservatives collapsed to two seats on 16% spread nationally. The Conservative party, although strong nationally, had historically drawn very strong regional support in the West, but in this election its Western supporters turned to the Reform Party, which won most of its seats west of Saskatchewan and none east of Manitoba. Similarly, in the 2015 UK general election, the Scottish National Party gained 56 seats, all in Scotland, with a 4.7% share of the national vote, while the UK Independence Party, with 12.6%, gained only a single seat.
The use of multiple-member districts enables a greater variety of candidates to be elected. The more representatives per district and the lower the percentage of votes required for election, the more minor parties can gain representation. It has been argued that in emerging democracies, inclusion of minorities in the legislature can be essential for social stability and to consolidate the democratic process.
Critics, on the other hand, claim this can give extreme parties a foothold in parliament, sometimes cited as a cause for the collapse of the Weimar government. With very low thresholds, very small parties can act as "king-makers", holding larger parties to ransom during coalition discussions. The example of Israel is often quoted, but these problems can be limited, as in the modern German Bundestag, by the introduction of higher threshold limits for a party to gain parliamentary representation (which in turn increases the number of wasted votes).
Another criticism is that the dominant parties in plurality/majoritarian systems, often looked on as "coalitions" or as "broad churches", can fragment under PR as the election of candidates from smaller groups becomes possible. Israel, again, and Brazil and Italy are examples. However, research shows, in general, there is only a small increase in the number of parties in parliament (although small parties have larger representation) under PR.
Open list systems and STV (the only prominent PR system which does not require political parties) enable independent candidates to be elected. In Ireland, on average, about six independent candidates have been elected to each parliament.
This can lead to a situation where forming a parliamentary majority requires the support of one or more of these independent representatives. In some cases these independents hold positions that are closely aligned with the governing party's, and such support hardly matters; the last two Irish governments have even included independent representatives in the cabinet of a minority government. In other cases the independent's electoral platform is entirely local, and addressing local concerns is the price of support.
The election of smaller parties gives rise to one of the principal objections to PR systems, that they almost always result in coalition governments.
Supporters of PR see coalitions as an advantage, forcing compromise between parties to form a coalition at the centre of the political spectrum, and so leading to continuity and stability. Opponents counter that with many policies compromise is not possible. Neither can many policies be easily positioned on the left-right spectrum (for example, the environment). So policies are horse-traded during coalition formation, with the consequence that voters have no way of knowing which policies will be pursued by the government they elect; voters have less influence on governments. Also, coalitions do not necessarily form at the centre, and small parties can have excessive influence, supplying a coalition with a majority only on condition that policies favoured by few voters are adopted. Most importantly, the ability of voters to vote a party in disfavour out of power is curtailed.
All these disadvantages, the PR opponents contend, are avoided by two-party plurality systems. Coalitions are rare; the two dominant parties necessarily compete at the centre for votes, so that governments are more reliably moderate; the strong opposition necessary for proper scrutiny of government is assured; and governments remain sensitive to public sentiment because they can be, and are, regularly voted out of power. However, this is not necessarily so; a two-party system can result in a "drift to extremes", hollowing out the centre, or, at least, in one party drifting to an extreme. The opponents of PR also contend that coalition governments created under PR are less stable, and elections are more frequent. Italy is an often-cited example with many governments composed of many different coalition partners. However, Italy is unusual in that both its houses can make a government fall, whereas other PR nations have either just one house or have one of their two houses be the core body supporting a government. Italy's mix of FPTP and PR since 1993 also makes for a complicated setup, so Italy is not an appropriate candidate for measuring the stability of PR.
Plurality systems usually result in single-party-majority government because generally fewer parties are elected in large numbers under FPTP compared to PR, and FPTP compresses politics to little more than two-party contests, with relatively few votes in a few of the most finely balanced districts, the "swing seats", able to swing majority control in the house. Incumbents in less evenly divided districts are invulnerable to slight swings of political mood. In the UK, for example, about half the constituencies have always elected the same party since 1945; in the 2012 US House elections 45 districts (10% of all districts) were uncontested by one of the two dominant parties. Voters who know their preferred candidate will not win have little incentive to vote, and even if they do, their votes have no effect: they are "wasted", although they are still counted in the popular vote.
With PR, there are no "swing seats", most votes contribute to the election of a candidate so parties need to campaign in all districts, not just those where their support is strongest or where they perceive most advantage. This fact in turn encourages parties to be more responsive to voters, producing a more "balanced" ticket by nominating more women and minority candidates. On average about 8% more women are elected.
Since most votes count, there are fewer "wasted votes", so voters, aware that their vote can make a difference, are more likely to make the effort to vote, and less likely to vote tactically. Compared to countries with plurality electoral systems, voter turnout improves and the population is more involved in the political process. However some experts argue that transitioning from plurality to PR only increases voter turnout in geographical areas associated with safe seats under the plurality system; turnout may decrease in areas formerly associated with swing seats.
To ensure approximately equal representation, plurality systems are dependent on the drawing of boundaries of their single-member districts, a process vulnerable to political interference (gerrymandering). To compound the problem, boundaries have to be periodically re-drawn to accommodate population changes. Even apolitically drawn boundaries can unintentionally produce the effect of gerrymandering, reflecting naturally occurring concentrations.
PR systems with their multiple-member districts are less prone to this; research suggests five-seat districts or larger are immune to gerrymandering.
Equality of size of multiple-member districts is not important (the number of seats can vary) so districts can be aligned with historical territories of varying sizes such as cities, counties, states, or provinces. Later population changes can be accommodated by simply adjusting the number of representatives elected. For example, Professor Mollison in his 2010 plan for STV for the UK divided the country into 143 districts and then allocated a different number of seats to each district (to equal the existing total of 650) depending on the number of voters in each but with wide ranges (his five-seat districts include one with 327,000 voters and another with 382,000 voters). His district boundaries follow historical county and local authority boundaries, yet he achieves more uniform representation than does the Boundary Commission, the body responsible for balancing the UK's first-past-the-post constituency sizes.
Mixed member systems are susceptible to gerrymandering for the local seats that remain a part of such systems. Under parallel voting, a semi-proportional system, there is no compensation for the effects that such gerrymandering might have. Under MMP, the use of compensatory list seats makes gerrymandering less of an issue. However, its effectiveness in this regard depends upon the features of the system, including the size of the regional districts, the relative share of list seats in the total, and opportunities for collusion that might exist. A striking example of how the compensatory mechanism can be undermined can be seen in the 2014 Hungarian parliamentary election, where the leading party, Fidesz, combined gerrymandering and decoy lists, which resulted in a two-thirds parliamentary majority from a 45% vote. This illustrates how certain implementations of MMP can produce moderately proportional outcomes, similar to parallel voting.
It is generally accepted that a particular advantage of plurality electoral systems such as first past the post, or majoritarian electoral systems such as the alternative vote, is the geographic link between representatives and their constituents. A notable disadvantage of PR is that, as its multiple-member districts are made larger, this link is weakened. In party list PR systems without delineated districts, such as the Netherlands and Israel, the geographic link between representatives and their constituents is considered weak, but has been shown to play a role for some parties. Yet with relatively small multiple-member districts, in particular with STV, there are counter-arguments: about 90% of voters can consult a representative they voted for, someone whom they might think more sympathetic to their problem. In such cases it is sometimes argued that constituents and representatives have a closer link; constituents have a choice of representative so they can consult one with particular expertise in the topic at issue. With multiple-member districts, prominent candidates have more opportunity to be elected in their home constituencies, which they know and can represent authentically. There is less likely to be a strong incentive to parachute them into constituencies in which they are strangers and thus less than ideal representatives. Mixed-member PR systems incorporate single-member districts to preserve the link between constituents and representatives. However, because up to half the parliamentary seats are list rather than district seats, the districts are necessarily up to twice as large as with a plurality/majoritarian system where all representatives serve single-member districts.
An interesting case occurred in the Netherlands when, "out of the blue", a party for the elderly gained six seats. The other parties had not paid attention to the elderly, but this result made them aware. At the next election, the party for the elderly was gone, because the established parties had started to listen to the elderly. Today, a party for older voters, 50PLUS, has established itself firmly in the Netherlands. This can be seen as an example of how geography in itself may not be a good enough reason to organize voting results around it and override all other characteristics of the voting population. In a sense, voting in districts restricts voters to a specific geography, whereas proportional voting follows the exact outcome of all the votes.
Academics agree that the most important influence on proportionality is an electoral district's magnitude, the number of representatives elected from the district. Proportionality improves as the magnitude increases. Some scholars recommend voting districts of roughly four to eight seats, which are considered small relative to PR systems in general.
At one extreme, the binomial electoral system used in Chile between 1989 and 2013, a nominally proportional open-list system, features two-member districts. As this system can be expected to result in the election of one candidate from each of the two dominant political blocks in most districts, it is not generally considered proportional.
At the other extreme, where the district encompasses the entire country and the minimum threshold is low, highly proportional representation of political parties can result, and parties gain by broadening their appeal, nominating more minority and women candidates.
After the introduction of STV in Ireland in 1921 district magnitudes slowly diminished as more and more three-member constituencies were defined, benefiting the dominant Fianna Fáil, until 1979 when an independent boundary commission was established reversing the trend. In 2010, a parliamentary constitutional committee recommended a minimum magnitude of four. Nonetheless, despite relatively low magnitudes Ireland has generally experienced highly proportional results.
In the FairVote plan for STV (which FairVote calls "choice voting") for the US House of Representatives, three- to five-member super-districts are proposed.
In Professor Mollison's plan for STV in the UK, four- and five-member districts are mostly used, with three- and six-seat districts used as necessary to fit existing boundaries, and even two-seat and single-member districts used where geography dictates.
The electoral threshold is the minimum vote required to win a seat. The lower the threshold, the higher the proportion of votes contributing to the election of representatives and the lower the proportion of votes wasted.
All electoral systems have thresholds, either formally defined or as a mathematical consequence of the parameters of the election.
A formal threshold usually requires parties to win a certain percentage of the vote in order to be awarded seats from the party lists. In Germany and New Zealand (both MMP), the threshold is 5% of the national vote but the threshold is not applied to parties that win a minimum number of constituency seats (three in Germany, one in New Zealand). Turkey defines a threshold of 10%, the Netherlands 0.67%. Israel has raised its threshold from 1% (before 1992) to 1.5% (up to 2004), 2% (in 2006) and 3.25% in 2014.
In STV elections, winning the quota (ballots/(seats+1)) of first preference votes assures election. However, well regarded candidates who attract good second (and third, etc.) preference support can hope to win election with only half the quota of first preference votes. Thus, in a six-seat district the effective threshold would be 7.14% of first preference votes (100/(6+1)/2). The need to attract second preferences tends to promote consensus and disadvantage extremes.
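The quota and the effective threshold described above can be sketched in a few lines; the ballot and seat counts below are hypothetical:

```python
# Hypothetical figures: the Droop quota and the "effective threshold"
# for an STV district, following the text above.

def droop_quota(ballots: int, seats: int) -> int:
    """Smallest whole number of votes that only `seats` candidates can reach."""
    return ballots // (seats + 1) + 1

# In a six-seat district the quota is about 1/7 of the votes, but a
# well-regarded candidate may win with roughly half that in first preferences.
quota_share = 100 / (6 + 1)            # ~14.29% of first preferences
effective_threshold = quota_share / 2  # ~7.14%, as stated in the text

print(droop_quota(100_000, 6))         # -> 14286
print(round(effective_threshold, 2))   # -> 7.14
```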
Party magnitude is the number of candidates elected from one party in one district. As party magnitude increases, a more balanced ticket will be more successful, encouraging parties to nominate women and minority candidates for election.
But under STV, nominating too many candidates can be counter-productive, splitting the first-preference votes and allowing the candidates to be eliminated before receiving transferred votes from other parties. An example of this was identified in a ward in the 2007 Scottish local elections, where Labour, putting up three candidates, won only one seat when it might have won two had it fielded fewer candidates. The same effect may have contributed to the collapse of Fianna Fáil in the 2011 Irish general election.
In a presidential system, the president is chosen independently of the parliament. As a consequence, it is possible to have a divided government, where the parliament and the president hold opposing views and may want to balance each other's influence. However, proportional systems favour coalition governments of many smaller parties, which require compromise and negotiation. As a consequence, such coalitions might have difficulty presenting a united front to counter presidential influence, leading to an imbalance between these two powers: with a proportionally elected house, a president may strong-arm certain political issues.
This issue does not arise in a parliamentary system, where the prime minister is elected indirectly, by the parliament itself; as a consequence, a divided government is impossible. Even if political views change over time and the prime minister loses the support of parliament, he or she can be replaced with a motion of no confidence. Effectively, both measures make it impossible to create a divided government.
Other aspects of PR can influence proportionality such as the size of the elected body, the choice of open or closed lists, ballot design, and vote counting methods.
In many contexts, it is desirable to evaluate how well the awarded seat shares approximate proportionality. However, only exact proportionality has a single unambiguous definition: a seat allocation is proportional only if each party's seat share is equal to its vote share. If this condition is not met, the seat allocation is disproportional. Consequently, an index of proportionality can take only two values, one if the allocation is proportional and the other if it is not.
In practice, it may be more interesting to examine the degree to which the number of seats won by each party differs from that of a perfectly proportional outcome; in other words, how disproportional the seat allocation is. Unlike exact proportionality, disproportionality does not have a single unambiguous definition: any index that takes the value of zero if the seat allocation is proportional and a larger value if it is not measures disproportionality. A number of such indexes have been proposed, including the Loosemore–Hanby index, the Gallagher index, and the Sainte-Laguë index.
If there are some values of seat shares and vote shares for which two disproportionality indexes disagree, then the indexes measure different concepts of disproportionality. Some disproportionality concepts have been mapped to social welfare functions.
Disproportionality indexes are sometimes used to evaluate existing and proposed electoral systems. For example, the Canadian Parliament's 2016 Special Committee on Electoral Reform recommended that a system be designed to achieve "a Gallagher score of 5 or less". This indicated a much lower degree of disproportionality than observed in the 2015 Canadian election under first-past-the-post voting, where the Gallagher index was 12.
The Loosemore–Hanby index is calculated by subtracting each party's vote share from its seat share, adding up the absolute values of the differences (ignoring any negative signs), and dividing by two.
The Gallagher index is similar, but involves squaring the difference between each party's vote share and seat share, halving the sum of the squares, and taking the square root.
With the Sainte-Laguë index, the discrepancy between a party's vote share and seat share is measured relative to its vote share.
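As an illustration, the three indexes can be computed directly from vote and seat shares. The shares below are invented for the example; the halving inside the Gallagher formula is part of its standard definition:

```python
from math import sqrt

votes = [45.0, 35.0, 15.0, 5.0]   # hypothetical vote shares (%)
seats = [50.0, 40.0, 10.0, 0.0]   # hypothetical seat shares (%)

def loosemore_hanby(v, s):
    # Half the sum of absolute vote-seat differences.
    return sum(abs(vi - si) for vi, si in zip(v, s)) / 2

def gallagher(v, s):
    # Square root of half the sum of squared differences.
    return sqrt(sum((vi - si) ** 2 for vi, si in zip(v, s)) / 2)

def sainte_lague_index(v, s):
    # Squared differences, each measured relative to the party's vote share.
    return sum((si - vi) ** 2 / vi for vi, si in zip(v, s) if vi > 0)

print(loosemore_hanby(votes, seats))      # -> 10.0
print(round(gallagher(votes, seats), 2))  # -> 7.07
print(round(sainte_lague_index(votes, seats), 2))
```

Note how the Sainte-Laguë index punishes the same 5-point discrepancy much more harshly for the smallest party, since the difference is weighted by the party's own vote share.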
None of these indexes (Loosemore-Hanby, Gallagher, Sainte-Laguë) fully support ranked voting.
Party list proportional representation is an electoral system in which seats are first allocated to parties based on vote share, and then assigned to party-affiliated candidates on the parties' electoral lists. This system is used in many countries, including Finland (open list), Latvia (open list), Sweden (open list), Israel (national closed list), Brazil (open list), Nepal (closed list, as adopted in 2008 in the first CA election), the Netherlands (open list), Russia (closed list), South Africa (closed list), Democratic Republic of the Congo (open list), and Ukraine (open list). For elections to the European Parliament, most member states use open lists, but most large EU countries use closed lists, so the majority of EP seats are distributed by closed lists. Local lists were used to elect the Italian Senate during the second half of the 20th century.
In closed list systems, each party lists its candidates according to the party's candidate selection process. This sets the order of candidates on the list and thus, in effect, their probability of being elected. The first candidate on a list, for example, will get the first seat that party wins. Each voter casts a vote for a list of candidates. Voters, therefore, do not have the option to express their preferences at the ballot as to which of a party's candidates are elected into office. A party is allocated seats in proportion to the number of votes it receives.
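The phrase "in proportion to the number of votes it receives" hides a concrete allocation rule. One widely used rule is the D'Hondt highest-averages method; the sketch below uses hypothetical vote totals, and other rules (Sainte-Laguë divisors, largest remainders) differ in detail:

```python
def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Allocate `seats` among parties by the D'Hondt highest-averages rule."""
    allocated = {party: 0 for party in votes}
    for _ in range(seats):
        # Each party's next "average" is votes / (seats won so far + 1);
        # the next seat goes to the party with the highest average.
        winner = max(votes, key=lambda p: votes[p] / (allocated[p] + 1))
        allocated[winner] += 1
    return allocated

# Hypothetical district: four party lists competing for 8 seats.
print(dhondt({"A": 100_000, "B": 80_000, "C": 30_000, "D": 20_000}, 8))
# -> {'A': 4, 'B': 3, 'C': 1, 'D': 0}
```

D'Hondt slightly favours larger parties (a mild form of the "seat bonus" discussed earlier); Sainte-Laguë divisors (1, 3, 5, ...) reduce that bias.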
There is an intermediate system in Uruguay, where each party presents several closed lists, each representing a faction. Seats are distributed between parties according to the number of votes, and then between the factions within each party.
In an open list, voters may vote, depending on the model, for one person, or for two, or indicate their order of preference within the list. These votes sometimes rearrange the order of names on the party's list and thus which of its candidates are elected. Nevertheless, the number of candidates elected from the list is determined by the number of votes the list receives.
In a local list system, parties divide their candidates among single-member-like constituencies, which are ranked inside each general party list according to their percentages. This method allows electors to judge every single candidate as in an FPTP system.
Some party list proportional systems with open lists use a two-tier compensatory system, as in Denmark, Norway, and Sweden. In Denmark, for example, the country is divided into ten multiple-member voting districts arranged in three regions, electing 135 representatives. In addition, 40 compensatory seats are elected. Voters have one vote which can be cast for an individual candidate or for a party list on the district ballot. To determine district winners, candidates are apportioned their share of their party's district list vote plus their individual votes. The compensatory seats are apportioned to the regions according to the party votes aggregated nationally, and then to the districts where the compensatory representatives are determined. In the 2007 general election, the district magnitudes, including compensatory representatives, varied between 14 and 28. The basic design of the system has remained unchanged since its introduction in 1920.
The single transferable vote (STV), also called "choice voting", is a ranked system: voters rank candidates in order of preference. Voting districts usually elect three to seven representatives. The count is cyclic, electing or eliminating candidates and transferring votes until all seats are filled. A candidate is elected whose tally reaches a quota, the minimum vote that guarantees election. The candidate's surplus votes (those in excess of the quota) are transferred to other candidates at a fraction of their value proportionate to the surplus, according to the votes' preferences. If no candidates reach the quota, the candidate with the fewest votes is eliminated, those votes being transferred to their next preference at full value, and the count continues. There are many methods for transferring votes. Some early, manual, methods transferred surplus votes according to a randomly selected sample, or transferred only a "batch" of the surplus, other more recent methods transfer all votes at a fraction of their value (the surplus divided by the candidate's tally) but may need the use of a computer. Some methods may not produce exactly the same result when the count is repeated. There are also different ways of treating transfers to already elected or eliminated candidates, and these, too, can require a computer.
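The fractional surplus transfer described above, where each ballot of an elected candidate moves on at the value of the surplus divided by the candidate's tally (the Gregory-style approach), can be sketched with hypothetical figures:

```python
def transfer_value(tally: int, quota: int) -> float:
    """Fraction of its value each ballot keeps when a surplus is transferred."""
    surplus = tally - quota
    return surplus / tally

# Hypothetical count: a candidate elected with 12,000 votes against a quota
# of 10,000 has a 2,000-vote surplus.
tv = transfer_value(12_000, 10_000)
print(round(tv, 4))          # value carried onward by each transferred ballot
print(round(12_000 * tv))    # total value moved equals the surplus: 2000
```

Because every ballot is transferred at a reduced value rather than sampling a subset, repeated counts give the same result, which is one reason this family of methods usually needs a computer.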
In effect, the method produces groups of voters of equal size that reflect the diversity of the electorate, each group having a representative the group voted for. Some 90% of voters have a representative to whom they gave their first preference. Voters can choose candidates using any criteria they wish; the proportionality is implicit. Political parties are not necessary; all other prominent PR electoral systems presume that parties reflect voters' wishes, which many believe gives power to parties. STV satisfies the electoral system criterion "proportionality for solid coalitions" (a solid coalition for a set of candidates is the group of voters that rank all those candidates above all others) and is therefore considered a system of proportional representation. However, the small district magnitude used in STV elections has been criticized as impairing proportionality, especially when more parties compete than there are seats available, and STV has, for this reason, sometimes been labelled "quasi-proportional". While this may be true when considering districts in isolation, overall results are proportional. In Ireland, with particularly small magnitudes, results are "highly proportional". In 1997, the average magnitude was 4.0, but eight parties gained representation, four of them with less than 3% of first-preference votes nationally. Six independent candidates also won election. STV has also been described as "the" proportional system. The system tends to handicap extreme candidates because, to gain preferences and so improve their chance of election, candidates need to canvass voters beyond their own circle of supporters, and so need to moderate their views. Conversely, widely respected candidates can win election with relatively few first preferences by benefitting from strong subordinate-preference support.
The term "STV" in Australia refers to the Senate electoral system, a variant of "Hare-Clark" characterized by the "above the line" group voting ticket, a party list option. It is used in the Australian upper house, the Senate, most state upper houses, the Tasmanian lower house and the Capital Territory assembly. Because of the number of preferences that are compulsory if a vote for individual candidates (below the line) is to be valid (for the Senate a minimum of 90% of candidates must be scored, which in 2013 in New South Wales meant writing 99 preferences on the ballot), 95% and more of voters use the above-the-line option, making the system, in all but name, a party list system. Parties determine the order in which candidates are elected and also control transfers to other lists, and this has led to anomalies: preference deals between parties, and "micro parties" which rely entirely on these deals. Additionally, independent candidates are unelectable unless they form, or join, a group above the line. Concerning the development of STV in Australia, researchers have observed: "... we see real evidence of the extent to which Australian politicians, particularly at national levels, are prone to fiddle with the electoral system".
As a result of a parliamentary commission investigating the 2013 election, from 2016 the system has been considerably reformed (see 2016 Australian federal election), with group voting tickets (GVTs) abolished and voters no longer required to fill all boxes.
A mixed compensatory system is an electoral system that is mixed, meaning that it combines a plurality/majority formula with a proportional formula, and that uses the proportional component to compensate for disproportionality caused by the plurality/majority component. For example, suppose that a party wins 10 seats based on plurality, but requires 15 seats in total to obtain its proportional share of an elected body. A fully proportional mixed compensatory system would award this party 5 compensatory (PR) seats, raising the party's seat count from 10 to 15. The most prominent mixed compensatory system is mixed member proportional representation (MMP), used in Germany since 1949. In MMP, the seats won by plurality are associated with single-member districts.
Mixed member proportional representation (MMP) is a two-tier system that combines a single-district vote, usually first-past-the-post, with a compensatory regional or nationwide party list proportional vote. The system aims to combine the local district representation of FPTP and the proportionality of a national party list system. MMP has the potential to produce proportional or moderately proportional election outcomes, depending on a number of factors such as the ratio of FPTP seats to PR seats, the existence or nonexistence of extra compensatory seats to make up for overhang seats, and electoral thresholds. It was invented for the German Bundestag after the Second World War and has spread to Lesotho, Bolivia and New Zealand. The system is also used for the Welsh and Scottish assemblies where it is called the additional member system.
Voters typically have two votes, one for their district representative and one for the party list. The list vote usually determines how many seats are allocated to each party in parliament. After the district winners have been determined, sufficient candidates from each party list are elected to "top-up" each party to the overall number of parliamentary seats due to it according to the party's overall list vote. Before apportioning list seats, all list votes for parties which failed to reach the threshold are discarded. If eliminated parties lose seats in this manner, then the seat counts for parties that achieved the threshold improve. Also, any direct seats won by independent candidates are subtracted from the parliamentary total used to apportion list seats.
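The top-up step can be sketched as a simplified model. This is not any particular country's law: entitlements here use a simple largest-remainder rule, thresholds are ignored, and all vote and seat figures are hypothetical:

```python
def topup(list_votes: dict[str, int], district_wins: dict[str, int],
          house_size: int) -> dict[str, int]:
    """Return the list ("top-up") seats each party receives under MMP."""
    total = sum(list_votes.values())
    # Proportional entitlement by largest remainder (Hare quota).
    quotas = {p: v * house_size / total for p, v in list_votes.items()}
    ent = {p: int(q) for p, q in quotas.items()}
    for p in sorted(quotas, key=lambda p: quotas[p] - ent[p],
                    reverse=True)[:house_size - sum(ent.values())]:
        ent[p] += 1
    # List seats fill the gap between district wins and the entitlement;
    # a party winning more districts than its entitlement (overhang)
    # simply gets no list seats in this simplified model.
    return {p: max(ent[p] - district_wins.get(p, 0), 0) for p in ent}

print(topup({"A": 450, "B": 350, "C": 200}, {"A": 40, "B": 15, "C": 5}, 100))
# -> {'A': 5, 'B': 20, 'C': 15}
```

Party A's strong district performance is compensated for: despite winning 40 of 60 district seats, its final total (45 of 100) matches its 45% list vote.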
The system has the potential to produce proportional results, but proportionality can be compromised if the ratio of list to district seats is too low; it may then not be possible to completely compensate for district seat disproportionality. Another factor is how overhang seats are handled: district seats that a party wins in excess of the number due to it under the list vote. To achieve proportionality, other parties require "balance seats", increasing the size of parliament by twice the number of overhang seats, but this is not always done. Until recently, Germany increased the size of parliament by the number of overhang seats but did not use the increased size for apportioning list seats. This was changed for the 2013 national election after the constitutional court rejected the previous law; not compensating for overhang seats had resulted in a negative vote weight effect. Lesotho, Scotland and Wales do not increase the size of parliament at all, and, in 2012, a New Zealand parliamentary commission also proposed abandoning compensation for overhang seats, so fixing the size of parliament. At the same time, it would abolish the single-seat threshold (any such seats would then be overhang seats and would otherwise have increased the size of parliament further) and reduce the electoral threshold from 5% to 4%. Proportionality would not suffer.
Dual member proportional representation (DMP) is a single-vote system that elects two representatives in every district. The first seat in each district is awarded to the candidate who wins a plurality of the votes, similar to first-past-the-post voting. The remaining seats are awarded in a compensatory manner to achieve proportionality across a larger region. DMP employs a formula similar to the "best near-winner" variant of MMP used in the German state of Baden-Württemberg. In Baden-Württemberg, compensatory seats are awarded to candidates who receive high levels of support at the district level compared with other candidates of the same party. DMP differs in that at most one candidate per district is permitted to obtain a compensatory seat. If multiple candidates contesting the same district are slated to receive one of their parties' compensatory seats, the candidate with the highest vote share is elected and the others are eliminated. DMP is similar to STV in that all elected representatives, including those who receive compensatory seats, serve their local districts. Invented in 2013 in the Canadian province of Alberta, DMP received attention on Prince Edward Island where it appeared on a 2016 plebiscite as a potential replacement for FPTP, but was eliminated on the third round.
Biproportional apportionment applies a mathematical method (iterative proportional fitting) to modify an election result to achieve proportionality. It was proposed for elections by the mathematician Michel Balinski in 1989, and first used by the city of Zurich for its council elections in February 2006, in a modified form called "new Zurich apportionment" ("Neue Zürcher Zuteilungsverfahren"). Zurich had had to modify its party-list PR system after the Swiss Federal Court ruled that its smallest wards, as a result of population changes over many years, unconstitutionally disadvantaged smaller political parties. With biproportional apportionment, the use of open party lists is unchanged, but the way winning candidates are determined has changed. The proportion of seats due to each party is calculated according to its overall citywide vote, and then the district winners are adjusted to conform to these proportions. This means that some candidates, who would otherwise have been successful, can be denied seats in favor of initially unsuccessful candidates, in order to improve the relative proportions of their respective parties overall. This peculiarity is accepted by the Zurich electorate because the resulting city council is proportional and all votes, regardless of district magnitude, now have equal weight. The system has since been adopted by other Swiss cities and cantons.
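The core of the method, iterative proportional fitting, alternately rescales a districts-by-parties matrix until row sums match district magnitudes and column sums match each party's citywide entitlement. The sketch below uses made-up numbers and keeps fractional values; the actual biproportional method additionally rounds entries to whole seats with a divisor rule, which is omitted here.

```python
# Illustrative sketch of iterative proportional fitting (IPF):
# scale a vote matrix until row sums match district magnitudes and
# column sums match party seat entitlements. Hypothetical data only;
# the real biproportional method also rounds to integer seats.
def ipf(matrix, row_targets, col_targets, iters=100):
    m = [row[:] for row in matrix]
    for _ in range(iters):
        # Scale each row (district) to its seat total.
        for i, target in enumerate(row_targets):
            s = sum(m[i])
            m[i] = [x * target / s for x in m[i]]
        # Scale each column (party) to its citywide entitlement.
        for j, target in enumerate(col_targets):
            s = sum(m[i][j] for i in range(len(m)))
            for i in range(len(m)):
                m[i][j] *= target / s
    return m
```

With positive entries the alternation converges quickly; the result respects both the district magnitudes and the party proportions simultaneously.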
Balinski has proposed another variant called fair majority voting (FMV) to replace single-winner plurality/majoritarian electoral systems, in particular the system used for the US House of Representatives. FMV introduces proportionality without changing the method of voting, the number of seats, or the (possibly gerrymandered) district boundaries. Seats would be apportioned to parties in a proportional manner at the state level. In a related proposal for the UK parliament, whose elections are contested by many more parties, the authors note that parameters can be tuned to adopt any degree of proportionality deemed acceptable to the electorate. In order to elect smaller parties, a number of constituencies would be awarded to candidates placed fourth or even fifth in the constituency (unlikely to be acceptable to the electorate, the authors concede), but this effect could be substantially reduced by incorporating a third, regional, apportionment tier, or by specifying minimum thresholds.
Reweighted range voting (RRV) is a multi-winner voting system similar to STV in that voters can express support for multiple candidates, but different in that candidates are graded instead of ranked. That is, a voter assigns a score to each candidate. The higher a candidate's scores, the greater the chance they will be among the winners.
Similar to STV, the vote counting procedure occurs in rounds. The first round of RRV is identical to range voting. All ballots are added with equal weight, and the candidate with the highest overall score is elected. In all subsequent rounds, ballots that support candidates who have already been elected are added with a reduced weight. Thus voters who support none of the winners in the early rounds are increasingly likely to elect one of their preferred candidates in a later round. The procedure has been shown to yield proportional outcomes if voters are loyal to distinct groups of candidates (e.g. political parties).
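The round-by-round reweighting described above can be sketched as follows. This is a hedged illustration with hypothetical ballots, using the weight formula 1/(1 + spent/max_score) from Warren Smith's RRV proposal; it is not a canonical implementation.

```python
# Sketch of reweighted range voting (hypothetical ballots).
# Each ballot maps candidate -> score (0..max_score).
def rrv(ballots, seats, max_score=5):
    winners = []
    candidates = set().union(*ballots)
    for _ in range(seats):
        totals = {}
        for c in candidates - set(winners):
            total = 0.0
            for b in ballots:
                # A ballot's weight shrinks with the score it already
                # "spent" on candidates elected in earlier rounds.
                spent = sum(b.get(w, 0) for w in winners)
                weight = 1.0 / (1.0 + spent / max_score)
                total += weight * b.get(c, 0)
            totals[c] = total
        winners.append(max(totals, key=totals.get))
    return winners

# Three voters favour A and B; two favour C. With two seats, the
# C bloc's undiminished weight earns it the second seat.
print(rrv([{"A": 5, "B": 4}] * 3 + [{"C": 5}] * 2, 2))  # ['A', 'C']
```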
RRV was used for the nominations in the Visual Effects category of the Academy Awards from 2013 through 2017.
Systems can be devised that aim at proportional representation but are based on approval votes on individual candidates (not parties). Such is the idea of Proportional approval voting (PAV).
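Under PAV, a committee's score gives each voter 1 + 1/2 + ... + 1/k points when they approve k of its members, and the highest-scoring committee wins. A brute-force sketch with hypothetical ballots (exhaustive search is exponential, which is why the sequential variants below exist):

```python
from itertools import combinations

# Brute-force PAV sketch (hypothetical ballots): a voter who approves
# k elected candidates contributes the harmonic sum 1 + 1/2 + ... + 1/k.
def pav(ballots, seats):
    candidates = sorted(set().union(*ballots))

    def score(committee):
        members = set(committee)
        return sum(sum(1 / i for i in range(1, len(b & members) + 1))
                   for b in ballots)

    return set(max(combinations(candidates, seats), key=score))

# Six voters approve {A, B}, one approves {A}, four approve {C}:
# the diminishing harmonic weights make {A, C} beat {A, B}.
print(pav([{"A", "B"}] * 6 + [{"A"}] + [{"C"}] * 4, 2))
```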
When there are a lot of seats to be filled, as in a legislature, counting ballots under PAV may not be feasible, so sequential variants have been proposed, such as Sequential proportional approval voting (SPAV). This method is similar to reweighted range voting in that several winners are elected using a multi-round counting procedure in which ballots supporting already elected candidates are given reduced weights. Under SPAV, however, a voter can only choose to approve or disapprove of each candidate, as in approval voting. SPAV was used briefly in Sweden during the early 1900s.
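The sequential variant elects one winner per round, with each ballot's weight falling to 1/(k+1) once it approves k already-elected candidates. A minimal sketch with hypothetical ballots:

```python
# Sketch of sequential proportional approval voting (SPAV).
# Ballots are sets of approved candidates (hypothetical data).
def spav(ballots, seats):
    winners = set()
    elected_in_order = []
    candidates = set().union(*ballots)
    for _ in range(seats):
        # A ballot already approving k winners counts with weight 1/(k+1).
        tallies = {c: sum(1 / (1 + len(b & winners))
                          for b in ballots if c in b)
                   for c in candidates - winners}
        best = max(tallies, key=tallies.get)
        winners.add(best)
        elected_in_order.append(best)
    return elected_in_order

print(spav([{"A", "B"}] * 6 + [{"A"}] + [{"C"}] * 4, 2))  # ['A', 'C']
```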
In asset voting, the voters vote for candidates and then the candidates negotiate amongst each other and reallocate votes amongst themselves. Asset voting was proposed by Lewis Carroll in 1884 and has been more recently independently rediscovered and extended by Warren D. Smith and Forest Simmons.
Similar to Majority Judgment voting, which elects single winners, Evaluative Proportional Representation (EPR) elects all the members of a legislative body; both systems remove the qualitative wasting of votes. Each citizen grades the fitness for office of as many of the candidates as they wish as Excellent (ideal), Very Good, Good, Acceptable, Poor, or Reject (entirely unsuitable); multiple candidates may be given the same grade by a voter. Using EPR, each citizen elects their representative at-large for a city council; for a large and diverse state legislature, each citizen chooses to vote through any one of the districts or official electoral associations in the country, grading any number of candidates in the whole country. Each elected representative has a different voting power (a different number of weighted votes) in the legislative body, equal to the total number of votes given exclusively to that member by all citizens. Each member's weighted vote results from receiving one of the following from each voter: their highest grade, highest remaining grade, or proxy vote. No citizen's vote is "wasted". Unlike the other proportional representation systems, each EPR voter, and each self-identifying minority or majority, is quantitatively represented with exact proportionality. Also, like Majority Judgment, EPR reduces by almost half both the incentives and possibilities for voters to use tactical voting.
One of the earliest proposals of proportionality in an assembly was by John Adams in his influential pamphlet "Thoughts on Government", written in 1776 during the American Revolution:
Mirabeau, speaking to the Assembly of Provence on January 30, 1789, was also an early proponent of a proportionally representative assembly:
In February 1793, the Marquis de Condorcet led the drafting of the Girondist constitution which proposed a limited voting scheme with proportional aspects. Before that could be voted on, the Montagnards took over the National Convention and produced their own constitution. On June 24, Saint-Just proposed the single non-transferable vote, which can be proportional, for national elections but the constitution was passed on the same day specifying first-past-the-post voting.
Already in 1787, James Wilson, like Adams a US Founding Father, understood the importance of multiple-member districts: "Bad elections proceed from the smallness of the districts which give an opportunity to bad men to intrigue themselves into office", and again, in 1791, in his Lectures on Law: "It may, I believe, be assumed as a general maxim, of no small importance in democratical governments, that the more extensive the district of election is, the choice will be the more wise and enlightened". The 1790 Constitution of Pennsylvania specified multiple-member districts for the state Senate and required their boundaries to follow county lines.
STV, or more precisely an election method in which voters have one transferable vote, was first invented in 1819 by an English schoolmaster, Thomas Wright Hill, who devised a "plan of election" for the committee of the Society for Literary and Scientific Improvement in Birmingham that used transfers of surplus votes not only from winners but also from losers, a refinement that both Andræ and Hare initially omitted. But the procedure was unsuitable for a public election and was not publicised. In 1839, Hill's son, Rowland Hill, recommended the concept for public elections in Adelaide, and a simple process was used in which voters formed as many groups as there were representatives to be elected, each group electing one representative.
The first practical PR election method, a list method, was conceived by Thomas Gilpin, a retired paper-mill owner, in a paper he read to the American Philosophical Society in Philadelphia in 1844: "On the representation of minorities of electors to act with the majority in elected assemblies". But the paper appears not to have excited any interest.
A practical election using a single transferable vote was devised in Denmark by Carl Andræ, a mathematician, and first used there in 1855, making it the oldest PR system, but the system never really spread. It was re-invented (apparently independently) in the UK in 1857 by Thomas Hare, a London barrister, in his pamphlet "The Machinery of Representation" and expanded on in his 1859 "Treatise on the Election of Representatives". The scheme was enthusiastically taken up by John Stuart Mill, ensuring international interest. The 1865 edition of the book included the transfer of preferences from dropped candidates and the STV method was essentially complete. Mill proposed it to the House of Commons in 1867, but the British parliament rejected it. The name evolved from "Mr. Hare's scheme" to "proportional representation", then "proportional representation with the single transferable vote", and finally, by the end of the 19th century, to "the single transferable vote".
In Australia, the political activist Catherine Helen Spence became an enthusiast of STV and an author on the subject. Through her influence and the efforts of the Tasmanian politician Andrew Inglis Clark, Tasmania became an early pioneer of the system, electing the world's first legislators through STV in 1896, prior to its federation into Australia.
A party list proportional representation system was devised and described in 1878 by Victor D'Hondt in Belgium, which became the first country to adopt list PR in 1900 for its national parliament. D'Hondt's method of seat allocation, the D'Hondt method, is still widely used. Some Swiss cantons (beginning with Ticino in 1890) used the system before Belgium. Victor Considerant, a utopian socialist, devised a similar system in an 1892 book. Many European countries adopted similar systems during or after World War I. List PR was favoured on the Continent because the use of lists in elections, the scrutin de liste, was already widespread. STV was preferred in the English-speaking world because its tradition was the election of individuals.
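D'Hondt's allocation is a highest-averages rule: each seat in turn goes to the party with the largest quotient votes/(seats already won + 1). A short sketch with hypothetical vote totals:

```python
# Sketch of the D'Hondt highest-averages method (hypothetical votes).
def dhondt(votes, seats):
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # Award the next seat to the party with the highest
        # quotient v / (s + 1), where s is its seats so far.
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

print(dhondt({"A": 100000, "B": 80000, "C": 30000}, 8))
# {'A': 4, 'B': 3, 'C': 1}
```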
In the UK, the 1917 Speaker's Conference recommended STV for all multi-seat Westminster constituencies, but it was only applied to university constituencies, lasting from 1918 until 1950 when those constituencies were abolished.
In Ireland, STV was used in 1918 in the University of Dublin constituency, and was introduced for devolved elections in 1921.
STV is currently used for two national lower houses of parliament: Ireland's, since independence (as the Irish Free State) in 1922, and Malta's, since 1921, long before independence in 1964.
In Ireland, two attempts have been made by Fianna Fáil governments to abolish STV and replace it with the first-past-the-post plurality system. Both attempts were rejected by voters in referendums held in 1959 and again in 1968.
STV is also used for all other elections in Ireland, including that of the presidency.
It is also used for the Northern Irish assembly and European and local authorities, Scottish local authorities, some New Zealand and Australian local authorities, the Tasmanian (since 1907) and Australian Capital Territory assemblies, where the method is known as "Hare-Clark", and the city council in Cambridge, Massachusetts, (since 1941).
PR is used by a majority of the world's 33 most robust democracies with populations of at least two million people; only six use plurality or a majoritarian system (runoff or instant runoff) for elections to the legislative assembly, four use parallel systems, and 23 use PR. PR dominates Europe, including Germany and most of northern and eastern Europe; it is also used for European Parliament elections. France adopted PR at the end of World War II, but discarded it in 1958; it was used for parliament elections in 1986. Switzerland has the most widespread use of proportional representation, which is the system used to elect not only national legislatures and local councils, but also all local executives. PR is less common in the English-speaking world; New Zealand adopted MMP in 1993, but the UK, Canada, India and Australia all use plurality/majoritarian systems for legislative elections.
In Canada, STV was used by the cities of Edmonton and Calgary in Alberta from 1926 to 1955, and by Winnipeg in Manitoba from 1920 to 1953. In both provinces the alternative vote (AV) was used in rural areas. First-past-the-post was re-adopted in Alberta by the dominant party for reasons of political advantage; in Manitoba, a principal reason was the underrepresentation of Winnipeg in the provincial legislature.
STV has some history in the United States. Between 1915 and 1962, twenty-four cities used the system for at least one election. In many cities, minority parties and other groups used STV to break up single-party monopolies on elective office. One of the most famous cases is New York City, where a coalition of Republicans and others imposed STV in 1936 as part of an attack on the Tammany Hall machine. Another famous case is Cincinnati, Ohio, where, in 1924, Democrats and Progressive-wing Republicans imposed a council-manager charter with STV elections to dislodge the Republican machine of Rudolph K. Hynicka. Although Cincinnati's council-manager system survives, Republicans and other disaffected groups replaced STV with plurality-at-large voting in 1957. From 1870 to 1980, Illinois used a semi-proportional cumulative voting system to elect its House of Representatives; each district across the state elected both Republicans and Democrats year after year. Cambridge, Massachusetts (STV) and Peoria, Illinois (cumulative voting) continue to use PR. San Francisco had citywide elections in which people would cast votes for five or six candidates simultaneously, delivering some of the benefits of proportional representation.
Many political scientists argue that PR was adopted by parties on the right as a strategy to survive amid suffrage expansion, democratization and the rise of workers' parties. According to Stein Rokkan in a seminal 1970 study, parties on the right opted to adopt PR as a way to survive as competitive parties in situations when the parties on the right were not united enough to exist under majoritarian systems. This argument was formalized and supported by Carles Boix in a 1999 study. Amel Ahmed notes that prior to the adoption of PR, many electoral systems were based on majority or plurality rule, and that these systems risked eradicating parties on the right in areas where the working class was large in numbers. Ahmed therefore argues that parties on the right adopted PR as a way to ensure that they would survive as potent political forces amid suffrage expansion. Other scholars have argued that the choice to adopt PR was also due to a demand by parties on the left to ensure a foothold in politics, as well as to encourage a consensual system that would help the left realize its preferred economic policies.
The table below lists the countries that use a PR electoral system to fill a nationwide elected body. Detailed information on electoral systems applying to the first chamber of the legislature is maintained by the ACE Electoral Knowledge Network. (See also the complete list of electoral systems by country.)