to support and extend the wing. Near the body, the humerus or upper arm bone is short but powerfully built. It sports a large deltopectoral crest, to which the major flight muscles are attached. Despite the considerable forces exerted on it, the humerus is hollow or pneumatised inside, reinforced by bone struts. The long bones of the lower arm, the ulna and radius, are much longer than the humerus. They were probably incapable of pronation. A bone unique to pterosaurs, known as the pteroid, connected to the wrist and helped to support the forward membrane (the propatagium) between the wrist and shoulder. Evidence of webbing between the three free fingers of the pterosaur forelimb suggests that this forward membrane may have been more extensive than the simple pteroid-to-shoulder connection traditionally depicted in life restorations. The position of the pteroid bone itself has been controversial. Some scientists, notably Matthew Wilkinson, have argued that the pteroid pointed forward, extending the forward membrane and allowing it to function as an adjustable flap. This view was contradicted in a 2007 paper by Chris Bennett, who showed that the pteroid did not articulate as previously thought and could not have pointed forward, but rather was directed inward toward the body as traditionally interpreted. Specimens of Changchengopterus pani and Darwinopterus linglongtaensis show the pteroid in articulation with the proximal syncarpal, suggesting that the pteroid articulated with the 'saddle' of the radiale (proximal syncarpal) and that both the pteroid and preaxial carpal were migrated centralia. The pterosaur wrist consists of two inner (proximal, at the side of the long bones of the arm) and four outer (distal, at the side of the hand) carpals (wrist bones), excluding the pteroid bone, which may itself be a modified distal carpal. 
The proximal carpals are fused together into a "syncarpal" in mature specimens, while three of the distal carpals fuse to form a distal syncarpal. The remaining distal carpal, referred to here as the medial carpal, but which has also been termed the distal lateral or pre-axial carpal, articulates on a vertically elongate biconvex facet on the anterior surface of the distal syncarpal. The medial carpal bears a deep concave fovea that opens anteriorly, ventrally and somewhat medially, within which the pteroid articulates, according to Wilkinson. In derived pterodactyloids like pteranodontians and azhdarchoids, metacarpals I–III are small and do not connect to the carpus, instead hanging in contact with the fourth metacarpal. In these derived species, the fourth metacarpal has been enormously elongated, typically equalling or exceeding the length of the long bones of the lower arm. The fifth metacarpal has been lost. In all species, the first to third fingers are much smaller than the fourth, the "wingfinger", and contain two, three and four phalanges respectively. The smaller fingers are clawed, with the ungual size varying among species. In nyctosaurids the forelimb digits besides the wingfinger have been lost altogether. The wingfinger accounts for about half or more of the total wing length. It normally consists of four phalanges. Their relative lengths tend to vary among species, which has often been used to distinguish related forms. The fourth phalanx is usually the shortest. It lacks a claw and has been lost completely by nyctosaurids. It is curved backwards, resulting in a rounded wing tip, which reduces induced drag. The wingfinger is also bent somewhat downwards. When standing, pterosaurs probably rested on their metacarpals, with the outer wing folded backwards. In this position, the "anterior" sides of the metacarpals were rotated to the rear. This would point the smaller fingers obliquely backwards.
According to Bennett, this would imply that the wingfinger, able to describe the largest arc of any wing element, up to 175°, was not folded by flexion but by an extreme extension. The wing was automatically folded when the elbow was bowed. A laser-stimulated fluorescence scan on Pterodactylus also identified a membranous "fairing" (the area joining the wing to the body at the neck), as opposed to the feathered or fur-covered "fairing" seen in birds and bats respectively.

Pelvis

The pelvis of pterosaurs was of moderate size compared to the body as a whole. Often the three pelvic bones were fused. The ilium was long and low, its front and rear blades projecting horizontally beyond the edges of the lower pelvic bones. Despite this length, the rod-like form of these processes indicates that the hindlimb muscles attached to them were limited in strength. The pubic bone, narrow in side view, fused with the broad ischium into an ischiopubic blade. Sometimes, the blades of both sides were also fused, closing the pelvis from below and forming the pelvic canal. The hip joint was not perforated and allowed considerable mobility to the leg. It was directed obliquely upwards, preventing a perfectly vertical position of the leg. The front of the pubic bones articulated with a unique structure, the paired prepubic bones. Together these formed a cusp covering the rear belly, between the pelvis and the belly ribs. The vertical mobility of this element suggests a function in breathing, compensating for the relative rigidity of the chest cavity.

Hindlimbs

The hindlimbs of pterosaurs were strongly built, yet relative to their wingspans smaller than those of birds. They were long in comparison to the torso length. The thighbone was rather straight, with the head making only a small angle with the shaft. This implies that the legs were not held vertically below the body but were somewhat sprawling.
The shinbone was often fused with the upper ankle bones into a tibiotarsus that was longer than the thighbone. It could attain a vertical position when walking. The calf bone tended to be slender, especially at its lower end, which in advanced forms did not reach the ankle; sometimes its total length was reduced to a third. Typically it was fused to the shinbone. The ankle was a simple, "mesotarsal", hinge. The rather long and slender metatarsus was always splayed to some degree. The foot was plantigrade, meaning that during the walking cycle the sole of the metatarsus was pressed onto the soil. There was a clear difference between early pterosaurs and advanced species regarding the form of the fifth digit. Originally, the fifth metatarsal was robust and not very shortened. It was connected to the ankle in a higher position than the other metatarsals. It bore a long, often curved, mobile clawless fifth toe consisting of two phalanges. The function of this element has been enigmatic. It used to be thought that the animals slept upside-down like bats, hanging from branches and using the fifth toes as hooks. Another hypothesis held that they stretched the brachiopatagia, but in articulated fossils the fifth digits are always flexed towards the tail. Later it became popular to assume that these toes extended a uropatagium or cruropatagium between them. As the fifth toes were on the outside of the feet, such a configuration would only have been possible if the feet rotated their fronts outwards in flight. Such a rotation could be caused by an abduction of the thighbone, meaning that the legs would be spread. This would also turn the feet into a vertical position. They could then act as rudders to control yaw. Some specimens show membranes between the toes, allowing them to function as flight control surfaces. The uropatagium or cruropatagium would control pitch. When walking, the toes could flex upwards to lift the membrane from the ground.
In Pterodactyloidea, the fifth metatarsal was much reduced and the fifth toe, if present, little more than a stub. This suggests that their membranes were split, increasing flight manoeuvrability. The first to fourth toes were long. They had two, three, four and five phalanges respectively. Often the third toe was longest; sometimes the fourth. Flat joints indicate a limited mobility. These toes were clawed, but the claws were smaller than the hand claws.

Soft tissues

The rare conditions that allowed for the fossilisation of pterosaur remains sometimes also preserved soft tissues. Modern synchrotron or ultraviolet light photography has revealed many traces not visible to the naked eye. These are often imprecisely called "impressions", but mostly consist of petrifications, natural casts and transformations of the original material. They may include horn crests, beaks or claw sheaths, as well as the various flight membranes. Exceptionally, muscles were preserved. Skin patches show small round non-overlapping scales on the soles of the feet, the ankles and the ends of the metatarsals. They covered pads cushioning the impact of walking. Scales are unknown from other parts of the body.

Pycnofibers

Most or all pterosaurs had hair-like filaments known as pycnofibers on the head and torso. The term "pycnofiber", meaning "dense filament", was coined by palaeontologist Alexander Kellner and colleagues in 2009. Pycnofibers were unique structures similar to, but not homologous (sharing a common origin) with, mammalian hair, an example of convergent evolution. A fuzzy integument was first reported from a specimen of Scaphognathus crassirostris in 1831 by Georg August Goldfuss, but had been widely doubted. Since the 1990s, pterosaur finds and histological and ultraviolet examination of pterosaur specimens have provided incontrovertible proof: pterosaurs had pycnofiber coats.
Sordes pilosus (which translates as "hairy demon") and Jeholopterus ningchengensis show pycnofibers on the head and body. The presence of pycnofibers strongly indicates that pterosaurs were endothermic (warm-blooded). They aided thermoregulation, as is common in warm-blooded animals that need insulation to prevent excessive heat loss. Pycnofibers were flexible, short filaments, about five to seven millimetres long and rather simple in structure, with a hollow central canal. Pterosaur pelts might have been comparable in density to those of many Mesozoic mammals.

Relation with feathers

Pterosaur filaments could share a common origin with feathers, as speculated in 2002 by Czerkas and Ji. In 2009, Kellner concluded that pycnofibers were structured similarly to theropod proto-feathers. Others were unconvinced, considering the difference from the "quills" found on many of the bird-like maniraptoran specimens too fundamental. A 2018 study of the remains of two small Jurassic-age pterosaurs from Inner Mongolia, China, found that pterosaurs had a wide array of pycnofiber shapes and structures, as opposed to the homogeneous structures that had generally been assumed to cover them. Some of these had frayed ends, very similar in structure to four different feather types known from birds or other dinosaurs but almost never known from pterosaurs prior to the study, suggesting homology.
The various forms of filament structure present on the anurognathids in the 2018 study would also require a form of decomposition that would cause the different 'filament' forms seen. They therefore conclude that the most parsimonious interpretation of the structures is that they are filamentous proto-feathers. But Liliana D'Alba points out that the description of the preserved integumentary structures on the two anurognathid specimens is still based upon gross morphology. She also points out that Pterorhynchus was described as having feathers to support the claim that feathers had a common origin among ornithodirans, but this was argued against by several authors. The only way to determine whether these structures are homologous to feathers is to examine them with a scanning electron microscope.

History of discovery

First finds

Pterosaur fossils are very rare, due to their light bone construction. Complete skeletons can generally only be found in geological layers with exceptional preservation conditions, the so-called Lagerstätten. The pieces from one such Lagerstätte, the Late Jurassic Solnhofen Limestone in Bavaria, became much sought after by rich collectors. In 1784, the Italian naturalist Cosimo Alessandro Collini was the first scientist in history to describe a pterosaur fossil. At that time the concepts of evolution and extinction were only imperfectly developed. The bizarre build of the pterosaur was therefore shocking, as it could not clearly be assigned to any existing animal group. The discovery of pterosaurs would thus play an important role in the progress of modern paleontology and geology. If such creatures were still alive, only the sea seemed a credible habitat, and Collini suggested it might be a swimming animal that used its long front limbs as paddles. A few scientists continued to support the aquatic interpretation even until 1830, when the German zoologist Johann Georg Wagler suggested that Pterodactylus used its wings as flippers and was affiliated with Ichthyosauria and Plesiosauria.
In 1800, Johann Hermann first suggested that it represented a flying creature in a letter to Georges Cuvier. Cuvier agreed in 1801, understanding it was an extinct flying reptile. In 1809, he coined the name Ptéro-Dactyle, "wing-finger". In 1815, this was Latinised to Pterodactylus. At first most species were assigned to this genus, and ultimately "pterodactyl" was popularly and incorrectly applied to all members of Pterosauria. Today, paleontologists limit the term to the genus Pterodactylus or members of the Pterodactyloidea. In 1812 and 1817, Samuel Thomas von Soemmerring redescribed the original specimen and an additional one. He saw them as affiliated with birds and bats. Although he was mistaken in this, his "bat model" would be very influential during the 19th century. In 1843, Edward Newman thought pterosaurs were flying marsupials. As the bat model correctly depicted pterosaurs as furred and warm-blooded, it better approached the true physiology of pterosaurs than Cuvier's "reptile model". In 1834, Johann Jakob Kaup coined the term Pterosauria.

Expanding research

In 1828, Mary Anning found in England the first pterosaur genus outside Germany, named Dimorphodon by Richard Owen, also the first non-pterodactyloid pterosaur known. Later in the century, the Early Cretaceous Cambridge Greensand produced thousands of pterosaur fossils that, however, were of poor quality, consisting mostly of strongly eroded fragments. Nevertheless, numerous genera and species would be named based on these. Many were described by Harry Govier Seeley, at the time the main English expert on the subject, who also wrote the first pterosaur book, Ornithosauria, and in 1901 the first popular book, Dragons of the Air. Seeley thought that pterosaurs were warm-blooded and dynamic creatures, closely related to birds. Earlier, the evolutionist St. George Jackson Mivart had suggested pterosaurs were the direct ancestors of birds.
Owen opposed the views of both men, seeing pterosaurs as cold-blooded "true" reptiles. In the US, Othniel Charles Marsh in 1870 discovered Pteranodon in the Niobrara Chalk, then the largest known pterosaur, the first toothless one and the first from America. These layers too yielded thousands of fossils, including relatively complete skeletons that were three-dimensionally preserved instead of being strongly compressed as with the Solnhofen specimens. This led to a much better understanding of many anatomical details, such as the hollow nature of the bones. Meanwhile, finds from the Solnhofen had continued, accounting for the majority of complete, high-quality specimens discovered. They allowed the identification of most new basal taxa, such as Rhamphorhynchus, Scaphognathus and Dorygnathus. This material gave birth to a German school of pterosaur research, which saw flying reptiles as the warm-blooded, furry and active Mesozoic counterparts of modern bats and birds. In 1882, Marsh and Karl Alfred Zittel published studies about the wing membranes of specimens of Rhamphorhynchus. German studies continued well into the 1930s, describing new species such as Anurognathus. In 1927, Ferdinand Broili discovered hair follicles in pterosaur skin, and paleoneurologist Tilly Edinger determined that the brains of pterosaurs more resembled those of birds than those of modern cold-blooded reptiles. In contrast, English and American paleontologists by the middle of the twentieth century had largely lost interest in pterosaurs. They saw them as failed evolutionary experiments, cold-blooded and scaly, that could hardly fly, the larger species only able to glide, forced to climb trees or throw themselves from cliffs to achieve take-off. In 1914, pterosaur aerodynamics were quantitatively analysed for the first time, by Ernest Hanbury Hankin and David Meredith Seares Watson, but they interpreted Pteranodon as a pure glider. Little research was done on the group during the 1940s and 1950s.
Pterosaur renaissance

The situation for dinosaurs was comparable. From the 1960s onwards, a dinosaur renaissance took place, a quick increase in the number of studies and critical ideas, influenced by the discovery of additional fossils of Deinonychus, whose spectacular traits refuted what had become entrenched orthodoxy. Likewise, in 1970 the description of the furry pterosaur Sordes began what Robert Bakker named a renaissance of pterosaurs. Kevin Padian especially propagated the new views, publishing a series of studies depicting pterosaurs as warm-blooded, active and running animals. This coincided with a revival of the German school through the work of Peter Wellnhofer, who in the 1970s laid the foundations of modern pterosaur science. In 1978, he published the first pterosaur textbook, the Handbuch der Paläoherpetologie, Teil 19: Pterosauria, and in 1991 the second ever popular science pterosaur book, the Encyclopedia of Pterosaurs. This development accelerated through the exploitation of two new Lagerstätten. During the 1970s, the Early Cretaceous Santana Formation in Brazil began to produce chalk nodules that, though often limited in size and the completeness of the fossils they contained, perfectly preserved three-dimensional pterosaur skeletal parts. German and Dutch institutes bought such nodules from fossil poachers and prepared them in Europe, allowing their scientists to describe many new species and revealing a whole new fauna. Soon, Brazilian researchers, among them Alexander Kellner, intercepted the trade and named even more species. Even more productive was the Early Cretaceous Chinese Jehol Biota of Liaoning, which since the 1990s has brought forth hundreds of exquisitely preserved two-dimensional fossils, often showing soft tissue remains. Chinese researchers such as Lü Junchang have again named many new taxa. As discoveries also increased in other parts of the world, a sudden surge in the total of named genera took place.
By 2009, when they had increased to about ninety, this growth showed no sign of levelling off. In 2013, M.P. Witton indicated that the number of discovered pterosaur species had risen to 130. Over ninety percent of known taxa have been named during the "renaissance". Many of these belonged to groups whose existence had previously been unknown. Advances in computing power made it possible to determine their complex relationships through the quantitative method of cladistics. New and old fossils yielded much more information when subjected to modern ultraviolet light or X-ray photography, or CT scans. Insights from other fields of biology were applied to the data obtained. All this resulted in substantial progress in pterosaur research, rendering older accounts in popular science books completely outdated. In 2017, a fossil from a 170-million-year-old pterosaur was discovered on the Isle of Skye in Scotland. The National Museum of Scotland claims that it is the largest of its kind ever discovered from the Jurassic period, and it has been described as the world's best-preserved skeleton of a pterosaur.

Evolution and extinction

Origins

Because pterosaur anatomy has been so heavily modified for flight, and immediate transitional fossil predecessors have not so far been described, the ancestry of pterosaurs is not fully understood. The oldest known pterosaurs were already fully adapted to a flying lifestyle. Since Seeley, it was recognised that pterosaurs were likely to have had their origin in the "archosaurs", what today would be called the Archosauromorpha. In the 1980s, early cladistic analyses found that they were avemetatarsalians (archosaurs closer to dinosaurs than to crocodilians). As this would also make them rather close relatives of the dinosaurs, these results were seen by Kevin Padian as confirming his interpretation of pterosaurs as bipedal warm-blooded animals.
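The cladistic analyses discussed here score competing family trees by the minimum number of character-state changes each tree requires, preferring the shortest tree. As a minimal sketch of that scoring step (Fitch parsimony), consider the following Python example; the taxa, the binary character, and both trees are hypothetical illustrations, not data from any of the studies mentioned.

```python
# Minimal sketch of Fitch parsimony scoring, the core operation behind
# cladistic analyses. All taxa, characters and trees below are
# hypothetical examples for illustration only.

def fitch_score(tree, states):
    """Count the minimum number of state changes one character needs on a tree.

    tree: nested tuples of taxon names, e.g. (("A", "B"), "C")
    states: dict mapping taxon name -> character state
    Returns (possible root states, number of changes).
    """
    if isinstance(tree, str):                     # leaf: its observed state
        return {states[tree]}, 0
    left_set, left_cost = fitch_score(tree[0], states)
    right_set, right_cost = fitch_score(tree[1], states)
    common = left_set & right_set
    if common:                                    # children can agree: no change here
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1   # one change needed

# A hypothetical binary character (e.g. "membranous wing present") scored
# on two candidate trees; the tree requiring fewer changes is preferred.
states = {"Pterosaur": 1, "Dinosaur": 1, "Crocodilian": 0, "Lizard": 0}
tree_a = ((("Pterosaur", "Dinosaur"), "Crocodilian"), "Lizard")
tree_b = ((("Pterosaur", "Crocodilian"), "Dinosaur"), "Lizard")
print(fitch_score(tree_a, states)[1])  # 1 change
print(fitch_score(tree_b, states)[1])  # 2 changes
```

Real analyses score hundreds of characters across many taxa and search a vast space of possible trees, but each candidate tree's length is computed essentially in this way.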
Because these early analyses were based on a limited number of taxa and characters, their results were inherently uncertain. Several influential researchers who rejected Padian's conclusions offered alternative hypotheses. David Unwin proposed an ancestry among the basal Archosauromorpha, specifically long-necked forms ("protorosaurs") such as tanystropheids. A placement among basal archosauriforms like Euparkeria was also suggested. Some basal archosauromorphs seem at first glance to be good candidates for close pterosaur relatives due to their long-limbed anatomy; one example is Sharovipteryx, a "protorosaur" with skin membranes on its hindlimbs likely used for gliding. A 1999 study by Michael Benton found that pterosaurs were avemetatarsalians closely related to Scleromochlus, and named the group Ornithodira to encompass pterosaurs and dinosaurs. Two researchers, S. Christopher Bennett in 1996, and paleoartist David Peters in 2000, published analyses finding pterosaurs to be protorosaurs or closely related to them. However, Peters gathered novel anatomical data using an unverified technique called "Digital Graphic Segregation" (DGS), which involves digitally tracing over images of pterosaur fossils using photo editing software. Bennett only recovered pterosaurs as close relatives of the protorosaurs after removing characteristics of the hindlimb from his analysis, to test the possibility of locomotion-based convergent evolution between pterosaurs and dinosaurs. A 2007 reply by Dave Hone and Michael Benton could not reproduce this result, finding pterosaurs to be closely related to dinosaurs even without hindlimb characters. They also criticized David Peters for drawing conclusions without access to the primary evidence, that is, the pterosaur fossils themselves. 
Hone and Benton concluded that, although more basal pterosauromorphs are needed to clarify their relationships, current evidence indicates that pterosaurs are avemetatarsalians, as either the sister group of Scleromochlus or a branch between the latter and Lagosuchus. A 2011 archosaur-focused phylogenetic analysis by Sterling Nesbitt benefited from far more data and found strong support for pterosaurs being avemetatarsalians, though Scleromochlus was not included due to its poor preservation. A 2016 archosauromorph-focused study by Martin Ezcurra included various proposed pterosaur relatives, yet also found pterosaurs to be closer to dinosaurs and unrelated to more basal taxa. Working from his 1996 analysis, Bennett published a 2020 study on Scleromochlus which argued that both Scleromochlus and pterosaurs were non-archosaur archosauromorphs, albeit not particularly closely related to each other. By contrast, a later 2020 study proposed that lagerpetid archosaurs were the sister clade to Pterosauria. This was based on newly described fossil skulls and forelimbs showing various anatomical similarities with pterosaurs, and on reconstructions of lagerpetid brains and sensory systems from CT scans also showing neuroanatomical similarities with pterosaurs. The results of the latter study were subsequently supported by an independent analysis of early pterosauromorph interrelationships. A related problem is the origin of pterosaur flight. As with birds, hypotheses can be ordered into two main varieties: "ground up" or "trees down".
The vertical mobility of this element suggests a function in breathing, compensating the relative rigidity of the chest cavity. Hindlimbs The hindlimbs of pterosaurs were strongly built, yet relative to their wingspans smaller than those of birds. They were long in comparison to the torso length. The thighbone was rather straight, with the head making only a small angle with the shaft. This implies that the legs were not held vertically below the body but were somewhat sprawling. The shinbone was often fused with the upper ankle bones into a tibiotarsus that was longer than the thighbone. It could attain a vertical position when walking. The calf bone tended to be slender, especially at its lower end that in advanced forms did not reach the ankle, sometimes reducing total length to a third. Typically it was fused to the shinbone. The ankle was a simple, "mesotarsal", hinge. The, rather long and slender, metatarsus was always splayed to some degree. The foot was plantigrade, meaning that during the walking cycle the sole of the metatarsus was pressed onto the soil. There was a clear difference between early pterosaurs and advanced species regarding the form of the fifth digit. Originally, the fifth metatarsal was robust and not very shortened. It was connected to the ankle in a higher position than the other metatarsals. It bore a long, and often curved, mobile clawless fifth toe consisting of two phalanges. The function of this element has been enigmatic. It used to be thought that the animals slept upside-down like bats, hanging from branches and using the fifth toes as hooks. Another hypothesis held that they stretched the brachiopatagia, but in articulated fossils the fifth digits are always flexed towards the tail. Later it became popular to assume that these toes extended an uropatagium or cruropatagium between them. 
As the fifth toes were on the outside of the feet, such a configuration would only have been possible if these rotated their fronts outwards in flight. Such a rotation could be caused by an abduction of the thighbone, meaning that the legs would be spread. This would also turn the feet into a vertical position. They then could act as rudders to control yaw. Some specimens show membranes between the toes, allowing them to function as flight control surfaces. The uropatagium or cruropatagium would control pitch. When walking the toes could flex upwards to lift the membrane from the ground. In Pterodactyloidea, the fifth metatarsal was much reduced and the fifth toe, if present, little more than a stub. This suggests that their membranes were split, increasing flight manoeuvrability. The first to fourth toes were long. They had two, three, four and five phalanges respectively. Often the third toe was longest; sometimes the fourth. Flat joints indicate a limited mobility. These toes were clawed but the claws were smaller than the hand claws. Soft tissues The rare conditions that allowed for the fossilisation of pterosaur remains, sometimes also preserved soft tissues. Modern synchrotron or ultraviolet light photography has revealed many traces not visible to the naked eye. These are often imprecisely called "impressions" but mostly consist of petrifications, natural casts and transformations of the original material. They may include horn crests, beaks or claw sheaths as well as the various flight membranes. Exceptionally, muscles were preserved. Skin patches show small round non-overlapping scales on the soles of the feet, the ankles and the ends of the metatarsals. They covered pads cushioning the impact of walking. Scales are unknown from other parts of the body. Pycnofibers Most or all pterosaurs had hair-like filaments known as pycnofibers on the head and torso. 
The term "pycnofiber", meaning "dense filament", was coined by palaeontologist Alexander Kellner and colleagues in 2009. Pycnofibers were unique structures similar to, but not homologous (sharing a common origin) with, mammalian hair, an example of convergent evolution. A fuzzy integument was first reported from a specimen of Scaphognathus crassirostris in 1831 by Georg August Goldfuss, but had been widely doubted. Since the 1990s, pterosaur finds and histological and ultraviolet examination of pterosaur specimens have provided incontrovertible proof: pterosaurs had pycnofiber coats. Sordes pilosus (which translates as "hairy demon") and Jeholopterus ninchengensis show pycnofibers on the head and body. The presence of pycnofibers strongly indicates that pterosaurs were endothermic (warm-blooded). They aided thermoregulation, as is common in warm-blooded animals who need insulation to prevent excessive heat-loss. Pycnofibers were flexible, short filaments, about five to seven millimetres long and rather simple in structure with a hollow central canal. Pterosaur pelts might have been comparable in density to many Mesozoic mammals. Relation with feathers Pterosaur filaments could share a common origin with feathers, as speculated in 2002 by Czerkas and Ji. In 2009, Kellner concluded that pycnofibers were structured similarly to theropod proto-feathers. Others were unconvinced, considering the difference with the "quills" found on many of the bird-like maniraptoran specimens too fundamental. A 2018 study of the remains of two small Jurassic-age pterosaurs from Inner Mongolia, China, found that pterosaurs had a wide array of pycnofiber shapes and structures, as opposed to the homogeneous structures that had generally been assumed to cover them. Some of these had frayed ends, very similar in structure to four different feather types known from birds or other dinosaurs but almost never known from pterosaurs prior to the study, suggesting homology. 
A response to this study was published in 2020, where it was suggested that the structures seen on the anurognathids were actually a result of the decomposition of aktinofibrils: a type of fibre used to strengthen and stiffen the wing. However, in a response to this, the authors of the 2018 paper point to the fact that the presence of the structures extend past the patagium, and the presence of both aktinofibrils and filaments on Jeholopterus ningchengensis and Sordes pilosus. The various forms of filament structure present on the anurognathids in the 2018 study would also require a form of decomposition that would cause the different 'filament' forms seen. They therefore conclude that the most parsimonious interpretation of the structures is that they are filamentous proto-feathers. But Liliana D’Alba points out that the description of the preserved integumentary structures on the two anurogmathid specimens is still based upon gross morphology. She also points out that Pterorhynchus was described to have feathers to support the claim that feathers had a common origin with Ornithodirans but was argued against by several authors. The only method to assure if it was homologous to feathers is to use a scanning electron microscope. History of discovery First finds Pterosaur fossils are very rare, due to their light bone construction. Complete skeletons can generally only be found in geological layers with exceptional preservation conditions, the so-called Lagerstätten. The pieces from one such Lagerstätte, the Late Jurassic Solnhofen Limestone in Bavaria, became much sought after by rich collectors. In 1784, the Italian naturalist Cosimo Alessandro Collini was the first scientist in history to describe a pterosaur fossil. At that time the concepts of evolution and extinction were only imperfectly developed. The bizarre build of the pterosaur was therefore shocking, as it could not clearly be assigned to any existing animal group. 
The discovery of pterosaurs would thus play an important role in the progress of modern paleontology and geology. If such creatures were still alive, only the sea was a credible habitat, and Collini suggested it might be a swimming animal that used its long front limbs as paddles. A few scientists continued to support the aquatic interpretation even until 1830, when the German zoologist Johann Georg Wagler suggested that Pterodactylus used its wings as flippers and was affiliated with Ichthyosauria and Plesiosauria. In 1800, Johann Hermann first suggested that it represented a flying creature in a letter to Georges Cuvier. Cuvier agreed in 1801, recognising it as an extinct flying reptile. In 1809, he coined the name Ptéro-Dactyle, "wing-finger". In 1815, this was Latinised to Pterodactylus. At first most species were assigned to this genus, and ultimately "pterodactyl" was popularly and incorrectly applied to all members of Pterosauria. Today, paleontologists limit the term to the genus Pterodactylus or members of the Pterodactyloidea. In 1812 and 1817, Samuel Thomas von Soemmerring redescribed the original specimen and an additional one. He saw them as affiliated with birds and bats. Although he was mistaken in this, his "bat model" would be very influential during the 19th century. In 1843, Edward Newman thought pterosaurs were flying marsupials. As the bat model correctly depicted pterosaurs as furred and warm-blooded, it better approached the true physiology of pterosaurs than Cuvier's "reptile model". In 1834, Johann Jakob Kaup coined the term Pterosauria.

Expanding research

In 1828, Mary Anning found in England the first pterosaur genus outside Germany, named Dimorphodon by Richard Owen; it was also the first non-pterodactyloid pterosaur known. Later in the century, the Early Cretaceous Cambridge Greensand produced thousands of pterosaur fossils, which, however, were of poor quality, consisting mostly of strongly eroded fragments.
Nevertheless, numerous genera and species would be named based on these. Many were described by Harry Govier Seeley, at the time the main English expert on the subject, who also wrote the first pterosaur book, Ornithosauria, and in 1901 the first popular book, Dragons of the Air. Seeley thought that pterosaurs were warm-blooded and dynamic creatures, closely related to birds. Earlier, the evolutionist St. George Jackson Mivart had suggested pterosaurs were the direct ancestors of birds. Owen opposed the views of both men, seeing pterosaurs as cold-blooded "true" reptiles. In the US, Othniel Charles Marsh in 1870 discovered Pteranodon in the Niobrara Chalk, then the largest known pterosaur, the first toothless one and the first from America. These layers too yielded thousands of fossils, including relatively complete skeletons that were three-dimensionally preserved instead of being strongly compressed as with the Solnhofen specimens. This led to a much better understanding of many anatomical details, such as the hollow nature of the bones. Meanwhile, finds from the Solnhofen had continued, accounting for the majority of complete, high-quality specimens discovered. They allowed the identification of most new basal taxa, such as Rhamphorhynchus, Scaphognathus and Dorygnathus. This material gave birth to a German school of pterosaur research, which saw flying reptiles as the warm-blooded, furry and active Mesozoic counterparts of modern bats and birds. In 1882, Marsh and Karl Alfred Zittel published studies about the wing membranes of specimens of Rhamphorhynchus. German studies continued well into the 1930s, describing new species such as Anurognathus. In 1927, Ferdinand Broili discovered hair follicles in pterosaur skin, and paleoneurologist Tilly Edinger determined that the brains of pterosaurs more resembled those of birds than those of modern cold-blooded reptiles.
In contrast, English and American paleontologists by the middle of the twentieth century largely lost interest in pterosaurs. They saw them as failed evolutionary experiments, cold-blooded and scaly, that could hardly fly, the larger species only able to glide, being forced to climb trees or throw themselves from cliffs to achieve a take-off. In 1914, pterosaur aerodynamics were quantitatively analysed for the first time, by Ernest Hanbury Hankin and David Meredith Seares Watson, but they interpreted Pteranodon as a pure glider. Little research was done on the group during the 1940s and 1950s.

Pterosaur renaissance

The situation for dinosaurs was comparable. From the 1960s onwards, a dinosaur renaissance took place, a quick increase in the number of studies and critical ideas, influenced by the discovery of additional fossils of Deinonychus, whose spectacular traits refuted what had become entrenched orthodoxy. Likewise, in 1970, the description of the furry pterosaur Sordes began what Robert Bakker named a renaissance of pterosaurs. Kevin Padian especially propagated the new views, publishing a series of studies depicting pterosaurs as warm-blooded, active and running animals. This coincided with a revival of the German school through the work of Peter Wellnhofer, who in the 1970s laid the foundations of modern pterosaur science. In 1978, he published the first pterosaur textbook, the Handbuch der Paläoherpetologie, Teil 19: Pterosauria, and in 1991 the second ever popular science pterosaur book, the Encyclopedia of Pterosaurs. This development accelerated through the exploitation of two new Lagerstätten. During the 1970s, the Early Cretaceous Santana Formation in Brazil began to produce chalk nodules that, though often limited in size and in the completeness of the fossils they contained, perfectly preserved three-dimensional pterosaur skeletal parts.
German and Dutch institutes bought such nodules from fossil poachers and prepared them in Europe, allowing their scientists to describe many new species and revealing a whole new fauna. Soon, Brazilian researchers, among them Alexander Kellner, intercepted the trade and named even more species. Even more productive was the Early Cretaceous Chinese Jehol Biota of Liaoning, which since the 1990s has brought forth hundreds of exquisitely preserved two-dimensional fossils, often showing soft tissue remains. Chinese researchers such as Lü Junchang have again named many new taxa. As discoveries also increased in other parts of the world, a sudden surge in the total of named genera took place. By 2009, when they had increased to about ninety, this growth showed no sign of levelling off. In 2013, M.P. Witton indicated that the number of discovered pterosaur species had risen to 130. Over ninety percent of known taxa have been named during the "renaissance". Many of these were from groups whose existence had been unknown. Advances in computing power allowed researchers to determine their complex relationships through the quantitative method of cladistics. New and old fossils yielded much more information when subjected to modern ultraviolet light or X-ray photography, or CT scans. Insights from other fields of biology were applied to the data obtained. All this resulted in substantial progress in pterosaur research, rendering older accounts in popular science books completely outdated. In 2017, a fossil from a 170-million-year-old pterosaur was discovered on the Isle of Skye in Scotland. The National Museum of Scotland claims that it is the largest of its kind ever discovered from the Jurassic period, and it has been described as the world's best-preserved skeleton of a pterosaur.
Evolution and extinction

Origins

Because pterosaur anatomy has been so heavily modified for flight, and immediate transitional fossil predecessors have not so far been described, the ancestry of pterosaurs is not fully understood. The oldest known pterosaurs were already fully adapted to a flying lifestyle. Since the time of Seeley, it has been recognised that pterosaurs were likely to have had their origin among the "archosaurs", what today would be called the Archosauromorpha. In the 1980s, early cladistic analyses found that they were avemetatarsalians (archosaurs closer to dinosaurs than to crocodilians). As this would also make them rather close relatives of the dinosaurs, these results were seen by Kevin Padian as confirming his interpretation of pterosaurs as bipedal warm-blooded animals. Because these early analyses were based on a limited number of taxa and characters, their results were inherently uncertain. Several influential researchers who rejected Padian's conclusions offered alternative hypotheses. David Unwin proposed an ancestry among the basal Archosauromorpha, specifically long-necked forms ("protorosaurs") such as tanystropheids. A placement among basal archosauriforms like Euparkeria was also suggested. Some basal archosauromorphs seem at first glance to be good candidates for close pterosaur relatives due to their long-limbed anatomy; one example is Sharovipteryx, a "protorosaur" with skin membranes on its hindlimbs likely used for gliding. A 1999 study by Michael Benton found that pterosaurs were avemetatarsalians closely related to Scleromochlus, and named the group Ornithodira to encompass pterosaurs and dinosaurs. Two researchers, S. Christopher Bennett in 1996, and paleoartist David Peters in 2000, published analyses finding pterosaurs to be protorosaurs or closely related to them.
However, Peters gathered novel anatomical data using an unverified technique called "Digital Graphic Segregation" (DGS), which involves digitally tracing over images of pterosaur fossils using photo editing software. Bennett only recovered pterosaurs as close relatives of the protorosaurs after removing characteristics of the hindlimb from his analysis, to test the possibility of locomotion-based convergent evolution between pterosaurs and dinosaurs. A 2007 reply by Dave Hone and Michael Benton could not reproduce this result, finding pterosaurs to be closely related to dinosaurs even without hindlimb characters. They also criticized David Peters for drawing conclusions without access to the primary evidence, that is, the pterosaur fossils themselves. Hone and Benton concluded that, although more basal pterosauromorphs are needed to clarify their relationships, current evidence indicates that pterosaurs are avemetatarsalians, as either the sister group of Scleromochlus or a branch between the latter and Lagosuchus. A 2011 archosaur-focused phylogenetic analysis by Sterling Nesbitt benefited from far more data and found strong support for pterosaurs being avemetatarsalians, though Scleromochlus was not included due to its poor preservation. A 2016 archosauromorph-focused study by Martin Ezcurra included various proposed pterosaur relatives, yet also found pterosaurs to be closer to dinosaurs and unrelated to more basal taxa. Working from his 1996 analysis, Bennett published a 2020 study on Scleromochlus which argued that both Scleromochlus and pterosaurs were non-archosaur archosauromorphs, albeit not particularly closely related to each other. By contrast, a later 2020 study proposed that lagerpetid archosaurs were the sister clade to Pterosauria.
This was based on newly described fossil skulls and forelimbs showing various anatomical similarities with pterosaurs, and on reconstructions of lagerpetid brains and sensory systems based on CT scans, which also showed neuroanatomical similarities with pterosaurs. The results of the latter study were subsequently supported by an independent analysis of early pterosauromorph interrelationships. A related problem is the origin of pterosaur flight. As with birds, hypotheses can be ordered into two main varieties: "ground up" or "tree down". Climbing a tree would gain height, with gravity providing both the energy and a strong selection pressure for incipient flight. Rupert Wild in 1983 proposed a hypothetical "propterosaurus": a lizard-like arboreal animal developing a membrane between its limbs, first to safely parachute and then, gradually elongating the fourth finger, to glide. However, subsequent cladistic results did not fit this model well. Neither protorosaurs nor ornithodirans are biologically equivalent to lizards. Furthermore, the transition between gliding and flapping flight is not well understood. More recent studies on basal pterosaur hindlimb morphology seem to vindicate a connection to Scleromochlus. Like this archosaur, basal pterosaur lineages have plantigrade hindlimbs that show adaptations for saltation.

Extinction

It was once thought that competition with early bird species might have resulted in the extinction of many of the pterosaurs. Part of this is because it used to be thought that, by the end of the Cretaceous, only large species of pterosaurs were present (no longer true; see below). The smaller species were thought to have become extinct, their niche filled by birds. However, pterosaur decline (if actually present) seems unrelated to bird diversity, as ecological overlap between the two groups appears to be minimal. In fact, at least some avian niches were reclaimed by pterosaurs prior to the KT event.
At the end of the Cretaceous period, the Cretaceous–Paleogene extinction event, which wiped out all non-avian dinosaurs, most avian dinosaurs, and many other animals, seems also to have taken the pterosaurs. In the early 2010s, several new pterosaur taxa were discovered dating to the Campanian/Maastrichtian, such as the ornithocheirids Piksi and "Ornithocheirus", possible pteranodontids and nyctosaurids, several tapejarids and the indeterminate non-azhdarchid Navajodactylus. Small azhdarchoid pterosaurs were also present in the Campanian. This suggests that late Cretaceous pterosaur faunas were far more diverse than previously thought, possibly not even having declined significantly from the early Cretaceous. Small-sized pterosaur species apparently were present in the Csehbánya Formation, indicating a higher diversity of Late Cretaceous pterosaurs than previously accounted for. The recent finding of a small cat-sized adult azhdarchid further indicates that small pterosaurs from the Late Cretaceous might simply have been rarely preserved in the fossil record, helped by the fact that there is a strong bias against small-sized terrestrial vertebrates such as juvenile dinosaurs, and that their diversity might actually have been much greater than previously thought. At least some non-pterodactyloid pterosaurs survived into the Late Cretaceous, suggesting a Lazarus taxon situation for late Cretaceous pterosaur faunas. A 2021 study showed that niches previously occupied by small pterosaurs were increasingly occupied by the juvenile stages of larger species in the Late Cretaceous. Rather than being outcompeted by birds, pterosaurs essentially continued a trend of specialization already under way in earlier eras of the Mesozoic.

Classification and phylogeny

In phylogenetic taxonomy, the clade Pterosauria has usually been defined as node-based and anchored to several extensively studied taxa as well as those thought to be primitive.
One 2003 study defined Pterosauria as "The most recent common ancestor of the Anurognathidae, Preondactylus and Quetzalcoatlus and all their descendants." However, this type of definition would inevitably leave any related species that are slightly more primitive out of the Pterosauria. To remedy this, a new definition was proposed that would anchor the name not to any particular species but to an anatomical feature, the presence of an enlarged fourth finger that supports a wing membrane. This "apomorphy-based" definition was adopted by the PhyloCode in 2020 as "[T]he clade characterized by the apomorphy fourth manual digit hypertrophied to support a wing membrane, as inherited by Pterodactylus (originally Ornithocephalus) antiquus (Sömmerring 1812)". A broader clade, Pterosauromorpha, has been defined as all ornithodirans more closely related to pterosaurs than to dinosaurs. The internal classification of pterosaurs has historically been difficult, because there were many gaps in the fossil record. Starting from the 21st century, new discoveries are now filling in these gaps and giving a better picture of the evolution of pterosaurs. Traditionally, they were organized into two suborders: the Rhamphorhynchoidea, a "primitive" group of long-tailed pterosaurs, and the Pterodactyloidea, "advanced" pterosaurs with short tails. However, this traditional division has been largely abandoned. Rhamphorhynchoidea is a paraphyletic (unnatural) group, since the pterodactyloids evolved directly from them rather than from a common ancestor, so, with the increasing use of cladistics, it has fallen out of favor among most scientists. The precise relationships between pterosaurs are still unsettled. Many past studies of pterosaur relationships included limited data and were highly contradictory. However, newer studies using larger data sets are beginning to make things clearer.
The cladogram (family tree) below follows a phylogenetic analysis presented by Longrich, Martill and Andres in 2018, with clade names after Andres et al. (2014).

Paleobiology

Flight

The mechanics of pterosaur flight are not completely understood or modeled at this time. Katsufumi Sato, a Japanese scientist, did calculations using modern birds and concluded that it was impossible for a pterosaur to stay aloft. In the book Posture, Locomotion, and Paleoecology of Pterosaurs it is theorized that they were able to fly due to the oxygen-rich, dense atmosphere of the Late Cretaceous period. However, both Sato and the authors of Posture, Locomotion, and Paleoecology of Pterosaurs based their research on the now-outdated theories of pterosaurs being seabird-like, and the size limit does not apply to terrestrial pterosaurs, such as azhdarchids and tapejarids. Furthermore, Darren Naish concluded that atmospheric differences between the present and the Mesozoic were not needed for the giant size of pterosaurs. Another issue that has been difficult to understand is how they took off. Earlier suggestions were that pterosaurs were largely cold-blooded gliding animals, deriving warmth from the environment like modern lizards, rather than burning calories. In this case, it was unclear how the larger ones of enormous size, with an inefficient cold-blooded
record, allowing for detailed descriptions of their anatomy and analysis of their life history. Over 1,000 specimens have been identified, though fewer than half are complete enough to give researchers good anatomical information. Still, this is more fossil material than is known for any other pterosaur, and it includes both male and female specimens of various age groups and possibly species. Adult Pteranodon specimens from the two major species can be divided into two distinct size classes. The smaller class of specimens has small, rounded head crests and very wide pelvic canals, even wider than those of the much larger size class. The size of the pelvic canal probably allowed the laying of eggs, indicating that these smaller adults are females. The larger size class, representing male individuals, has narrow hips and very large crests, which were probably for display. Adult male Pteranodon were among the largest pterosaurs, and were the largest flying animals known until the late 20th century, when the giant azhdarchid pterosaurs were discovered. The wingspan of an average adult male Pteranodon was . Adult females were much smaller, averaging in wingspan. The largest specimen of Pteranodon longiceps from the Niobrara Formation measured from wingtip to wingtip. An even larger specimen is known from the Pierre Shale Formation, with a wingspan of , though this specimen may belong to the distinct genus and species Geosternbergia maysei. While most specimens are found crushed, enough fossils exist to put together a detailed description of the animal. Methods used to estimate the mass of large male Pteranodon specimens (those with wingspans of about 7 meters) have been notoriously unreliable, producing a wide range of estimates. In a review of pterosaur size estimates published in 2010, researchers Mark Witton and Mike Habib argued that the largest estimate of 544 kg is much too high and an upper limit of 200 to 250 kg is more realistic.
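Part of why such estimates diverge is that scaled-up masses grow roughly with the cube of linear size, so small changes in the chosen reference animal or scaling exponent compound quickly. A toy illustration of this sensitivity, in which the albatross-like reference animal and every number are hypothetical choices for demonstration only, not values from Witton and Habib or any published study:

```python
def scaled_mass(ref_mass_kg, ref_span_m, target_span_m, exponent=3.0):
    """Scale a reference animal's mass to a target wingspan.

    exponent=3.0 corresponds to isometric scaling (mass ~ length^3);
    empirical allometric exponents for flying animals differ, which is
    one reason published pterosaur mass estimates vary so widely.
    """
    return ref_mass_kg * (target_span_m / ref_span_m) ** exponent

# Hypothetical albatross-like reference: 9 kg at a 3 m wingspan,
# scaled to a 7 m wingspan like that of a large male Pteranodon.
for exp in (2.5, 3.0, 3.5):
    mass = scaled_mass(9.0, 3.0, 7.0, exponent=exp)
    print(f"exponent {exp}: ~{mass:.0f} kg")
```

Even with the same reference animal, varying the exponent alone spans roughly 75 to 175 kg here, hinting at why estimates for a 7-metre animal have ranged so widely; pterosaurs' very different proportions and soft tissue make the choice of reference even more consequential.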
Witton and Habib considered the methods used by researchers who obtained smaller mass estimates equally flawed. Most have been produced by scaling modern animals such as bats and birds up to Pteranodon size, despite the fact that pterosaurs have vastly different body proportions and soft tissue anatomy from any living animal. Other distinguishing characteristics that set Pteranodon apart from other pterosaurs include narrow neural spines on the vertebrae, plate-like bony ligaments strengthening the vertebrae above the hip, and a relatively short tail in which the last few vertebrae are fused into a long rod. The entire length of the tail was about 3.5% as long as the wingspan, or up to in the largest males.

Skull and beak

Unlike earlier pterosaurs, such as Rhamphorhynchus and Pterodactylus, Pteranodon had toothless beaks, similar to those of birds. Pteranodon beaks were made of solid, bony margins that projected from the base of the jaws. The beaks were long, slender, and ended in thin, sharp points. The upper jaw, which was longer than the lower jaw, was curved upward; while this normally has been attributed only to the upward-curving beak, one specimen (UALVP 24238) has a curvature corresponding with the beak widening towards the tip. While the tip of the beak is not known in this specimen, the level of curvature suggests it would have been extremely long. The unique form of the beak in this specimen led Alexander Kellner to assign it to a distinct genus, Dawndraco, in 2010. The most distinctive characteristic of Pteranodon is its cranial crest. These crests consisted of skull bones (frontals) projecting upward and backward from the skull. The size and shape of these crests varied due to a number of factors, including age, sex, and species.
Male Pteranodon sternbergi, the older of the two species described to date (and sometimes placed in the distinct genus Geosternbergia), had a more vertical crest with a broad forward projection, while their descendants, Pteranodon longiceps, evolved a narrower, more backward-projecting crest. Females of both species were smaller and bore small, rounded crests. The crests were probably mainly display structures, though they may have had other functions as well.

Paleobiology

Flight

The wing shape of Pteranodon suggests that it would have flown rather like a modern-day albatross. This is based on the fact that Pteranodon had a high aspect ratio (wingspan to chord length) similar to that of the albatross: 9:1 for Pteranodon, compared to 8:1 for an albatross. Albatrosses spend long stretches of time at sea fishing, and use a flight pattern called "dynamic soaring" which exploits the vertical gradient of wind speed near the ocean surface to travel long distances without flapping, and without the aid of thermals (which do not occur over the open ocean the same way they do over land). While most of a Pteranodon flight would have depended on soaring, like long-winged seabirds, it probably required an occasional active, rapid burst of flapping, and studies of Pteranodon wing loading (the ratio of body weight to wing area) indicate that they were capable of substantial flapping flight, contrary to some earlier suggestions that they were so big they could only glide. However, a more recent study suggests that it relied on thermal soaring, unlike modern seabirds but much like modern continental flyers and the extinct Pelagornis. Like other pterosaurs, Pteranodon probably took off from a standing, quadrupedal position. Using their long forelimbs for leverage, they would have vaulted themselves into the air in a rapid leap. Almost all of the energy would have been generated by the forelimbs.
The upstroke of the wings would have occurred when the animal cleared the ground, followed by a rapid down-stroke to generate additional lift and complete the launch into the air.

Terrestrial locomotion

Historically, the terrestrial locomotion of Pteranodon, especially whether it was bipedal or quadrupedal, has been the subject of debate. Today, most pterosaur researchers agree that pterosaurs were quadrupedal, thanks largely to the discovery of pterosaur trackways. The possibility of aquatic locomotion via swimming has been discussed briefly in several papers (Bennett 2001, 1994, and Bramwell & Whitfield 1974).

Diet

The diet of Pteranodon is known to have included fish; fossilized fish bones have been found in the stomach area of one Pteranodon, and a fossilized fish bolus has been found between the jaws of another, specimen AMNH 5098. Numerous other specimens also preserve fragments of fish scales and vertebrae near the torso, indicating that fish made up a majority of the diet of Pteranodon (though they may also have taken invertebrates). Traditionally, most researchers have suggested that Pteranodon would have taken fish by dipping their beaks into the water while in low, soaring flight. However, this was probably based on the assumption that the animals could not take off from the water surface. It is more likely that Pteranodon could take off from the water, and would have dipped for fish while swimming rather than while flying. Even a small, female Pteranodon could have reached a depth of at least with its long bill and neck while floating on the surface, and they may have reached even greater depths by plunge-diving into the water from the air like some modern long-winged seabirds. In 1994, Bennett noted that the head, neck, and shoulders of Pteranodon were as heavily built as those of diving birds, and suggested that they could dive by folding back their wings like the modern gannet.
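The aspect-ratio and wing-loading figures quoted in the flight discussion above are simple ratios, and can be made concrete in a few lines. A minimal sketch with entirely hypothetical wing dimensions, chosen only so the aspect ratio comes out near the ~9:1 quoted for Pteranodon rather than taken from any specimen:

```python
def aspect_ratio(span_m, wing_area_m2):
    """Aspect ratio as span^2 / area; equivalent to span / mean chord,
    since mean chord = area / span."""
    return span_m ** 2 / wing_area_m2

def wing_loading(mass_kg, wing_area_m2, g=9.81):
    """Body weight carried per unit of wing area, in N/m^2."""
    return mass_kg * g / wing_area_m2

# Hypothetical values: 7 m span, 5.4 m^2 wing area, 36 kg body mass.
span, area, mass = 7.0, 5.4, 36.0
print(f"aspect ratio: {aspect_ratio(span, area):.1f}")  # ~9.1, near 9:1
print(f"mean chord:   {area / span:.2f} m")
print(f"wing loading: {wing_loading(mass, area):.1f} N/m^2")
```

Higher aspect ratios correspond to long, narrow wings suited to soaring, while wing loading relates weight to lifting surface, which is why it bears on whether sustained flapping flight was feasible.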
Crest function

Pteranodon was notable for its skull crest, though the function of this crest has been a subject of debate. Most explanations, however, have focused on the blade-like, backward-pointing crest of male P. longiceps and ignored the wide range of variation across age and sex. The fact that the crests vary so much rules out most practical functions other than use in mating displays. Therefore, display was probably the main function of the crest, and any other functions were secondary. Scientific interpretations of the crest's function began in 1910, when George Francis Eaton proposed two possibilities: an aerodynamic counterbalance and a muscle attachment point. He suggested that the crest might have anchored large, long jaw muscles, but admitted that this function alone could not explain the large size of some crests. Bennett (1992) agreed with Eaton's own assessment that the crest was too large and variable to have been a muscle attachment site. Eaton had suggested that a secondary function of the crest might have been as a counterbalance against the long beak, reducing the need for heavy neck muscles to control the orientation of the head. Wind tunnel tests showed that the crest did function as an effective counterbalance to a degree, but Bennett noted that, again, the hypothesis focuses only on the long crests of male P. longiceps, not on the larger crests of P. sternbergi and the very small crests of the females. Bennett found that the crests of females had no counterbalancing effect, and that the crests of male P. sternbergi would, by themselves, have had a negative effect on the balance of the head. In fact, side-to-side movement of the crests would have required more, not less, neck musculature to control balance. In 1943, Dominik von Kripp suggested that the crest may have served as a rudder, an idea embraced by several later researchers. One researcher, Ross S.
Stein, even suggested that the crest may have supported a membrane of skin connecting the backward-pointing crest to the neck and back, increasing its surface area and effectiveness as a rudder. The rudder hypothesis, again, does not take into account the females or P. sternbergi, which had an upward-pointing, not backward-pointing, crest. Bennett also found that, even in its capacity as a rudder, the crest would not provide nearly as much directional force as simply maneuvering the wings. The suggestion that the crest was an air brake, and that the animals would turn their heads to the side in order to slow down, suffers from a similar problem. Additionally, the rudder and air brake hypotheses do not explain why such large variation exists in crest size even among adults. Alexander Kellner suggested that the large crests of the pterosaur Tapejara, as well as other species, might be used for heat exchange, allowing these pterosaurs to absorb or shed heat and regulate body temperature, which would also account for the correlation between crest size and body size. There is no evidence of extra blood vessels in the crest for this purpose, however, and the large, membranous wings filled with blood vessels would have served that purpose much more effectively. With these hypotheses ruled out, the best-supported hypothesis for crest function seems to be sexual display. This is consistent with the size variation seen in fossil specimens, where females and juveniles have small crests and males large, elaborate, variable crests.

Sexual variation

Adult Pteranodon specimens may be divided into two distinct size classes, small and large, with the large size class being about one and a half times larger than the small class, and the small class being twice as common as the large class.
Both size classes lived alongside each other, and while researchers had previously suggested that they represent different species, Christopher Bennett showed that the differences between them are consistent with the concept that they represent females and males, and that Pteranodon species were sexually dimorphic. Skulls from the larger size class preserve large, upward and backward pointing crests, while the crests of the smaller size class are small and triangular. Some larger skulls also show evidence of a second crest that extended long and low, toward the tip of the beak, which is not seen in smaller specimens. The sex of the different size classes was determined, not from the skulls, but from the pelvic bones. Contrary to what may be expected, the smaller size class had disproportionately large and wide-set pelvic bones. Bennett interpreted this as indicating a more spacious birth canal, through which eggs would pass. He concluded that the small size class with small, triangular crests represents females, and the larger, large-crested specimens represent males. Note that overall size and crest size also correspond to age. Immature specimens are known from both females and males, and immature males often have small crests similar to those of adult females. Therefore, it seems that the large crests only developed in males when they reached their large, adult size, making the sex of immature specimens difficult to establish from partial remains. The fact that females appear to have outnumbered males two to one suggests that, as with modern animals with size-related sexual dimorphism, such as sea lions and other pinnipeds, Pteranodon might have been polygynous, with a few males competing for association with groups of females.

More specimens of Pteranodon have been found than of any other pterosaur, with about 1,200 specimens known to science, many of them well preserved with nearly complete skulls and articulated skeletons.
It was an important part of the animal community in the Western Interior Seaway. Pteranodon was a pterosaur, meaning that it was not a dinosaur. By definition, all dinosaurs belong to one of the two groups within Dinosauria, i.e. Saurischia or Ornithischia. As such, this excludes pterosaurs. Nonetheless, Pteranodon is frequently featured in dinosaur media and is strongly associated with dinosaurs by the general public. While not dinosaurs, pterosaurs such as Pteranodon form a clade closely related to dinosaurs, as both fall within the clade Avemetatarsalia.

Discovery and history

First fossils

Pteranodon was the first pterosaur found outside of Europe. Its fossils were first found by Othniel Charles Marsh in 1871, in the Late Cretaceous Smoky Hill Chalk deposits of western Kansas. These chalk beds were deposited at the bottom of what was once the Western Interior Seaway, a large shallow sea over what now is the midsection of the North American continent. These first specimens, YPM 1160 and YPM 1161, consisted of partial wing bones, as well as a tooth from the prehistoric fish Xiphactinus, which Marsh mistakenly believed to belong to this new pterosaur (all known pterosaurs up to that point had teeth). In 1871, Marsh named the find Pterodactylus oweni, assigning it to the well-known (but much smaller) European genus Pterodactylus. Marsh also collected more wing bones of the large pterosaur in 1871. Realizing that the name he had chosen had already been used for Harry Seeley's European pterosaur species Pterodactylus oweni in 1864, Marsh renamed his giant North American pterosaur Pterodactylus occidentalis, meaning "Western wing finger," in his 1872 description of the new specimen. He named two additional species, based on size differences: Pterodactylus ingens (the largest specimen so far), and Pterodactylus velox (the smallest). Meanwhile, Marsh's rival Edward Drinker Cope had unearthed several specimens of the large North American pterosaur.
Based on these specimens, Cope named two new species, Ornithochirus umbrosus and Ornithochirus harpyia, in an attempt to assign them to the large European genus Ornithocheirus, though he misspelled the name (forgetting the 'e'). Cope's paper naming his species was published in 1872, just five days after Marsh's paper. This resulted in a dispute, fought in the published literature, over whose names had priority in what obviously were the same species. Cope conceded in 1875 that Marsh's names did have priority over his, but maintained that Pterodactylus umbrosus was a distinct species (but not genus) from any that Marsh had named previously. Re-evaluation by later scientists has supported Marsh's case, refuting Cope's assertion that P. umbrosus represented a larger, distinct species. A toothless pterosaur While the first Pteranodon wing bones were collected by Marsh and Cope in the early 1870s, the first Pteranodon skull was found on May 2, 1876, along the Smoky Hill River in Wallace County (now Logan County), Kansas, USA, by Samuel Wendell Williston, a fossil collector working for Marsh. A second, smaller skull soon was discovered as well. These skulls showed that the North American pterosaurs were different from any European species, in that they lacked teeth and had bony crests on their skulls. Marsh recognized this major difference, describing the specimens as "distinguished from all previously known genera of the order Pterosauria by the entire absence of teeth." Marsh recognized that this characteristic warranted a new genus, and he coined the name Pteranodon ("wing without tooth") in 1876. Marsh reclassified all the previously named North American species from Pterodactylus to Pteranodon. He considered the smaller skull to belong to Pteranodon occidentalis, based on its size. Marsh classified the larger skull, YPM 1117, in the new species Pteranodon longiceps, which he thought to be a medium-sized species in between the small P. occidentalis and the large P. 
ingens. Marsh also named several additional species: Pteranodon comptus and Pteranodon nanus were named for fragmentary skeletons of small individuals, while Pteranodon gracilis was based on a wing bone that he mistook for a pelvic bone. He soon realized his mistake, and re-classified that specimen again into a separate genus, which he named Nyctosaurus. P. nanus was also later recognized as a Nyctosaurus specimen. In 1892, Samuel Williston examined the question of Pteranodon classification. He noticed that, in 1871, Seeley had mentioned the existence of a partial set of toothless pterosaur jaws from the Cambridge Greensand of England, which he named Ornithostoma. Because the primary characteristic Marsh had used to separate Pteranodon from other pterosaurs was its lack of teeth, Williston concluded that "Ornithostoma" must be considered the senior synonym of Pteranodon. However, in 1901, Plieninger pointed out that "Ornithostoma" had never been scientifically described or even assigned a species name until Williston's work, and therefore had been a nomen nudum and could not beat out Pteranodon for naming priority. Williston accepted this conclusion and went back to calling the genus Pteranodon. However, both Williston and Plieninger were incorrect, because unnoticed by both of them was the fact that, in 1891, Seeley himself had finally described and properly named Ornithostoma, assigning it to the species O. sedgwicki. In the 2010s, more research on the identity of Ornithostoma showed that it was probably not Pteranodon or even a close relative, but may in fact have been an azhdarchoid, a different type of toothless pterosaur. Revising species Williston was also the first scientist to critically evaluate all of the Pteranodon species classified by Cope and Marsh. He agreed with most of Marsh's classification, with a few exceptions. First, he did not believe that P. ingens and P. umbrosus could be considered synonyms, which even Cope had come to believe. 
He considered both P. velox and P. longiceps to be dubious; the first was based on non-diagnostic fragments, and the second, though known from a complete skull, probably belonged to one of the other, previously-named species. In 1903, Williston revisited the question of Pteranodon classification, and revised his earlier conclusion that there were seven species down to just three. He considered both P. comptus and P. nanus to be specimens of Nyctosaurus, and divided the others into small (P. velox), medium (P. occidentalis), and large species (P. ingens), based primarily on the shape of their upper arm bones. He thought P. longiceps, the only one known from a skull, could be a synonym of either P. velox or P. occidentalis, based on its size. In 1910, Eaton became the first scientist to publish a more detailed description of the entire Pteranodon skeleton, as it was known at the time. He used his findings to revise the classification of the genus once again based on a better understanding of the differences in pteranodont anatomy. Eaton conducted experiments using clay models of bones to help determine the effects of crushing and flattening on the shapes of the arm bones Williston had used in his own classification. Eaton found that most of the differences in bone shapes could be easily explained by the pressures of fossilization, and concluded that no Pteranodon skeletons had any significant differences from each other besides their size. Therefore, Eaton was left to decide his classification scheme based on differences in the skulls alone, which he assigned to species just as Marsh did, by their size. In the end, Eaton recognized only three valid species: P. occidentalis, P. ingens, and P. longiceps. The discovery of specimens with upright crests, classified by Harksen in 1966 as the new species Pteranodon sternbergi, complicated the situation even further, prompting another revision of the genus by Miller in 1972. 
Because it was impossible to determine crest shape for all of the species based on headless skeletons, Miller concluded that all Pteranodon species except the two based on skulls (P. longiceps and P. sternbergi) must be considered nomina dubia and abandoned. The skull Eaton thought belonged to P. ingens was placed in the new species Pteranodon marshi, and the skull Eaton assigned to P. occidentalis was re-named Pteranodon eatoni. Miller also recognized another species based on a skull with a crest similar to that of P. sternbergi; Miller named this Pteranodon walkeri. To help bring order to this tangle of names, Miller created three categories or "subgenera" for them. P. marshi and P. longiceps were placed in the subgenus Longicepia, though this was later changed to simply Pteranodon due to the rules of priority. P. sternbergi and P. walkeri, the upright-crested species, were given the subgenus Sternbergia, which was later changed to Geosternbergia because Sternbergia was already in use ("preoccupied"). Finally, Miller named the subgenus Occidentalia for P. eatoni, the skull formerly associated with P. occidentalis. Miller further expanded the concept of Pteranodon to include Nyctosaurus as a fourth subgenus. Miller considered these to be an evolutionary progression, with the primitive Nyctosaurus, at the time thought to be crestless, giving rise to Occidentalia (with a small crest), which in turn gave rise to Pteranodon with its long backwards crest, finally leading to Geosternbergia with its large, upright crest. However, Miller made several mistakes in his study concerning which specimens Marsh had assigned to which species, and most scientists disregarded his work on the subject in their later research, though Wellnhofer (1978) followed Miller's species list, and Schoch (1984) somewhat oddly published another revision that essentially returned to Marsh's original classification scheme, most notably sinking P. longiceps as a synonym of P. ingens. 
Recognizing variation During the early 1990s, S. Christopher Bennett also published several major papers reviewing the anatomy, taxonomy and life history of Pteranodon. Fragmentary fossils assigned to Pteranodon have also been discovered in Skåne, Sweden. Description Pteranodon species are extremely well represented in the fossil record, allowing for detailed descriptions of their anatomy and analysis of their life history. Over 1,000 specimens have been identified, though less than half are complete enough to give researchers good anatomical information. Still, this is more fossil material than is known for any other pterosaur, and it includes both male and female specimens of various age groups and possibly species. Adult Pteranodon specimens from the two major species can be divided into two distinct size classes. The smaller class of specimens has small, rounded head crests and very wide pelvic canals, even wider than those of the much larger size class. The size of the pelvic canal probably allowed the laying of eggs, indicating that these smaller adults are females. The larger size class, representing male individuals, has narrow hips and very large crests, which were probably for display. Adult male Pteranodon were among the largest pterosaurs, and were the largest flying animals known until the late 20th century, when the giant azhdarchid pterosaurs were discovered. The wingspan of an average adult male Pteranodon was 5.6 meters. Adult females were much smaller, averaging 3.8 meters in wingspan. The largest specimen of Pteranodon longiceps from the Niobrara Formation measured 6.25 meters from wingtip to wingtip. An even larger specimen is known from the Pierre Shale Formation, with a wingspan of 7.25 meters, though this specimen may belong to the distinct genus and species Geosternbergia maysei. While most specimens are found crushed, enough fossils exist to put together a detailed description of the animal. 
Methods used to estimate the mass of large male Pteranodon specimens (those with wingspans of about 7 meters) have been notoriously unreliable, producing a wide range of estimates. In a review of pterosaur size estimates published in 2010, researchers Mark Witton and Mike Habib argued that the largest estimate of 544 kg is much too high and an upper limit of 200 to 250 kg is more realistic. Witton and Habib considered the methods used by researchers who obtained smaller mass estimates equally flawed. Most have been produced by scaling modern animals such as bats and birds up to Pteranodon size, despite the fact that pterosaurs have vastly different body proportions and soft tissue anatomy from any living animal. Other distinguishing characteristics that set Pteranodon apart from other pterosaurs include narrow neural spines on the vertebrae, plate-like bony ligaments strengthening the vertebrae above the hip, and a relatively short tail in which the last few vertebrae are fused into a long rod. The entire length of the tail was about 3.5% as long as the wingspan, or up to about 25 centimeters in the largest males. Skull and beak Unlike earlier pterosaurs, such as Rhamphorhynchus and Pterodactylus, Pteranodon had toothless beaks, similar to those of birds. Pteranodon beaks were made of solid, bony margins that projected from the base of the jaws. The beaks were long, slender, and ended in thin, sharp points. The upper jaw, which was longer than the lower jaw, was curved upward; while this normally has been attributed only to the upward-curving beak, one specimen (UALVP 24238) has a curvature corresponding with the beak widening towards the tip. While the tip of the beak is not known in this specimen, the level of curvature suggests it would have been extremely long. The unique form of the beak in this specimen led Alexander Kellner to assign it to a distinct genus, Dawndraco, in 2010. The most distinctive characteristic of Pteranodon is its cranial crest. 
These crests consisted of skull bones (frontals) projecting upward and backward from the skull. The size and shape of these crests varied due to a number of factors, including age, sex, and species. Male Pteranodon sternbergi, the older species of the two described to date (and sometimes placed in the distinct genus Geosternbergia), had a more vertical crest with a broad forward projection, while their descendants, Pteranodon longiceps, evolved a narrower, more backward-projecting crest. Females of both species were smaller and bore small, rounded crests. The crests were probably mainly display structures, though they may have had other functions as well. Paleobiology Flight The wing shape of Pteranodon suggests that it would have flown rather like a modern-day albatross. This is based on the fact that Pteranodon had a high aspect ratio (wingspan to chord length) similar to that of the albatross — 9:1 for Pteranodon, compared to 8:1 for an albatross. Albatrosses spend long stretches of time at sea fishing. 
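The aspect-ratio comparison above is easy to turn into numbers: aspect ratio here is wingspan divided by mean chord, so a 9:1 wing is, on average, nine times as long as it is wide. A small illustrative calculation (the 7 m span below is a round example value, not a measurement of any particular specimen):

```python
def mean_chord(wingspan_m: float, aspect_ratio: float) -> float:
    """Aspect ratio = wingspan / mean chord, so mean chord = wingspan / AR."""
    return wingspan_m / aspect_ratio

# A 7 m wingspan at Pteranodon's ~9:1 aspect ratio implies a narrow wing,
# versus a slightly broader one at the albatross's ~8:1.
print(round(mean_chord(7.0, 9.0), 2))  # 0.78
print(round(mean_chord(7.0, 8.0), 2))  # 0.88
```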
English, like some other languages, uses a periphrastic passive. Rather than conjugating directly for voice, English uses the past participle form of the verb plus an auxiliary verb, either be or get (called linking verbs in traditional grammar), to indicate passive voice. The money was donated to the school. The vase got broken during the fight. All men are created equal. If the agent is mentioned, it usually appears in a prepositional phrase introduced by the preposition by. Without agent: The paper was marked. With agent: The paper was marked by Mr. Tan. The subject of the passive voice usually corresponds to the direct object of the corresponding active-voice formulation (as in the above examples), but English also allows passive constructions in which the subject corresponds to an indirect object or preposition complement: We were given tickets. (subject we corresponds to the indirect object of give) Tim was operated on yesterday. (subject Tim corresponds to the complement of the preposition on) In sentences of the second type, a stranded preposition is left. This is called the prepositional passive or pseudo-passive (although the latter term can also be used with other meanings). The active voice is the dominant voice used in English. Many commentators, notably George Orwell in his essay "Politics and the English Language" and Strunk & White in The Elements of Style, have urged minimizing use of the passive voice, but this is almost always based on these commentators' misunderstanding of what the passive voice is. Contrary to common critiques, the passive voice has important uses, with virtually all writers using the passive voice (including Orwell and Strunk & White). There is general agreement that the passive voice is useful for emphasis, or when the receiver of the action is more important than the actor. 
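The be/get + past-participle pattern described above is regular enough that a crude detector can be sketched. This is only an illustrative heuristic; it knows a handful of hard-coded irregular participles, cannot tell a passive from an adjectival participle, and will misjudge many real sentences:

```python
import re

# Forms of the auxiliaries "be" and "get".
BE_GET = r"(?:am|is|are|was|were|be|been|being|get|gets|got|gotten)"
# Toy participle pattern: regular -ed forms plus a few sample irregulars.
PARTICIPLE = r"(?:\w+ed|broken|given|created|taken|written)"

PASSIVE_RE = re.compile(rf"\b{BE_GET}\s+{PARTICIPLE}\b", re.IGNORECASE)

def looks_passive(sentence: str) -> bool:
    """Heuristic: an auxiliary be/get followed directly by a past participle."""
    return PASSIVE_RE.search(sentence) is not None

print(looks_passive("The money was donated to the school."))  # True
print(looks_passive("The vase got broken during the fight.")) # True
print(looks_passive("Mr. Tan marked the paper."))             # False
```

A real system would need part-of-speech tagging to separate "The dog is fed" (passive) from the same string used adjectivally, which is exactly the stative/dynamic ambiguity discussed later in this section.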
Merriam–Webster's Dictionary of English Usage refers to three statistical studies of passive versus active sentences in various periodicals, stating: "the highest incidence of passive constructions was 13 percent. Orwell runs to a little over 20 percent in "Politics and the English Language". Clearly he found the construction useful in spite of his advice to avoid it as much as possible". Defining "passive" In the field of linguistics, the term passive is applied to a wide range of grammatical structures. Linguists therefore find it difficult to define the term in a way that makes sense across all human languages. The canonical passive in European languages has the following properties: The subject is not an agent. There is a change in: word order; or in nominal morphology—the form of the nouns in the sentence. 
There is specific verbal morphology—a particular form of the verb indicates passive voice. The problem arises with non-European languages. Many constructions in these languages share at least one property with the canonical European passive, but not all. While it seems justified to call these constructions passive when comparing them to European languages' passive constructions, as a whole the passives of the world's languages do not share a single common feature. R. M. W. Dixon has defined four criteria for determining whether a construction is a passive: It applies to underlying transitive clauses and forms a derived intransitive. The entity that is the patient or the object of the transitive verb in the underlying representation (indicated as O in linguistic terminology) becomes the core argument of the clause (indicated as S, since the core argument is the subject of an intransitive). The agent in the underlying representation (indicated as A) becomes a chômeur, a noun in the periphery that is not a core argument. It is marked by a non-core case or becomes part of an adpositional phrase, etc. This can be omitted, but there is always the option of including it. There is some explicit marking of the construction. Dixon acknowledges that this excludes some constructions labeled as passive by some linguists. Adversative passive In some languages, including several Southeast Asian languages, the passive voice is sometimes used to indicate that an action or event was unpleasant or undesirable. This so-called adversative passive works like the ordinary passive voice in terms of syntactic structure—that is, a theme or instrument acts as subject. In addition, the construction indicates adversative affect, suggesting that someone was negatively affected. In Japanese, for example, the adversative passive (also called indirect passive) indicates adversative affect. The indirect or adversative passive has the same form as the direct passive. 
Unlike the direct passive, the indirect passive may be used with intransitive verbs. Yup'ik, from the Eskimo–Aleut family, has two different suffixes that can indicate passive, -cir- and -ma-. The morpheme -cir- has an adversative meaning. If an agent is included in a passive sentence with the -cir passive, the noun is usually in the allative (oblique) case. Stative and dynamic passive In some languages, for example English, there is often a similarity between clauses expressing an action or event in the passive voice and clauses expressing a state. For example, the string of words "The dog is fed" can have the following two different meanings: The dog is fed (twice a day). The dog is fed (so we can leave now). The additions in parentheses "force" the same string of words to clearly show only one of their two possible grammatical functions and the related meaning. In the first sentence, the combination of the auxiliary verb "is" and the past participle "fed" is a regular example of the construction of the passive voice in English. In the second sentence, "is" can however be interpreted as an ordinary copula and the past participle as an adjective. Sentences of the second type are called false passives by some linguists, who feel that such sentences are simply confused with the passive voice due to their outward similarity. Other linguists consider the second type to be a different kind of passive – a stative passive (rarely called statal, static, or resultative passive), in contrast to the dynamic or eventive passive illustrated by the first sentence. Some languages express or can express these different meanings using different constructions. The difference between dynamic and stative passives is more evident in languages such as German that use different words or constructions for the two. 
In German, the auxiliary verb sein marks the static passive (German: Zustandspassiv), while werden marks the dynamic passive (Vorgangspassiv). The English string of words "the lawn is mown" has two possible meanings corresponding to the stative and dynamic senses described above. 
In the following, the abbreviation x =def x1, ..., xn is used; subscripts may be applied if the meaning requires. #A: A function φ definable explicitly from functions Ψ and constants q1, ..., qn is primitive recursive in Ψ. #B: The finite sum Σy<z ψ(x, y) and product Πy<z ψ(x, y) are primitive recursive in ψ. #C: A predicate P obtained by substituting functions χ1, ..., χm for the respective variables of a predicate Q is primitive recursive in χ1, ..., χm, Q. #D: The following predicates are primitive recursive in Q and R: NOT Q: ¬Q(x), Q OR R: Q(x) V R(x), Q AND R: Q(x) & R(x), Q IMPLIES R: Q(x) → R(x), Q is equivalent to R: Q(x) ≡ R(x). #E: The following predicates are primitive recursive in the predicate R: (Ey)y<z R(x, y), where (Ey)y<z denotes "there exists at least one y that is less than z such that"; (y)y<z R(x, y), where (y)y<z denotes "for all y less than z it is true that"; and μyy<z R(x, y). The operator μyy<z R(x, y) is a bounded form of the so-called minimization- or mu-operator, defined as "the least value of y less than z such that R(x, y) is true; or z if there is no such value." #F: Definition by cases: The function ψ defined thus, where Q1, ..., Qm are mutually exclusive predicates (i.e. "ψ(x) shall have the value given by the first clause that applies"), is primitive recursive in φ1, ..., φm+1, Q1, ..., Qm: ψ(x) = φ1(x) if Q1(x) is true, ..., φm(x) if Qm(x) is true, φm+1(x) otherwise. #G: If φ satisfies the equation φ(y, x) = χ(y, COURSE-φ(y; x2, ..., xn), x2, ..., xn), then φ is primitive recursive in χ. The value COURSE-φ(y; x2, ..., xn) of the course-of-values function encodes the sequence of values φ(0, x2, ..., xn), ..., φ(y-1, x2, ..., xn) of the original function. Use in first-order Peano arithmetic In first-order Peano arithmetic, there are infinitely many variables (0-ary symbols) but no k-ary non-logical symbols with k>0 other than S, +, *, and ≤. Thus in order to define primitive recursive functions one has to use the following trick by Gödel. By using a Gödel numbering for sequences, for example Gödel's β function, any finite sequence of numbers can be encoded by a single number. Such a number can therefore represent the values of a primitive recursive function up to a given n. Let h be a 1-ary primitive recursion function defined by: where C is a constant and g is an already defined function. 
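Concretely, Gödel's β function is just a remainder operation, β(b, c, i) = b mod (1 + (i + 1)·c); the work lies in choosing b and c so the remainders reproduce a given sequence. A sketch (the names beta and encode are this sketch's own; choosing c as a factorial makes the moduli pairwise coprime, and pow(x, -1, m) for modular inverses assumes Python 3.8+):

```python
from math import factorial, prod

def beta(b, c, i):
    """Gödel's beta function: decode the i-th sequence element from (b, c)."""
    return b % (1 + (i + 1) * c)

def encode(seq):
    """Find (b, c) with beta(b, c, i) == seq[i] for every index i."""
    n = len(seq)
    s = max([n] + list(seq)) + 1
    c = factorial(s)                      # makes the moduli below pairwise coprime
    mods = [1 + (i + 1) * c for i in range(n)]
    N = prod(mods)
    b = 0
    for k, m in zip(seq, mods):           # Chinese remainder theorem: b ≡ k (mod m)
        Ni = N // m
        b += k * Ni * pow(Ni, -1, m)
    return b % N, c

b, c = encode([2, 0, 3])
print([beta(b, c, i) for i in range(3)])  # [2, 0, 3]
```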
Using Gödel's β function, for any sequence of natural numbers (k0, k1, ..., kn), there are natural numbers b and c such that, for every i ≤ n, β(b, c, i) = ki. We may thus use the following formula to define h; more precisely, m=h(n) is a shorthand for the following: and the equating to g, being already defined, is in fact shorthand for some other already defined formula (as is β, whose formula is given here). The generalization to any k-ary primitive recursion function is trivial. Relationship to recursive functions The broader class of partial recursive functions is defined by introducing an unbounded search operator. The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, but does not necessarily have any value for any argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function. An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). 
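The unbounded search operator can be sketched as a loop with no predetermined bound, which is exactly why the resulting function may be partial. The max_steps guard below is only a demonstration convenience, not part of the mathematical operator:

```python
def mu(R, max_steps=None):
    """Unbounded minimization: the least y with R(y) true.
    Mathematically this searches forever when no witness exists; the
    optional max_steps cap is added here only so a failed search can
    return None in a demo instead of looping indefinitely."""
    y = 0
    while max_steps is None or y < max_steps:
        if R(y):
            return y
        y += 1
    return None

print(mu(lambda y: y * y >= 20))           # 5: least y with y*y >= 20
print(mu(lambda y: False, max_steps=100))  # None: the search has no witness
```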
This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely: For every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n, and For every m, the function h(n) = f(m,n) is primitive recursive. The function f can be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not primitive recursive itself: had it been such, so would be h(n) = f(n,n)+1. But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), leading to a contradiction. However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true. Limitations Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function—this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. 
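The enumeration argument above works because every primitive recursive function is a finite syntax tree over the base functions and the two operators, and such trees can be listed one by one. A minimal evaluator for such trees makes this concrete (the tuple encoding and the recursion convention f(y+1, x) = h(y, f(y, x), x) are this sketch's own choices):

```python
# Terms: ("Z",) zero | ("S",) successor | ("P", i) projection of argument i
#        ("C", f, [g1, ...]) composition | ("R", g, h) primitive recursion
def ev(t, args):
    tag = t[0]
    if tag == "Z":
        return 0
    if tag == "S":
        return args[0] + 1
    if tag == "P":
        return args[t[1]]
    if tag == "C":                        # f(g1(args), g2(args), ...)
        return ev(t[1], [ev(g, args) for g in t[2]])
    if tag == "R":                        # f(0, x) = g(x); f(y+1, x) = h(y, f(y, x), x)
        y, rest = args[0], args[1:]
        acc = ev(t[1], rest)
        for i in range(y):
            acc = ev(t[2], [i, acc] + rest)
        return acc

ADD = ("R", ("P", 0), ("C", ("S",), [("P", 1)]))       # add(y, x) = y + x
MUL = ("R", ("Z",), ("C", ADD, [("P", 1), ("P", 2)]))  # mul(y, x) = y * x
print(ev(ADD, [3, 4]), ev(MUL, [3, 4]))  # 7 12
```

Enumerating all such trees in order of size and evaluating them is precisely the construction of f(m,n) sketched in the text.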
Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known: The function that takes m to Ackermann(m,m) is a unary total recursive function that is not primitive recursive. The Paris–Harrington theorem involves a total recursive function that is not primitive recursive. The Sudan function The Goodstein function Variants Constant functions Instead of , alternative definitions use just one 0-ary zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Weak primitive recursion The 1-place predecessor function is primitive recursive, see section #Predecessor. Fischer, Fischer & Beigel removed the implicit predecessor from the recursion rule, replacing it by a weaker rule. They proved that the predecessor function still could be defined, and hence that "weak" primitive recursion also defines the primitive recursive functions. Iterative functions Weakening this even further by using functions of arity k+1, removing and from the arguments of completely, we get the iteration rule: The class of iterative functions is defined the same way as the class of primitive recursive functions except with this weaker rule. These are conjectured to be a proper subset of the primitive recursive functions. Additional primitive recursive forms Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions. 
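The Ackermann function mentioned above is striking in that its definition is tiny; it is the nested recursion in the last clause, where the function calls itself to compute one of its own arguments, that pushes it beyond primitive recursion. A direct transcription of the Ackermann–Péter form, memoized so small values evaluate quickly:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann–Péter function: total recursive but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Values explode quickly: A(3, n) = 2**(n + 3) - 3, and A(4, 2) already
# has thousands of decimal digits.
print(ackermann(2, 3), ackermann(3, 3))  # 9 61
```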
The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. Computer language definition An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. + and −, or ADD and SUBTRACT), conditionals | & NOT(pix'|a) lh(a): the "length" or number of non-vanishing exponents in a lo(a, b): (logarithm of a to base b): If a, b > 1 then the greatest x such that bx | a else 0 In the following, the abbreviation x =def x1, ... xn; subscripts may be applied if the meaning requires. #A: A function φ definable explicitly from functions Ψ and constants q1, ... qn is primitive recursive in Ψ. #B: The finite sum Σy<z ψ(x, y) and product Πy<zψ(x, y) are primitive recursive in ψ. #C: A predicate P obtained by substituting functions χ1,..., χm for the respective variables of a predicate Q is primitive recursive in χ1,..., χm, Q. #D: The following predicates are primitive recursive in Q and R: NOT_Q(x) . Q OR R: Q(x) V R(x), Q AND R: Q(x) & R(x), Q IMPLIES R: Q(x) → R(x) Q is equivalent to R: Q(x) ≡ R(x) #E: The following predicates are primitive recursive in the predicate R: (Ey)y<z R(x, y) where (Ey)y<z denotes "there exists at least one y that is less than z such that" (y)y<z R(x, y) where (y)y<z denotes "for all y less than z it is true that" μyy<z R(x, y). The operator μyy<z R(x, y) is a bounded form of the so-called minimization- or mu-operator: Defined as "the least value of y less than z such that R(x, y) is true; or z if there is no such value." 
#F: Definition by cases: The function defined thus, where Q1, ..., Qm are mutually exclusive predicates (or "ψ(x) shall have the value given by the first clause that applies), is primitive recursive in φ1, ..., Q1, ... Qm: φ(x) = φ1(x) if Q1(x) is true, . . . . . . . . . . . . . . . . . . . φm(x) if Qm(x) is true φm+1(x) otherwise #G: If φ satisfies the equation: φ(y,x) = χ(y, COURSE-φ(y; x2, ... xn ), x2, ... xn then φ is primitive recursive in χ. The value COURSE-φ(y; x2 to n ) of the course-of-values function encodes the sequence of values φ(0,x2 to n), ..., φ(y-1,x2 to n) of the original function. Use in first-order Peano arithmetic In first-order Peano arithmetic, there are infinitely many variables (0-ary symbols) but no k-ary non-logical symbols with k>0 other than S, +, *, and ≤. Thus in order to define primitive recursive functions one has to use the following trick by Gödel. By using a Gödel numbering for sequences, for example Gödel's β function, any finite sequence of numbers can be encoded by a single number. Such a number can therefore represent the primitive recursive function until a given n. Let h be a 1-ary primitive recursion function defined by: where C is a constant and g is an already defined function. Using Gödel's β function, for any sequence of natural numbers (k0, k1, ..., kn), there are natural numbers b and c such that, for every i ≤ n, β(b, c, i) = ki. We may thus use the following formula to define h; more precisely, m=h(n) is a shorthand for the following: and the equating to g, being already defined, is in fact shorthand for some other already defined formula (as is β, whose formula is given here). The generalization to any k-ary primitive recursion function is trivial. Relationship to recursive functions The broader class of partial recursive functions is defined by introducing an unbounded search operator. 
The use of this operator may result in a partial function, that is, a relation with at most one value for each argument, but does not necessarily have any value for any argument (see domain). An equivalent definition states that a partial recursive function is one that can be computed by a Turing machine. A total recursive function is a partial recursive function that is defined for every input. Every primitive recursive function is total recursive, but not all total recursive functions are primitive recursive. The Ackermann function A(m,n) is a well-known example of a total recursive function (in fact, provable total), that is not primitive recursive. There is a characterization of the primitive recursive functions as a subset of the total recursive functions using the Ackermann function. This characterization states that a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function. An important property of the primitive recursive functions is that they are a recursively enumerable subset of the set of all total recursive functions (which is not itself recursively enumerable). This means that there is a single computable function f(m,n) that enumerates the primitive recursive functions, namely: For every primitive recursive function g, there is an m such that g(n) = f(m,n) for all n, and For every m, the function h(n) = f(m,n) is primitive recursive.f can be explicitly constructed by iteratively repeating all possible ways of creating primitive recursive functions. Thus, it is provably total. One can use a diagonalization argument to show that f is not recursive primitive in itself: had it been such, so would be h(n) = f(n,n)+1. 
But if this equals some primitive recursive function, there is an m such that h(n) = f(m,n) for all n, and then h(m) = f(m,m), leading to a contradiction. However, the set of primitive recursive functions is not the largest recursively enumerable subset of the set of all total recursive functions. For example, the set of provably total functions (in Peano arithmetic) is also recursively enumerable, as one can enumerate all the proofs of the theory. While all primitive recursive functions are provably total, the converse is not true. Limitations Primitive recursive functions tend to correspond very closely with our intuition of what a computable function must be. Certainly the initial functions are intuitively computable (in their very simplicity), and the two operations by which one can create new primitive recursive functions are also very straightforward. However, the set of primitive recursive functions does not include every possible total computable function; this can be seen with a variant of Cantor's diagonal argument. This argument provides a total computable function that is not primitive recursive. A sketch of the proof is as follows: effectively enumerate the primitive recursive functions of one argument as f0, f1, f2, ...; the function g(n) = fn(n) + 1 is then total and computable, yet differs from every fn at the argument n, so it is not primitive recursive. This argument can be applied to any class of computable (total) functions that can be enumerated in this way, as explained in the article Machine that always halts. Note however that the partial computable functions (those that need not be defined for all arguments) can be explicitly enumerated, for instance by enumerating Turing machine encodings. Other examples of total recursive but not primitive recursive functions are known: The function that takes m to Ackermann(m,m) is a unary total recursive function that is not primitive recursive. The Paris–Harrington theorem involves a total recursive function that is not primitive recursive. 
The Sudan function The Goodstein function Variants Constant functions Instead of the family of constant functions, alternative definitions use just one 0-ary zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function and the composition operator. Weak primitive recursion The 1-place predecessor function is primitive recursive; see section #Predecessor. Fischer, Fischer & Beigel removed the implicit predecessor from the recursion rule, replacing it by the weaker rule f(S(y), x1, ..., xk) = h(f(y, x1, ..., xk), x1, ..., xk), in which the step function no longer has access to the counter y. They proved that the predecessor function still could be defined, and hence that "weak" primitive recursion also defines the primitive recursive functions. Iterative functions Weakening this even further, by removing the parameters x1, ..., xk from the arguments of the step function as well, we get the iteration rule f(S(y), x1, ..., xk) = h(f(y, x1, ..., xk)). The class of iterative functions is defined the same way as the class of primitive recursive functions except with this weaker rule. These are conjectured to be a proper subset of the primitive recursive functions. Additional primitive recursive forms Some additional forms of recursion also define functions that are in fact primitive recursive. Definitions in these forms may be easier to find or more natural for reading or writing. Course-of-values recursion defines primitive recursive functions. Some forms of mutual recursion also define primitive recursive functions. The functions that can be programmed in the LOOP programming language are exactly the primitive recursive functions. This gives a different characterization of the power of these functions. The main limitation of the LOOP language, compared to a Turing-complete language, is that in the LOOP language the number of times that each loop will run is specified before the loop begins to run. Computer language definition An example of a primitive recursive programming language is one that contains basic arithmetic operators (e.g. 
+ and −, or ADD and SUBTRACT), conditionals and comparison (IF-THEN, EQUALS, LESS-THAN), and bounded loops, such as the basic for loop, where there is a known or calculable upper bound to all loops (FOR i FROM 1 TO n, with neither i nor n modifiable by the loop body). No control structures of greater generality, such as while loops or IF-THEN plus GOTO, are admitted in a primitive recursive language. The LOOP language, introduced in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie, is such a language. Its computing power coincides with the primitive recursive functions. A variant of the LOOP language is Douglas Hofstadter's BlooP in Gödel, Escher, Bach. Adding unbounded loops (WHILE, GOTO) makes the language general recursive and Turing-complete, as are all real-world computer programming languages. The definition of primitive recursive functions implies that their computation halts on every input (after a finite number of steps). On the other hand, the halting problem is undecidable for general recursive functions, even if they are total. |
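The bounded-loop restriction can be sketched in Python standing in for LOOP's actual syntax (the function names are illustrative, not LOOP programs): each for loop's trip count is fixed before its body runs, so every program of this shape is guaranteed to halt.

```python
def add(x, y):
    # LOOP-style addition: apply the successor operation y times;
    # the loop bound y is fixed before the body executes and the
    # body cannot modify it.
    for _ in range(y):
        x = x + 1
    return x

def mul(x, y):
    # Multiplication as y bounded repetitions of addition.
    acc = 0
    for _ in range(y):
        acc = add(acc, x)
    return acc

def power(x, y):
    # Exponentiation as y bounded repetitions of multiplication.
    acc = 1
    for _ in range(y):
        acc = mul(acc, x)
    return acc
```

An unbounded while loop is precisely what such a language must forbid: adding it would provide the unbounded search (μ) operator and make the language Turing-complete, as the text notes.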
refer to: Peisistratus (Odyssey), son of Nestor who appears in The Odyssey; Peisistratos of Lapithos, 4th century BC; Peisistratus of Orchomenus, king of Arcadian Orchomenus during the Peloponnesian War; Pisistratus the Younger, 
authority. Insignia Official office and residence Located near the Diet building, the Office of the Prime Minister of Japan is called the Kantei. The original Kantei served from 1929 until 2002, when a new building was inaugurated to serve as the current Kantei. The old Kantei was then converted into the Official Residence, or Kōtei. The Kōtei lies to the southwest of the Kantei, and is linked to it by a walkway. Travel The Prime Minister of Japan travels in a Toyota Century, which replaced the Lexus LS 600h L in 2019. For overseas air travel, the Japanese government maintains two Boeing 777s, which likewise replaced the Boeing 747-400s in 2019. The aircraft are also used by the emperor, the members of the Imperial family, and other high-ranking officials. They have the radio callsigns Japanese Air Force One and Japanese Air Force Two when operating on official business, and Cygnus One and Cygnus Two when operating outside of official business (e.g., on training flights). The aircraft always fly together on government missions, with one serving as the primary transport and the other serving as a backup with maintenance personnel on board. The aircraft are officially referred to as . Honours and emoluments Until the mid-1930s, the prime minister of Japan was normally granted a hereditary peerage (kazoku) prior to leaving office if he had not already been ennobled. Titles were usually bestowed in the ranks of count, viscount or baron, depending on the relative accomplishments and status of the prime minister. The two highest ranks, marquess and prince, were only bestowed upon highly distinguished statesmen, and were not granted to a prime minister after 1928. The last prime minister who was a peer was Baron Kijūrō Shidehara, who served as Prime Minister from October 1945 to May 1946. The peerage was abolished when the Constitution of Japan came into effect in May 1947. Certain eminent prime ministers have been awarded the Order of the Chrysanthemum, typically in the degree of Grand Cordon. 
The highest honour in the Japanese honours system, the Collar of the Order of the Chrysanthemum, has only been conferred upon select prime ministers and eminent statesmen; the last such award to a living prime minister was to Saionji Kinmochi in 1928. More often, the Order of the Chrysanthemum has been a posthumous distinction; the Collar of the order was last awarded posthumously to former prime | appointed by the emperor of Japan after being designated by the National Diet and must enjoy the confidence of the House of Representatives to remain in office. He is the head of the Cabinet and appoints and dismisses the other ministers of state. The literal translation of the Japanese name for the office is Minister for the Comprehensive Administration of (or the Presidency over) the Cabinet. The current prime minister of Japan is Fumio Kishida, who replaced Yoshihide Suga on 4 October 2021. History Before the adoption of the Meiji Constitution, Japan had in practice no written constitution. Originally, a Chinese-inspired legal system known as ritsuryō was enacted in the late Asuka period and early Nara period. It described a government based on an elaborate and rational meritocratic bureaucracy, serving, in theory, under the ultimate authority of the emperor; although in practice, real power was often held elsewhere, such as in the hands of the Fujiwara clan, who intermarried with the imperial family in the Heian period, or by the ruling shōgun. Theoretically, the last ritsuryō code, the Yōrō Code enacted in 752, was still in force at the time of the Meiji Restoration. Under this system, the Daijō-daijin (Chancellor of the Realm) was the head of the Daijō-kan (Department of State), the highest organ of Japan's pre-modern Imperial government during the Heian period and until briefly under the Meiji Constitution with the appointment of Sanjō Sanetomi in 1871. 
The office was replaced in 1885 with the appointment of Itō Hirobumi to the new position of Minister President of State, four years before the enactment of the Meiji Constitution, which mentions neither the Cabinet nor the position of Prime Minister explicitly. It took its current form with the adoption of the Constitution of Japan in 1947. To date, 64 people have served in this position. The longest-serving prime minister to date is Shinzō Abe, who served as prime minister in two terms: from 26 September 2006 until 26 September 2007, and from 26 December 2012 until 16 September 2020. Appointment The prime minister is designated by both houses of the Diet, before the conduct of any other business. For that purpose, each conducts a ballot under the run-off system. If the two houses choose different individuals, then a joint committee of both houses is appointed to agree on a common candidate. Ultimately, however, if the two houses do not agree within ten days, the decision of the House of Representatives is deemed to be that of the Diet. Therefore, the House of Representatives can theoretically ensure the appointment of any prime minister it wants. The candidate is then presented with his or her commission, and formally 
recognition particle (SRP) recognizes an N-terminal signal peptide of the nascent protein. Binding of the SRP temporarily pauses synthesis while the ribosome-protein complex is transferred to an SRP receptor on the ER in eukaryotes, and the plasma membrane in prokaryotes. There, the nascent protein is inserted into the translocon, a membrane-bound protein-conducting channel composed of the Sec61 translocation complex in eukaryotes, and the homologous SecYEG complex in prokaryotes. In secretory proteins and type I transmembrane proteins, the signal sequence is immediately cleaved from the nascent polypeptide by signal peptidase once it has been translocated into the membrane of the ER (eukaryotes) or plasma membrane (prokaryotes). The signal sequences of type II membrane proteins and some polytopic membrane proteins are not cleaved off and are therefore referred to as signal anchor sequences. Within the ER, the protein is first covered by a chaperone protein to protect it from the high concentration of other proteins in the ER, giving it time to fold correctly. Once folded, the protein is modified as needed (for example, by glycosylation), then transported to the Golgi for further processing, and either goes on to its target organelles or is retained in the ER by various ER retention mechanisms. The amino acid chain of transmembrane proteins, which often are transmembrane receptors, passes through a membrane one or several times. These proteins are inserted into the membrane by translocation, until the process is interrupted by a stop-transfer sequence, also called a membrane anchor or signal-anchor sequence. These complex membrane proteins are currently characterized using the same model of targeting that has been developed for secretory proteins. However, many complex multi-transmembrane proteins contain structural aspects that do not fit this model. 
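Because signal sequences and transmembrane segments are marked by stretches of hydrophobic residues, a common first-pass computational screen is a sliding-window hydropathy scan. The sketch below uses the Kyte–Doolittle hydropathy scale; the window size, threshold, and function names are illustrative assumptions, not parameters given in the text:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}

def hydropathy_windows(seq, window=9):
    """Mean hydropathy of each sliding window along the sequence."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

def has_hydrophobic_core(seq, window=9, threshold=1.6):
    """Crude flag for a candidate hydrophobic h-region near the N-terminus."""
    return any(s > threshold for s in hydropathy_windows(seq[:30], window))
```

This kind of scan only flags candidates; actual signal-peptide prediction also weighs the charged n-region and the cleavage-site c-region described above.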
Seven-transmembrane G-protein-coupled receptors (which represent about 5% of the genes in humans) mostly do not have an amino-terminal signal sequence. In contrast to secretory proteins, the first transmembrane domain acts as the first signal sequence, which targets them to the ER membrane. This also results in the translocation of the amino terminus of the protein into the ER lumen. This translocation, which has been demonstrated with opsin in in vitro experiments, breaks the usual pattern of "co-translational" translocation which has always held for mammalian proteins targeted to the ER. A great deal of the mechanics of transmembrane topology and folding remains to be elucidated. Post-translational translocation Even though most secretory proteins are co-translationally translocated, some are translated in the cytosol and later transported to the ER/plasma membrane by a post-translational system. In prokaryotes this process requires certain cofactors such as SecA and SecB; in eukaryotes it is facilitated by Sec62 and Sec63, two membrane-bound proteins. The Sec63 complex, which is embedded in the ER membrane, causes hydrolysis of ATP, allowing chaperone proteins to bind to an exposed peptide chain and slide the polypeptide into the ER lumen. Once in the lumen the polypeptide chain can be folded properly. This process only occurs in unfolded proteins located in the cytosol. In addition, proteins targeted to other cellular destinations, such as mitochondria, chloroplasts, or peroxisomes, use specialized post-translational pathways. Proteins targeted for the nucleus are also translocated post-translationally, through the addition of a nuclear localization signal (NLS) that promotes passage through the nuclear envelope via nuclear pores. Sorting of proteins Mitochondria Most mitochondrial proteins are synthesized as cytosolic precursors containing uptake peptide signals. Cytosolic chaperones deliver preproteins to channel-linked receptors in the mitochondrial membrane. 
The preprotein with presequence targeted for the mitochondria is bound by receptors and the general import pore (GIP), collectively known as translocase of the outer membrane (TOM), at the outer membrane. It is then translocated through TOM as hairpin loops. The preprotein is transported through the intermembrane space by small TIMs (which also act as molecular chaperones) to the TIM23 or TIM22 (translocases of the inner membrane) at the inner membrane. Within the matrix the targeting sequence is cleaved off and the chain is drawn in by mtHsp70. Three mitochondrial outer membrane receptors are known: TOM70: Binds to internal targeting peptides and acts as a docking point for cytosolic chaperones. TOM20: Binds presequences. TOM22: Binds both presequences and internal targeting peptides. The TOM channel (TOM40) is a cation-specific high-conductance channel with a molecular weight of 410 kDa and a pore diameter of 21 Å. 
The presequence translocase23 (TIM23) is localized to the mitochondrial inner membrane and acts as a pore-forming protein which binds precursor proteins with its N-terminus. TIM23 acts as a translocator for preproteins destined for the mitochondrial matrix, the inner mitochondrial membrane, and the intermembrane space. TIM50 is bound to TIM23 on the inner mitochondrial side and has been found to bind presequences. TIM44 is bound on the matrix side and has been found to bind mtHsp70. The presequence translocase22 (TIM22) binds preproteins exclusively bound for the inner mitochondrial membrane. Mitochondrial matrix targeting sequences are rich in positively charged and hydroxylated amino acids. Proteins are targeted to submitochondrial compartments by multiple signals and several pathways. Targeting to the outer membrane, intermembrane space, and inner membrane often requires another signal sequence in addition to the matrix targeting sequence. Chloroplasts The preprotein for chloroplasts may contain a stromal import sequence or a stromal and thylakoid targeting sequence. The majority of preproteins are translocated through the Toc and Tic complexes located within the chloroplast envelope. In the stroma the stromal import sequence is cleaved off, the protein is folded, and intra-chloroplast sorting to the thylakoids continues. Proteins targeted to the envelope of chloroplasts usually lack a cleavable sorting sequence. Both chloroplasts and mitochondria Many proteins are needed in both mitochondria and chloroplasts. In general the dual-targeting peptide is intermediate in character between the two specific ones. The targeting peptides of these proteins have a high content of basic and hydrophobic amino acids and a low content of negatively charged amino acids; they also have a lower content of alanine and a higher content of leucine and phenylalanine. The dual-targeted proteins have a more hydrophobic targeting peptide than both mitochondrial and chloroplastic ones. 
However, it is difficult to predict whether a peptide is dual-targeted based on its physico-chemical characteristics. Peroxisomes All peroxisomal proteins are encoded by nuclear genes. To date there are two known types of peroxisome targeting signals (PTS): Peroxisome targeting signal 1 (PTS1): a C-terminal tripeptide with the consensus sequence (S/A/C)-(K/R/H)-(L/A). The most common PTS1 is serine-lysine-leucine (SKL). Most peroxisomal matrix proteins possess a PTS1-type signal. Peroxisome targeting signal 2 (PTS2): a nonapeptide located near the N-terminus with the consensus sequence (R/K)-(L/V/I)-XXXXX-(H/Q)-(L/A/F) (where X can be any amino acid). There are also proteins that possess neither of 
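The PTS1 consensus described above is a fixed tripeptide pattern at the C-terminus, so it can be screened for with a simple regular expression. This is a rough sketch for illustration only; real peroxisomal-targeting predictors use more context than the last three residues:

```python
import re

# C-terminal tripeptide consensus (S/A/C)-(K/R/H)-(L/A), e.g. the common "SKL".
PTS1_RE = re.compile(r"[SAC][KRH][LA]$")

def has_pts1(protein_seq):
    """True if the sequence ends in a PTS1-like tripeptide."""
    return PTS1_RE.search(protein_seq) is not None
```

The `$` anchor matters: the motif is only a targeting signal at the extreme C-terminus, not anywhere inside the chain.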
and players return meld cards to their hands. Some varieties accept a "round house" (a king and queen of each suit), which earns a bonus 10 points, awarding a total of 250 points. Trick-taking commences and continues until all held cards have been played. One variation has no "leading" requirement for the bid winner or subsequent trick winner to lead a specific card; however, the rules of "following" are still observed. When adding counters, cards from each player's discard pile are included in totals, for a total of 240 counters per round, plus one counter for winning the final trick. In one variation, to make things more difficult for the bid-winning player, the discard pile created by drawing cards is used by the non-bidding player to score toward tricks. Three-handed In three-handed pinochle, each player plays for himself or herself. The dealer deals 15 cards to each player and three cards to the kitty—a separate pile in the middle. All players review their cards and silently determine their bids. The player to the dealer's left initiates the bidding process. If the player has a meld, he or she is required to open the bidding; otherwise, they may pass or bid. If he or she passes, the obligation to bid passes to the next player, if meld is held. Once a player passes, he or she is out of the auction. Bidding begins at 20 (or as little as 19 for the dealer) and increases in increments of 1. The highest bidder wins the auction and turns up the three-card kitty for all to see. The three widow cards are placed in the bid winner's hand. The bid winner then declares trump and lays down meld. The other two players also lay meld face-up for count. After the appropriate points have been tallied, the bid winner must set aside any three cards that have not been melded. This will reduce the bid winner's hand to 15 cards. For all three players, meld is now returned to each respective player's hand, and the round is played. 
During the round, a player must take at least one trick to "save one's meld", even if the trick contains no points; otherwise, no meld points will be counted for that player during that round. After all tricks are taken, counters are tallied for each player. The three discards by the highest bidder count toward their counter score for the hand, so there is always a total of 25 points for the trick score among the three players. If the highest bidder fails to make their contract by adding meld points and trick points from the play, then their score is negative the amount of the bid for that hand, and the meld count is cancelled. After viewing the kitty, the highest bidder may concede the hand and take a negative score for the amount of their bid; however, they still must name trump, and the other two players score their meld. Conceding the hand does save the trick points opponents would score playing their hands, although opponents will not have an opportunity to lose their meld by failing to take a trick. The game is won when one player reaches 100 points. It is possible for two or all three players to go over 100 on the same hand. There are three methods of resolving ties: (1) playing another hand; (2) the game is extended and becomes a contest to 125 points: if two players exceed 125 points on the same hand, the contest lengthens to 150 points, and this rule holds regardless of score fluctuations (players "going minus", failing to reach their bid amounts, and falling below 100 points); (3) if two players exceed 100 points on the same hand, then the high bidder for that round automatically is declared the winner; if two non-bidders exceed 100 points on the same hand, then either of the other methods applies. Renege Any time a player accidentally misplays during the play portion of the hand, it is called a renege. 
There are various forms of misplay: playing out of suit; sloughing on a trick when holding trump; playing out of turn; failure to discard three cards prior to the play portion of the hand (this constitutes a renege if the bidder has led to the first trick); purposely exposing any portion of one's hand to another player (during the play portion of the game for all players, and also during the meld portion of the hand for non-bidders, with the exception of meld cards); failure to kill (not going over the played card when required to during the hand); and any other action that disrupts the harmony of the game. After play begins, any discovered misdeal not caused by a card-fault in the deck constitutes a renege for the dealer and any player on the dealer's team. If the bidder reneges, they automatically take a double set and the amount of the bid is subtracted from their score. The two opposing players get to count their meld points and the remainder of the hand is thrown in. If either of the two non-bidders misplays, the bidder automatically makes their bid. The bidder gets to score the amount of their bid and meld, the player who misplayed loses all meld and takes a single set, and the third player scores only their meld. If at any point during melding or play it is determined that a non-standard deck is being used, any player may declare a card-fault misdeal. This results in the nullification of the entire hand, including all meld and points obtained. Cutthroat Similar to three-handed pinochle, cutthroat is a simple modification. The dealer deals the entire deck out (16 cards to each player), in packets of four. The player to the left of the dealer begins the bidding once meld has been silently determined by all players. Play continues normally in terms of scoring and trick taking. The only way to win in cutthroat pinochle, however, is to "bid and out", or to have taken the bid and surpassed the predetermined winning score. 
It is then possible for multiple players to go over the winning score, yet if none has taken a bid and met the resulting contract, a win has not occurred and play continues. It is also possible for a person to lose with the high score if they do not take a winning bid. Four-handed Four-handed pinochle, or "partnership pinochle", is played with two teams of two players each. Partners are seated opposite each other. Each player is dealt 12 cards. The opening bid is typically 150, but can be a higher agreed-on value. All four players may bid. Both the bidder's and his partner's scores count towards making the contract. The high bidder names trump. There typically is no kitty; when one is used, its four cards are distributed, one to each player, by the bid winner. Each hand must meld separately. As in the three-handed version, the first player is forced to bid when holding meld. Play is often to 1000 but can increase to 1500 in partnership play. Five-handed and larger Games with five hands or more typically modify partnership pinochle in a variety of ways. They are generally played with 1 1/2 or double decks, with extra dix added or withheld to make an even deal. With an odd number of players, the bidder asks for a desired card in the trump suit, with the first matching player becoming the bidder's partner for that hand. Everyone else plays against the team. In larger groups, one or more players can sit out each hand, allowing the remaining players to follow the appropriate rules for the respective number of players. Check Check pinochle is a gambling variant of three-handed pinochle. It is the same as the game to 1000, except that players keep track of "checks". If playing for $1 stakes, each check gained means that the other two players owe a dollar. The following events cause a gain or loss of checks: flush or run, +1 check; aces, +1 check; roundtable (a marriage in each suit), +2 checks. Note that checks for meld can be earned either by the bidder or a non-bidder. Checks are kept even if the bid is not made. 
Looking at the "talon" and losing the hand (either by conceding or playing), −1 check; playing the hand and losing, −1 additional check; not looking at the talon and conceding 300 points, no gain or loss of checks (this happens when forced to bid); double marriage (two marriages in the same suit), +1 check; double pinochle, +1 check; double aces, double kings, double queens, or double jacks, +2 checks; winning the game, +5 or +10 checks; 7 nines, +5 or +10 checks (a player need not win the bid to get credit). Double-deck Today "double-deck" pinochle is a popular form of the game, played exclusively by the National Pinochle Association, the American Pinochle Association, the Cambridge Pinochle Association, and in the "World Series of Pinochle". Double-deck pinochle is played with two pinochle decks, without the nines. This makes for an 80-card deck. Play is similar to regular pinochle, except 20 cards are dealt to each person and the minimum bid is increased to 500 points. In some variations, bids are made in increments of 10 or more points until 600 is reached, then by 50 points. This version often features "meld bidding", a bid made to let a partner know what is in the bidder's hand. The only communication during bidding should be a number or "pass"; any other way of communicating is called "talking across the table" and is forbidden. There are occasionally different meld values for a run and a pinochle; a run being 250 or 150 points, and a pinochle being 150 or 40 points. All other aspects of the game generally remain the same. Technical misdeal If a player is dealt 13 or more non-counters and no aces, the player may declare a technical misdeal. This must be declared before he or she plays the first trick. A technical misdeal nullifies all points melded for all players. The hand is then re-dealt by the original dealer of that hand. Triple-deck, six-handed This version follows the rules of double-deck pinochle. 
In triple-deck pinochle six play in two partnerships of three each; each player has an opponent at their right and left. Three pinochle decks with no nines are mixed together, making a pack of 120 cards. Each player is dealt 20 cards, and the rules of double-deck pinochle apply, except that the minimum bid is 75 and the last trick is worth 3 points. Most of the extra melds made possible by the triple pack do not count extra; i.e., if a player should hold twenty aces, five of each suit, the value would be that of double aces and triple aces combined. Internet Internet pinochle is almost always "double deck", except for a few applications for some smartphones. Today the Internet is host to many live professional cash tournaments, although many players are still cautious about playing online because of potential cheating. Racehorse Note that this use of the term "racehorse" is inconsistent with the commonly understood meaning of the term when applied to pinochle. As summarized by Dave LeVasseur: "Racehorse means that, after the winning bidder has named trump, that player's partner passes cards across the table." Played much the same as "double deck" but with six hands, and with the point values inflated. Two teams are formed, 20 cards are then dealt to each player, and four cards are dealt to the blind. Bidding commences with the person immediately to the left of the dealer automatically bidding 500. The winner of the bid includes the blind in their hand, calls trump and melds. Note: all runs and double, triple, and quadruple marriages must be in trump. The game continues with the standard rules of play. When the play is over, each team adds up their points in the count, with kings, 10s, and aces worth ten points, while queens and jacks are worth zero. If a team's count plus meld does not reach their bid, they "go set". By going set, the amount of the bid is subtracted from the team's score and their count is discarded. 
The other team retains both their meld and their count, provided they took at least 10 points in the count. Eight-player double-deck Two full decks are dealt between eight players, forming four teams. Team members are spaced so that they are not able to see any other hands. The game is usually played to a score of 5,000 or higher. Other than this, the four-player rules apply, and any variations may also be used. There is an increased possibility that when one team declares trump, another team may have an equal number of trump as well, which may lead to an interesting game. An optional scoring rule rewards 1,000 points for a quadruple pinochle—four jacks of diamonds and four queens of spades in a meld. Alternate end games One variation on winning allows a team or individual to win instantly from any score by taking all the tricks in a hand. To win in this fashion, the winning player or team must play very skillfully to prevent opposing players from taking even one lowly (even zero-point) trick. This victory is known as "pinochling". A player or team can play for this victory even if they are not the highest bidder. "Pinochling" does not require a bidder to make their bid. They also can play for this victory even if their bid cannot be made with the maximum number of trick points available plus their meld. However, the highest-bidding player or team can prevent other players from attempting this if they elect to "throw in" the hand before the first card is played. When playing "bid-out" rules, a team can win without bidding if their score reaches (and remains above) the agreed-upon game-ending score while their opponents fail to make | must follow the lead suit if possible. Usually every player must play a winning card against those played so far, if it is possible to do so, even when the current player expects a later player to win the hand with a better card. The only exception is if a player played a trump card when trump was not the suit led. 
In that case, those following that player may play any card of the lead suit, since they must follow the lead suit but are already losing to the player who played trump. Likewise, if a player cannot follow suit but has trump, they must play trump. Again, if a player does not have any cards of the lead suit and can play a trump card higher than any other trump played so far, the player must do so, even if the player expects that a later player will beat the card. If another trump has already been played that a player cannot beat, then they can play any trump in their hand, but they still must play a trump card if they can. Only when a player has no cards in suit and has no trump can the player choose to play any card in their hand. Most books of post-1945 rules say that unless trump is led, there is no requirement to try to win the trick; it is only when trump is led that "heading" the trick is mandatory. In pinochle circles and tournaments, the post-1945 rules are played about half of the time, according to Pagat and Hoyle. If two identical cards are played, the first one outranks the second. After the first trick, the winner of each trick leads the first card for the next trick, until all the cards are played. Scoring tricks Points are scored based on the tricks won in the hand. There are several ways to count up the points for play, but they always add up to 250 points. The last trick is always worth an additional 10 points added to the points in the actual trick cards. The classic counting system of pinochle is where aces are worth 11, tens are worth 10, kings are worth four, queens are worth three, jacks are worth two, and nines are worth zero. This method takes longer to count the score at the end of each hand. A simpler method is to count aces and tens for 10 points, kings and queens for five points, and jacks and nines for zero.
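As an arithmetic check on the claim that the counts always total 250: a 48-card pinochle pack holds eight copies of each rank (two per suit across four suits), and the last trick adds 10 points. A minimal sketch (the variable names are illustrative only, not from any rulebook):

```python
# Verify that both counting systems described above total 250 points per
# hand: 8 copies of each rank (2 per suit, 4 suits) in a 48-card pack,
# plus 10 points for winning the last trick.
COPIES_PER_RANK = 8
LAST_TRICK_BONUS = 10

classic = {"A": 11, "10": 10, "K": 4, "Q": 3, "J": 2, "9": 0}
simple = {"A": 10, "10": 10, "K": 5, "Q": 5, "J": 0, "9": 0}

for name, values in (("classic", classic), ("simple", simple)):
    total = COPIES_PER_RANK * sum(values.values()) + LAST_TRICK_BONUS
    print(name, total)  # both systems print 250
```

Both systems assign 30 points per rank-set (classic: 11 + 10 + 4 + 3 + 2; simple: 10 + 10 + 5 + 5), so the per-hand total is identical.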
An even simpler method has aces, tens, and kings worth 10 (known as "counters"), and everything else worth zero ("garbage"). Since all points are multiples of ten in the third method, most players drop the redundant zero: aces, tens, and kings won in tricks are worth one point. The meld scoring can also avoid the zero in the tenth place; melds like 1,000 aces are thus worth 100. The terms "1,000 aces", "800 kings" and so on are often used, even though the point values are one-tenth. Game variations Two-handed Two-handed pinochle is the original pinochle game; partnership, auction, and all other variants are derived from it. It is the game most similar to the original Bezique game, from which pinochle was derived via the German game of Binokel. The only significant difference in its rules from Bezique is the scoring. The original version of pinochle involves a partial deal of twelve cards to each player in packets of four, leaving a stock of 24 cards. A player can score one meld after each of the first 12 tricks won. Melded cards can even be used to win tricks. After each trick, players draw one card from the stock into their hand, starting with the trick-winning player. For the last 12 tricks, melds are taken into each player's hand and are no longer announced by the player who wins the trick. The traditional trick-taking rules apply only for these last 12 tricks. In variations of two-handed play, no cards are initially dealt, a distinction from all other variations. Instead, the entire deck is placed face-down on the playing surface between the two players to form the widow. One player begins the hand-building process by drawing the top card of the widow. The player can either keep that card for her or his hand or reject it. If the player chooses to hold the initial card, the player then draws a second card from the widow and places it face-down, without looking at it, creating a discard pile.
If the player rejects the first card, the card becomes the first card in the discard pile. The second card drawn from the widow must then be kept, regardless of whether she or he preferred the first card. Players alternate turns in this hand-building process until all cards are chosen. With bidding, the player winning the bid declares trump, then lays all meld face-up on the table. The other player shows her or his melds as well. Meld points are tallied, and players return meld cards to their hands. Some varieties accept a "round house" (kings and queens of each suit), which earns a bonus 10 points, awarding a total of 250 points. Trick-taking commences and continues until all held cards have been played. One variation has no "leading" requirement for the bid winner or subsequent trick winner to lead a specific card; however, the rules of "following" are still observed. When adding counters, cards from each player's discard pile are included in the totals, for a total of 24 counters per round, plus one counter for winning the final trick. In one variation, to make it more difficult for the bid-winning player, the discard pile created by drawing cards is counted by the non-bidding player toward tricks. Three-handed In three-handed pinochle, each player plays for him or herself. The dealer deals 15 cards to each player and three cards to the kitty, a separate pile in the middle. All players review their cards and silently determine their bids. The player to the dealer's left initiates the bidding process. If the player has a meld, he or she is required to open the bidding; otherwise, they may pass or bid. If he or she passes, the obligation to bid passes to the next player, if meld is held. Once a player passes, he or she is out of the auction. Bidding begins at 20, or as little as 19 for the dealer, and increases in increments of 1. The highest bidder wins the auction and turns up the three-card kitty for all to see. The three kitty cards are placed in the bid winner's hand.
The bid winner then declares trump and lays down meld. The other two players also lay their meld face-up for the count. After the appropriate points have been tallied, the bid winner must set aside three cards that have not been melded, reducing the bid winner's hand to 15 cards. For all three players, meld is now returned to each respective player's hand, and the round is played. During the round, a player must take at least one trick to "save one's meld", even if the trick contains no points; otherwise, no meld points will be counted for that player during that round. After all tricks are taken, counters are tallied for each player. The three discards by the highest bidder count toward their counter score for the hand, so there is always a total of 25 points for the trick score among the three players. If the highest bidder fails to make their contract by adding meld points and trick points from the play, then the amount of the bid is subtracted from their score for that hand, and their meld count is cancelled. After viewing the kitty, the highest bidder may concede the hand and take a negative score for the amount of their bid; however, they still must name trump, and the other two players score their meld. Conceding the hand does save the trick points opponents would score playing their hands, although opponents will not have an opportunity to lose their meld by failing to take a trick. The game is won when one player reaches 100 points. It is possible for two or all three players to go over 100 on the same hand. There are three methods of resolving ties: playing another hand; extending the game to a contest to 125 points (if two players exceed 125 points on the same hand, the contest lengthens to 150 points, and this rule holds regardless of score fluctuations, such as players "going minus" by failing to make their bids and falling below 100 points); or, if two players exceed 100 points on the same hand, automatically declaring the high bidder for that round the winner.
If two non-bidders exceed 100 points on the same hand, then either of the other methods applies. Renege Any time a player accidentally misplays during the play portion of the hand, it is called a renege. There are various forms of misplay: playing out of suit; sloughing on a trick when holding trump; playing out of turn; failure to discard three cards prior to the play portion of the hand (this constitutes a renege if the bidder has led to the first trick); purposely exposing any portion of one's hand to another player (during the play portion of the game for all players, and also during the meld portion of the hand for non-bidders, with the exception of meld cards); failure to kill (not going over the played card when required to during the hand); or any other action that disrupts the harmony of the game. After play begins, any discovered misdeal not caused by a card fault in the deck constitutes a renege for the dealer and any player on the dealer's team. If the bidder reneges, they automatically take a double set, and the amount of the bid is subtracted from their score. The two opposing players get to count their meld points, and the remainder of the hand is thrown in. If either of the two non-bidders misplays, the bidder automatically makes their bid. The bidder gets to score the amount of their bid and meld, the
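The trick-play obligations described earlier (follow the lead suit, head the trick when possible, and trump or overtrump when void) can be summarized as a function returning a follower's legal cards under the stricter pre-1945 "heading" convention. This is a minimal illustrative sketch, not taken from any rulebook; the function name, card representation, and numeric rank encoding are all assumptions:

```python
# Pre-1945 "heading" rules sketch: returns the cards a follower may
# legally play, given the card currently winning the trick.
RANKS = {"9": 1, "J": 2, "Q": 3, "K": 4, "10": 5, "A": 6}

def legal_plays(hand, lead_suit, trump_suit, best_card):
    """`hand` is a list of (rank, suit) pairs; `best_card` is the
    (rank, suit) pair currently winning the trick."""
    best_rank, best_suit = RANKS[best_card[0]], best_card[1]
    in_lead = [c for c in hand if c[1] == lead_suit]
    if in_lead:
        # If the trick was already trumped on a plain-suit lead, any
        # lead-suit card is legal, since none of them can win.
        if best_suit == trump_suit and lead_suit != trump_suit:
            return in_lead
        heading = [c for c in in_lead if RANKS[c[0]] > best_rank]
        return heading or in_lead       # must head the trick if possible
    trumps = [c for c in hand if c[1] == trump_suit]
    if trumps:                          # void in the lead suit: must trump
        over = [c for c in trumps
                if best_suit != trump_suit or RANKS[c[0]] > best_rank]
        return over or trumps           # must overtrump if possible
    return hand                         # void in lead suit and trump

# Holding the king and ace of spades against a ten of spades (hearts
# trump), the trick must be headed, so only the ace is legal.
print(legal_plays([("K", "S"), ("A", "S"), ("9", "D")],
                  "S", "H", ("10", "S")))
```

Under the post-1945 convention mentioned above, the "heading" branch would apply only when trump is led; the follow-suit and must-trump branches would be unchanged.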
bacterium) or L-alanine, D-glutamine, L-lysine, and D-alanine with a 5-glycine interbridge between tetrapeptides in the case of Staphylococcus aureus (a Gram-positive bacterium). Peptidoglycan is one of the most important sources of D-amino acids in nature. Cross-linking between amino acids in different linear amino sugar chains occurs with the help of the enzyme DD-transpeptidase and results in a 3-dimensional structure that is strong and rigid. The specific amino acid sequence and molecular structure vary with the bacterial species. Biosynthesis The peptidoglycan monomers are synthesized in the cytosol and are then attached to a membrane carrier, bactoprenol. Bactoprenol transports peptidoglycan monomers across the cell membrane, where they are inserted into the existing peptidoglycan. In the first step of peptidoglycan synthesis, glutamine, which is an amino acid, donates an amino group to a sugar, fructose 6-phosphate. This turns fructose 6-phosphate into glucosamine-6-phosphate. In step two, an acetyl group is transferred from acetyl CoA to the amino group on the glucosamine-6-phosphate, creating N-acetyl-glucosamine-6-phosphate. In step three of the synthesis process, the N-acetyl-glucosamine-6-phosphate is isomerized, which changes N-acetyl-glucosamine-6-phosphate to N-acetyl-glucosamine-1-phosphate. In step 4, the N-acetyl-glucosamine-1-phosphate, which is now a monophosphate, attacks UTP. Uridine triphosphate, which is a pyrimidine nucleotide, has the ability to act as an energy source. In this particular reaction, after the monophosphate has attacked the UTP, an inorganic pyrophosphate is given off and is replaced by the monophosphate, creating UDP-N-acetylglucosamine (2,4). (When UDP is used as an energy source, it gives off an inorganic phosphate.) This initial stage is used to create the precursor for the NAG in peptidoglycan. In step 5, some of the UDP-N-acetylglucosamine (UDP-GlcNAc) is converted to UDP-MurNAc (UDP-N-acetylmuramic acid) by the addition of a lactyl group to the glucosamine. Also in this reaction, the C3 hydroxyl group removes a phosphate from the alpha carbon of phosphoenolpyruvate. This creates what is called an enol derivative, which is reduced to a "lactyl moiety" by NADPH in step six. In step 7, the UDP-MurNAc is converted to UDP-MurNAc pentapeptide by the addition of five amino acids, usually including the dipeptide D-alanyl-D-alanine. Each of these reactions requires the energy source ATP. This is all referred to as stage one. Stage two occurs in the cytoplasmic membrane. It is in the membrane where a lipid carrier called bactoprenol carries peptidoglycan precursors through the cell membrane.
letters of words: Partial differential equation, a differential equation involving partial derivatives (of a function of multiple variables); The European Democratic Party (esp.; Program development environment; Pug dog encephalitis; Pulse detonation engine, a proposed substitute for the traditional jet engine. Other modes of abbreviation: Phosphodiesterase, an enzyme important in intracellular communication; Polydichloric euthimal, a fictional substance; Pde (or Pde.), "Parade" (when serving as part of
temporal issues and dynastic considerations. Nepotism Sixtus IV sought to strengthen his position by surrounding himself with relatives and friends.
In the fresco by Melozzo da Forlì, he is accompanied by his Della Rovere and Riario nephews, not all of whom were made cardinals: the protonotary apostolic Pietro Riario (on his right); the future Pope Julius II, Giuliano Della Rovere, standing before him; and Girolamo Riario and Giovanni della Rovere, behind the kneeling Platina, author of the first humanist history of the popes. His nephew Pietro Riario also benefited from his nepotism. Pietro became one of the richest men in Rome and was entrusted with Pope Sixtus' foreign policy. However, Pietro died prematurely in 1474, and his role passed to Giuliano Della Rovere. The secular fortunes of the Della Rovere family began when Sixtus invested his nephew Giovanni with the lordship of Senigallia and arranged his marriage to the daughter of Federico III da Montefeltro, duke of Urbino; from that union came a line of Della Rovere dukes of Urbino that lasted until the line expired in 1631. Six of the thirty-four cardinals that he created were his nephews. In his territorial aggrandizement of the Papal States, his niece's son, Cardinal Raffaele Riario (for whom the Palazzo della Cancelleria was constructed), was suspected of colluding in the failed Pazzi conspiracy of 1478 to assassinate both Lorenzo de' Medici and his brother Giuliano and replace them in Florence with Sixtus IV's other nephew, Girolamo Riario. Francesco Salviati, Archbishop of Pisa and a main organizer of the plot, was hanged on the walls of the Florentine Palazzo della Signoria. Sixtus IV replied with an interdict and two years of war with Florence. According to the later published chronicle of the Italian historian Stefano Infessura, the Diary of the City of Rome, Sixtus was a "lover of boys and sodomites", awarding benefices and bishoprics in return for sexual favours and nominating a number of young men as cardinals, some of whom were celebrated for their good looks.
However, Infessura had partisan allegiances to the Colonna and so is not considered to be always reliable or impartial. The English churchman and Protestant polemicist John Bale, writing a century later, attributed to Sixtus the grant of "the authorisation to practice sodomy during periods of warm weather" to the "Cardinal of Santa Lucia". This prompted the noted historian of the Catholic Church, Ludwig von Pastor, to issue a firm rebuttal. Foreign policy Sixtus continued a dispute with King Louis XI of France, who upheld the Pragmatic Sanction of Bourges (1438), which held that papal decrees needed royal assent before they could be promulgated in France. That was a cornerstone of the privileges claimed for the Gallican Church, and it could never be shifted as long as Louis XI manoeuvred to replace King Ferdinand I of Naples with a French prince. Louis was thus in conflict with the papacy, and Sixtus could not permit it. On 1 November 1478, Sixtus published the papal bull Exigit Sincerae Devotionis Affectus, through which the Spanish Inquisition was established in the Kingdom of Castile. Sixtus consented under political pressure from Ferdinand of Aragon, who threatened to withhold military support from his kingdom of Sicily. Nevertheless, Sixtus IV quarrelled over protocol and prerogatives of jurisdiction; he was unhappy with the excesses of the Inquisition and condemned the most flagrant abuses in 1482. As a temporal prince who constructed stout fortresses in the Papal States, he encouraged the Venetians to attack Ferrara, which he wished to obtain for another nephew. Ercole I d'Este, Duke of Ferrara, was allied with the Sforzas of Milan and the Medicis of Florence, along with the King of Naples, normally a hereditary ally and champion of the papacy. The angered Italian princes allied to force Sixtus IV to make peace, to his great annoyance.
For refusing to desist from the very hostilities that he himself had instigated, and for being a dangerous rival to Della Rovere dynastic ambitions in the Marche, Sixtus placed Venice under interdict in 1483. He also lined the coffers of the state by unscrupulously selling high offices and privileges. In ecclesiastical affairs, Sixtus promoted the doctrine of the Immaculate Conception, which had been confirmed at the Council of Basle in 1439, and he designated 8 December as its feast day. In 1476, he issued the apostolic constitution Cum Praeexcelsa, establishing a Mass and Office for the feast. He formally annulled the decrees of the Council of Constance in 1478. Slavery The two papal bulls issued by Pope Nicholas V, Dum Diversas of 1452 and Romanus Pontifex of 1455, had effectively given the Portuguese the rights to acquire slaves along the African coast by force or trade. Those concessions were confirmed by Sixtus in his own bull, Aeterni regis, of 21 June 1481. Arguably, the "ideology of conquest" expounded in those texts became the means by which commerce and conversion were facilitated. In November 1476, Isabel and Fernando ordered an investigation into rights of conquest in the Canary Islands, and in the spring of 1478, they sent Juan Rejon with sixty soldiers and thirty cavalry to the Grand Canary, where the natives retreated inland. Sixtus's earlier threats to excommunicate all captains or pirates who enslaved Christians, made in the bull Regimini Gregis of 1476, could have been intended to emphasise the need to convert the natives of the Canary Islands and Guinea and to establish a clear difference in status between those who had converted and those who resisted. The ecclesiastical penalties were directed towards those who were enslaving the recent converts. Princely patronage As a civic patron in Rome, even the anti-papal chronicler Stefano Infessura agreed that Sixtus should be admired.
The dedicatory inscription in the fresco by Melozzo da Forlì in the Vatican Palace records: "You gave your city temples, streets, squares, fortifications, bridges and restored the Acqua Vergine as far as the Trevi..." In addition to restoring the aqueduct that provided Rome an alternative to the river water, which had made the city famously unhealthy, he restored or rebuilt over 30 of Rome's dilapidated churches such as San Vitale (1475) and Santa Maria del Popolo, and he added seven new ones. The Sistine Chapel was sponsored by Sixtus IV, as was the Ponte Sisto, the Sistine Bridge (the first new bridge across the Tiber since Antiquity) and the building of Via Sistina (later named Borgo Sant'Angelo), a road leading from Castel Sant'Angelo to Saint Peter. All of that was done to facilitate the integration of the Vatican Hill and Borgo with the heart of Old |
November 1902 known as the "Thousand Days War." The US was fully aware of these conditions and even incorporated them into the planning of the Panama intervention, acting as an arbitrator between the two sides; the peace treaty that ended the Thousand Days War was signed on the USS Wisconsin on November 21, 1902. While in port, the US also brought engineering teams to Panama, with the peace delegation, to begin planning for the canal's construction before the US had even gained the rights to build the canal. All these factors left the Colombians unable to put down the Panamanian rebellion and expel the United States troops occupying what today is the independent nation of Panama. On November 6, 1903, Philippe Bunau-Varilla, as Panama's ambassador to the United States, signed the Hay–Bunau-Varilla Treaty, granting rights to the United States to build and indefinitely administer the Panama Canal Zone and its defenses. This is sometimes misinterpreted as a "99-year lease" because of misleading wording included in article 22 of the agreement. Almost immediately, the treaty was condemned by many Panamanians as an infringement on their country's new national sovereignty. This would later become a contentious diplomatic issue among Colombia, Panama, and the United States. President Roosevelt famously stated, "I took the Isthmus, started the canal and then left Congress not to debate the canal, but to debate me." Several parties in the United States called this an act of war on Colombia: The New York Times described the support given by the United States to Bunau-Varilla as an "act of sordid conquest," and The New York Evening Post called it a "vulgar and mercenary venture." The US maneuvers are often cited as the classic example of US gunboat diplomacy in Latin America, and the best illustration of what Roosevelt meant by the old African adage, "Speak softly and carry a big stick [and] you will go far."
After the revolution in 1903, the Republic of Panama became a US protectorate until 1939. In 1904, the United States purchased the French equipment and excavations, including the Panama Railroad, for US$40 million, of which $30 million related to excavations completed, primarily in the Culebra Cut, valued at about $1.00 per cubic yard. The United States also paid the new country of Panama $10 million, plus a $250,000 payment each following year. In 1921, Colombia and the United States entered into the Thomson–Urrutia Treaty, in which the United States agreed to pay Colombia $25 million ($5 million upon ratification and four annual payments of $5 million) and to grant Colombia special privileges in the Canal Zone. In return, Colombia recognized Panama as an independent nation. United States construction of the Panama Canal, 1904–1914 The US formally took control of the canal property on May 4, 1904, inheriting from the French a depleted workforce and a vast jumble of buildings, infrastructure, and equipment, much of it in poor condition. A US government commission, the Isthmian Canal Commission (ICC), was established to oversee construction; it was given control of the Panama Canal Zone, over which the United States exercised sovereignty. The commission reported directly to Secretary of War William Howard Taft and was directed to avoid the inefficiency and corruption that had plagued the French 15 years earlier. On May 6, 1904, President Theodore Roosevelt appointed John Findley Wallace, formerly chief engineer and finally general manager of the Illinois Central Railroad, as chief engineer of the Panama Canal Project. Overwhelmed by the disease-plagued country, forced to use often dilapidated French infrastructure and equipment, and frustrated by the overly bureaucratic ICC, Wallace resigned abruptly in June 1905. He was succeeded by John Frank Stevens, a self-educated engineer who had built the Great Northern Railroad.
Stevens was not a member of the ICC; he increasingly viewed its bureaucracy as a serious hindrance, bypassing the commission and sending requests and demands directly to the Roosevelt administration in Washington, DC. One of Stevens' first achievements in Panama was in building and rebuilding the housing, cafeterias, hotels, water systems, repair shops, warehouses, and other infrastructure needed by the thousands of incoming workers. Stevens began the recruitment effort to entice thousands of workers from the United States and other areas to come to the Canal Zone to work. Workers from the Caribbean—called "Afro-Panamanians"—came in large numbers and many settled permanently. Stevens tried to provide accommodation in which the workers could work and live in reasonable safety and comfort. He also re-established and enlarged the railway, which was to prove crucial in transporting millions of tons of soil from the cut through the mountains to the dam across the Chagres River. Colonel William C. Gorgas had been appointed chief sanitation officer of the canal construction project in 1904. Gorgas implemented a range of measures to minimize the spread of deadly diseases, particularly yellow fever and malaria, which had recently been shown to be mosquito-borne following the work of Dr. Carlos Finlay and Dr. Walter Reed. Investment was made in extensive sanitation projects, including city water systems, fumigation of buildings, spraying of insect-breeding areas with oil and larvicide, installation of mosquito netting and window screens, and elimination of stagnant water. Despite opposition from the commission (one member said his ideas were barmy), Gorgas persisted, and when Stevens arrived, he threw his weight behind the project. After two years of extensive work, the mosquito-spread diseases were nearly eliminated. Even after all that effort, about 5,600 workers died of disease and accidents during the US construction phase of the canal. 
In 1905, a US engineering panel was commissioned to review the canal design, which had not yet been finalized. In January 1906, the panel, by a majority of eight to five, recommended to President Roosevelt a sea-level canal, as had been attempted by the French and abandoned by them temporarily in 1887 for a ten-lock system designed by Philippe Bunau-Varilla, and definitively in 1898 for a lock-and-lake canal designed by the Comité Technique of the Compagnie Nouvelle du Canal de Panama, as conceptualized by Adolphe Godin de Lépinay in 1879. But in 1906 Stevens, who had seen the Chagres in full flood, was summoned to Washington; he declared a sea-level approach to be "an entirely untenable proposition". He argued in favor of a canal using a lock system to raise and lower ships from a large reservoir above sea level. This would create both the largest dam (Gatun Dam) and the largest man-made lake (Gatun Lake) in the world at that time. The water to refill the locks would be taken from Gatun Lake by opening and closing enormous gates and valves, letting gravity propel the water from the lake. Gatun Lake would connect to the Pacific through the mountains at the Gaillard (Culebra) Cut. Unlike Godin de Lépinay at the Congrès International d'Etudes du Canal Interocéanique, Stevens successfully convinced Roosevelt of the necessity and feasibility of this alternative scheme. The construction of a canal with locks required the excavation of far more material over and above that excavated by the French. As quickly as possible, the Americans replaced or upgraded the old, unusable French equipment with new construction equipment designed for a much larger and faster scale of work. 102 large, railroad-mounted steam shovels were purchased, 77 from Bucyrus-Erie and 25 from the Marion Power Shovel Company.
These were joined by enormous steam-powered cranes, giant hydraulic rock crushers, concrete mixers, dredges, and pneumatic power drills, nearly all of which were manufactured by new, extensive machine-building technology developed and built in the United States. The railroad also had to be comprehensively upgraded with heavy-duty, double-tracked rails over most of the line to accommodate new rolling stock. In many places, the new Gatun Lake flooded over the original rail line, and a new line had to be constructed above Gatun Lake's waterline. Goethals replaces Stevens as chief engineer In 1907, Stevens resigned as chief engineer. His replacement, appointed by President Theodore Roosevelt, was US Army Major George Washington Goethals of the US Army Corps of Engineers. Soon to be promoted to lieutenant colonel and later to general, he was a strong, West Point-trained leader and civil engineer with experience in canals (unlike Stevens). Goethals directed the work in Panama to a successful conclusion in 1914, two years ahead of the target date of June 10, 1916. Goethals divided the engineering and excavation work into three divisions: Atlantic, Central, and Pacific. The Atlantic Division, under Major William L. Sibert, was responsible for construction of the massive breakwater at the entrance to Limon Bay, the Gatun locks, and their 3½ mi (5.6 km) approach channel, and the immense Gatun Dam. The Pacific Division, under Sydney B. Williamson (the only civilian member of this high-level team), was similarly responsible for the Pacific 3 mi (4.8 km) breakwater in Panama Bay, the approach channel to the locks, and the Miraflores and Pedro Miguel locks and their associated dams and reservoirs. The Central Division, under Major David du Bose Gaillard of the United States Army Corps of Engineers, was assigned one of the most difficult parts: excavating the Culebra Cut through the continental divide to connect Gatun Lake to the Pacific Panama Canal locks. 
On October 10, 1913, President Woodrow Wilson sent a signal from the White House by telegraph which triggered the explosion that destroyed the Gamboa Dike. This flooded the Culebra Cut, thereby joining the Atlantic and Pacific oceans via the Panama Canal. Alexandre La Valley (a floating crane built by Lobnitz & Company and launched in 1887) was the first self-propelled vessel to transit the canal from ocean to ocean. This vessel crossed the canal from the Atlantic in stages during construction, finally reaching the Pacific on January 7, 1914. SS Cristobal (a cargo and passenger ship built by Maryland Steel, and launched in 1902 as SS Tremont) was, on August 3, 1914, the first ship to transit the canal from ocean to ocean. The construction of the canal was completed in 1914, 401 years after Panama was first crossed overland by Europeans, by Vasco Núñez de Balboa's party of conquistadores. The United States spent almost $500 million (roughly equivalent to $ billion in ) to finish the project. This was by far the largest American engineering project to date. The canal was formally opened on August 15, 1914, with the passage of the cargo ship . The opening of the Panama Canal in 1914 caused a severe drop in traffic along Chilean ports due to shifts in maritime trade routes. The burgeoning sheep farming business in southern Patagonia suffered a significant setback by the change in trade routes, as did the economy of the Falkland Islands. Throughout this time, Ernest "Red" Hallen was hired by the Isthmian Canal Commission to document the progress of the work. Later developments By the 1930s, water supply became an issue for the canal, prompting construction of the Madden Dam across the Chagres River above Gatun Lake. Completed in 1935, the dam created Madden Lake (later Alajuela Lake), which provides additional water storage for the canal.
In 1939, construction began on a further major improvement: a new set of locks large enough to carry the larger warships that the United States was building at the time and planned to continue building. The work proceeded for several years, and significant excavation was carried out on the new approach channels, but the project was canceled after World War II. After World War II, US control of the canal and the Canal Zone surrounding it became contentious; relations between Panama and the United States became increasingly tense. Many Panamanians felt that the Zone rightfully belonged to Panama; student protests were met by the fencing-in of the zone and an increased military presence there. Demands for the United States to hand over the canal to Panama increased after the Suez Crisis in 1956, when the United States used financial and diplomatic pressure to force France and the UK to abandon their attempt to retake control of the Suez Canal, previously nationalized by the Nasser regime in Egypt. Panamanian unrest culminated in riots on Martyrs' Day, January 9, 1964, when about 20 Panamanians and 3–5 US soldiers were killed. A decade later, in 1974, negotiations toward a settlement began and resulted in the Torrijos–Carter Treaties. On September 7, 1977, the treaty was signed by President of the United States Jimmy Carter and Omar Torrijos, de facto leader of Panama. This set in motion the process of granting Panama full control of the canal, so long as Panama signed a treaty guaranteeing the canal's permanent neutrality. The treaty led to full Panamanian control effective at noon on December 31, 1999, and the Panama Canal Authority (ACP) assumed command of the waterway. The Panama Canal remains one of the chief revenue sources for Panama. Before this handover, the government of Panama held an international bid to negotiate a 25-year contract for operation of the container shipping ports located at the canal's Atlantic and Pacific outlets.
The contract was not affiliated with the ACP or Panama Canal operations and was won by the firm Hutchison Whampoa, a Hong Kong–based shipping interest owned by Li Ka-shing. Canal Layout While globally the Atlantic Ocean is east of the isthmus and the Pacific is west, the general direction of the canal passage from the Atlantic to the Pacific is from northwest to southeast, because of the shape of the isthmus at the point the canal occupies. The Bridge of the Americas at the Pacific side is about a third of a degree east of the Colón end on the Atlantic side. Still, in formal nautical communications, the simplified directions "southbound" and "northbound" are used. The canal consists of artificial lakes, several improved and artificial channels, and three sets of locks. An additional artificial lake, Alajuela Lake (known during the American era as Madden Lake), acts as a reservoir for the canal. The layout of the canal as seen by a ship passing from the Atlantic to the Pacific is: From the formal marking line of the Atlantic Entrance, one enters Limón Bay (Bahía Limón), a large natural harbor. The entrance runs . It provides a deepwater port (Cristóbal), with facilities like multimodal cargo exchange (to and from train) and the Colón Free Trade Zone (a free port). A channel forms the approach to the locks from the Atlantic side. The Gatun Locks, a three-stage flight of locks long, lift ships to the Gatun Lake level, some above sea level. Gatun Lake, an artificial lake formed by the building of the Gatun Dam, carries vessels across the isthmus. It is the summit canal stretch, fed by the Gatun River and emptied by basic lock operations. From the lake, the Chagres River, a natural waterway enhanced by the damming of Gatun Lake, runs about . Here the upper Chagres River feeds the high-level-canal stretch. The Culebra Cut slices through the mountain ridge, crosses the continental divide and passes under the Centennial Bridge.
The single-stage Pedro Miguel Lock, which is long, is the first part of the descent with a lift of . The artificial Miraflores Lake is the next stage of the transit, long, and above sea level. The two-stage Miraflores Locks is long, with a total descent of at mid-tide. From the Miraflores Locks one reaches Balboa harbor, again with multimodal exchange provision (here the railway meets the shipping route again). Nearby is Panama City. From this harbor an entrance/exit channel leads to the Pacific Ocean (Gulf of Panama), from the Miraflores Locks, passing under the Bridge of the Americas. Thus, the total length of the canal is . Navigation Gatun Lake Created in 1913 by damming the Chagres River, Gatun Lake is a key part of the Panama Canal, providing the millions of liters of water necessary to operate its locks each time a ship passes through. At the time of its formation, Gatun Lake was the largest man-made lake in the world. The impassable rainforest around the lake has been the best defense of the Panama Canal. Today these areas remain practically unscathed by human interference and are one of the few accessible areas where various native Central American animal and plant species can be observed undisturbed in their natural habitat. The largest island on Gatun Lake is Barro Colorado Island. It was established for scientific study when the lake was formed, and is operated by the Smithsonian Institution. Many important scientific and biological discoveries of the tropical animal and plant kingdom originated here. Gatun Lake covers about , a vast tropical ecological zone and part of the Atlantic Forest Corridor. Ecotourism on the lake has become an industry for Panamanians. Gatun Lake also provides drinking water for Panama City and Colón. Fishing is one of the primary recreational pursuits on Gatun Lake. Non-native peacock bass were introduced by accident to Gatun Lake around 1967 by a local businessman, and have since flourished to become the dominant angling game fish in Gatun Lake.
Locally called Sargento and believed to be the species Cichla pleiozona, these peacock bass originate from the Amazon, Rio Negro, and Orinoco river basins, where they are considered a premier game fish. Lock size The size of the locks determines the maximum size of ship that can pass through. Because of the importance of the canal to international trade, many ships are built to the maximum size allowed. These are known as Panamax vessels. A Panamax cargo ship typically has a deadweight tonnage (DWT) of 65,000–80,000 tons, but its actual cargo is restricted to about 52,500 tons because of the draft restrictions within the canal. The longest ship ever to transit the canal was the San Juan Prospector (now Marcona Prospector), an ore-bulk-oil carrier that is long with a beam of . Initially the locks at Gatun were designed to be wide. In 1908, the United States Navy requested an increased width of at least to allow the passage of US naval ships. Eventually a compromise was made and the locks were built wide. Each lock is long, with the walls ranging in thickness from at the base to at the top. The central wall between the parallel locks at Gatun is thick and over high. The steel lock gates measure an average of thick, wide, and high. Panama Canal pilots were initially unprepared to handle the significant flight deck overhang of aircraft carriers. knocked over all the adjacent concrete lamp posts while passing through the Gatun Locks for the first time in 1928. It is the size of the locks, specifically the Pedro Miguel Locks, along with the height of the Bridge of the Americas at Balboa, that determine the Panamax metric and limit the size of ships that may use the canal. The third set of locks project, approved in 2006, has created larger locks, allowing bigger ships to transit through deeper and wider channels. The allowed dimensions of ships using these locks increased by 25 percent in length, 51 percent in beam, and 26 percent in draft, as defined by New Panamax metrics.
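The quoted 25/51/26 percent increases can be sanity-checked against commonly cited Panamax and New Panamax limits. The metric dimensions below are assumptions drawn from general reference values, not figures given in this document (its own measurements were lost in conversion):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage growth from old to new."""
    return 100.0 * (new - old) / old

# Assumed limits in metres (commonly cited values, not from this document):
PANAMAX = {"length": 294.13, "beam": 32.31, "draft": 12.04}
NEOPANAMAX = {"length": 366.0, "beam": 49.0, "draft": 15.2}

increases = {dim: pct_increase(PANAMAX[dim], NEOPANAMAX[dim]) for dim in PANAMAX}
for dim, pct in increases.items():
    print(f"{dim}: +{pct:.1f}%")
```

With these assumed values the increases come out near 24.4, 51.7, and 26.2 percent, consistent with the rounded 25/51/26 percent figures in the text (which derive from the imperial limits).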
Tolls As with a toll road, vessels transiting the canal must pay tolls. Tolls for the canal are set by the Panama Canal Authority and are based on vessel type, size, and the type of cargo. For container ships, the toll is assessed on the ship's capacity expressed in twenty-foot equivalent units (TEUs), one TEU being the size of a standard intermodal shipping container. Effective April 1, 2016, this toll went from US$74 per loaded container to $60 per TEU of capacity plus $30 per loaded container, for a potential $90 per TEU when the ship is full. A Panamax container ship may carry up to . The toll is calculated differently for passenger ships and for container ships carrying no cargo ("in ballast"). The ballast rate is US$60 per TEU, down from US$65.60. Passenger vessels in excess of 30,000 tons (PC/UMS) pay a rate based on the number of berths, that is, the number of passengers that can be accommodated in permanent beds. The per-berth charge since April 1, 2016 has been $111 for unoccupied berths and $138 for occupied berths in the Panamax locks. Started in 2007, this fee has greatly increased the tolls for such ships. Passenger vessels of less than 30,000 tons or less than 33 tons per passenger are charged according to the same per-ton schedule as freighters. Almost all major cruise ships have more than 33 tons per passenger; the rule of thumb for cruise line comfort is generally given as a minimum of 40 tons per passenger. Most other types of vessel pay a toll per PC/UMS net ton, in which one "ton" is actually a volume of . (The calculation of tonnage for commercial vessels is quite complex.) This toll is US$5.25 per ton for the first 10,000 tons, US$5.14 per ton for the next 10,000 tons, and US$5.06 per ton thereafter. As with container ships, reduced tolls are charged for freight ships "in ballast": $4.19, $4.12, and $4.05, respectively.
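The per-ton and per-container schedules above are simple piecewise rates, so the arithmetic can be sketched directly. This is an illustrative calculation using only the figures quoted in the text; real ACP invoices involve many more vessel categories and surcharges not modeled here, and the example TEU count is hypothetical:

```python
def laden_freighter_toll(pcums_tons: float) -> float:
    """Tiered per-ton toll for a laden freighter, per the rates quoted above:
    $5.25/ton for the first 10,000 tons, $5.14/ton for the next 10,000 tons,
    and $5.06/ton thereafter."""
    tiers = [(10_000, 5.25), (10_000, 5.14), (float("inf"), 5.06)]
    toll = 0.0
    remaining = pcums_tons
    for size, rate in tiers:
        slab = min(remaining, size)  # tonnage charged at this tier's rate
        toll += slab * rate
        remaining -= slab
        if remaining <= 0:
            break
    return round(toll, 2)

def container_ship_toll(teu_capacity: int, loaded_containers: int) -> float:
    """Post-April-2016 container schedule quoted above:
    $60 per TEU of capacity plus $30 per loaded container."""
    return 60.0 * teu_capacity + 30.0 * loaded_containers

# A 30,000-ton laden freighter: 52,500 + 51,400 + 50,600 = $154,500.
print(laden_freighter_toll(30_000))
# A hypothetical 4,000-TEU ship sailing full pays the "potential $90 per TEU".
print(container_ship_toll(4_000, 4_000))
```

The container function reproduces the text's observation that a full ship pays an effective $90 per TEU: capacity and loaded-container charges coincide when every slot is occupied.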
On April 1, 2016, a more complicated toll system was introduced, with the neopanamax locks charged at a higher rate in some cases, natural gas transport added as a new separate category, and other changes. As of October 1, 2017, modified tolls and categories of tolls are in effect. Small vessels (less than 125 ft) of up to 583 PC/UMS net tons when carrying passengers or cargo, up to 735 PC/UMS net tons when in ballast, or up to 1,048 fully loaded displacement tons, are assessed minimum tolls based upon their length overall, according to the following table (as of 29 April 2015): Morgan Adams of Los Angeles, California, holds the distinction of paying the first toll received by the United States Government for the use of the Panama Canal by a pleasure boat. His boat Lasata passed through the Zone on August 14, 1914, during a sea voyage from Jacksonville, Florida, to Los Angeles. The most expensive regular toll for canal passage to date was charged on April 14, 2010, to the cruise ship Norwegian Pearl, which paid US$375,600. The average toll is around US$54,000. The highest fee for priority passage charged through the Transit Slot Auction System was US$220,300, paid on August 24, 2006, by the Panamax tanker Erikoussa, bypassing a 90-ship queue waiting for the end of maintenance work on the Gatun Locks, and thus avoiding a seven-day delay. The normal fee would have been just US$13,430. The lowest toll ever paid was 36 cents, by adventurer Richard Halliburton, who swam the canal in 1928.

The French effort went bankrupt in 1889 after reportedly spending US$287,000,000; an estimated 22,000 men died from disease and accidents, and the savings of 800,000 investors were lost. Work was suspended on May 15, and in the ensuing scandal, known as the Panama affair, some of those deemed responsible were prosecuted, including Gustave Eiffel. Lesseps and his son Charles were found guilty of misappropriation of funds and sentenced to five years' imprisonment.
This sentence was later overturned, and the father, at age 88, was never imprisoned. In 1894, a second French company, the Compagnie Nouvelle du Canal de Panama, was created to take over the project. A minimal workforce of a few thousand people was employed primarily to comply with the terms of the Colombian Panama Canal concession, to run the Panama Railroad, and to maintain the existing excavation and equipment in salable condition. The company sought a buyer for these assets, with an asking price of US$109,000,000. In the meantime, they continued with enough activity to maintain their franchise. Philippe Bunau-Varilla, the French manager of the New Panama Canal Company, eventually managed to persuade Lesseps that a lock-and-lake canal was more realistic than a sea-level canal. The Comité Technique, a high-level technical committee, was formed by the Compagnie Nouvelle to review the studies and work—that already finished and that still ongoing—and come up with the best plan for completing the canal. The committee arrived on the Isthmus in February 1896 and went immediately, quietly and efficiently about their work of devising the best possible canal plan, which they presented on November 16, 1898. Many aspects of the plan were similar in principle to the canal that was finally built by the Americans in 1914. It was a lock canal with two high-level lakes to lift ships up and over the Continental Divide. Double locks would be 738 feet long and about 30 feet deep; one chamber of each pair would be 82 feet wide, the other 59. There would be eight sets of locks, two at Bohio Soldado and two at Obispo on the Atlantic side; one at Paraiso, two at Pedro Miguel, and one at Miraflores on the Pacific. Artificial lakes would be formed by damming the Chagres River at Bohio and Alhajuela, providing both flood control and electric power.
United States acquisition At this time, the President and the Senate of the United States were interested in establishing a canal across the isthmus, with some favoring a canal across Nicaragua and others advocating the purchase of the French interests in Panama. Bunau-Varilla, who was seeking American involvement, asked for $100 million, but accepted $40 million in the face of the Nicaraguan option. In June 1902, the US Senate voted in favor of the Spooner Act, to pursue the Panamanian option, provided the necessary rights could be obtained. On January 22, 1903, the Hay–Herrán Treaty was signed by United States Secretary of State John M. Hay and Colombian Chargé Dr. Tomás Herrán. For $10 million and an annual payment, it would have granted the United States a renewable lease in perpetuity from Colombia on the land proposed for the canal. The treaty was ratified by the US Senate on March 14, 1903, but the Senate of Colombia did not ratify it. Bunau-Varilla told President Theodore Roosevelt and Hay of a possible revolt by Panamanian rebels who aimed to separate from Colombia, and hoped that the United States would support the rebels with US troops and money. Roosevelt changed tactics, based in part on the Mallarino–Bidlack Treaty of 1846, and actively supported the separation of Panama from Colombia. Shortly after recognizing Panama, he signed a treaty with the new Panamanian government under terms similar to the Hay–Herrán Treaty. On November 2, 1903, US warships blocked sea lanes against possible Colombian troop movements en route to put down the Panama rebellion. Panama declared independence on November 3, 1903. The United States quickly recognized the new nation. This happened so quickly that by the time the Colombian government in Bogotá launched a response to the Panamanian uprising US troops had already entered the rebelling province. 
The Colombian troops dispatched to Panama were hastily assembled conscripts with little training. While these conscripts might have defeated the Panamanian rebels, they could not have defeated the US Army troops supporting the rebels. Conscripts were the best force Colombia could muster: the country was still recovering from the Thousand Days' War, a civil war between Liberals and Conservatives fought from October 1899 to November 1902. The United States was fully aware of these conditions and incorporated them into its planning for the Panama intervention, having acted as arbitrator between the two sides; the peace treaty that ended the war was signed aboard the USS Wisconsin on November 21, 1902. While in port, the United States also brought engineering teams to Panama with the peace delegation, to begin planning for the canal's construction before it had even gained the rights to build the canal. All these factors left Colombia unable to put down the Panamanian rebellion or to expel the United States troops occupying what is today the independent nation of Panama. On November 6, 1903, Philippe Bunau-Varilla, as Panama's ambassador to the United States, signed the Hay–Bunau-Varilla Treaty, granting rights to the United States to build and indefinitely administer the Panama Canal Zone and its defenses. This is sometimes misinterpreted as the "99-year lease" because of misleading wording included in article 22 of the agreement. Almost immediately, the treaty was condemned by many Panamanians as an infringement on their country's new national sovereignty. This would later become a contentious diplomatic issue among Colombia, Panama, and the United States.
President Roosevelt famously stated, "I took the Isthmus, started the canal and then left Congress not to debate the canal, but to debate me." Several parties in the United States called this an act of war on Colombia: The New York Times described the support given by the United States to Bunau-Varilla as an "act of sordid conquest." The New York Evening Post called it a "vulgar and mercenary venture." The US maneuvers are often cited as the classic example of US gunboat diplomacy in Latin America, and the best illustration of what Roosevelt meant by the old African adage, "Speak softly and carry a big stick [and] you will go far." After the revolution in 1903, the Republic of Panama became a US protectorate until 1939. In 1904, the United States purchased the French equipment and excavations, including the Panama Railroad, for US$40 million, of which $30 million related to excavations completed, primarily in the Culebra Cut, valued at about $1.00 per cubic yard. The United States also paid the new country of Panama $10 million and a $250,000 payment each following year. In 1921, Colombia and the United States entered into the Thomson–Urrutia Treaty, in which the United States agreed to pay Colombia $25 million: $5 million upon ratification and four annual payments of $5 million thereafter, and to grant Colombia special privileges in the Canal Zone. In return, Colombia recognized Panama as an independent nation. United States construction of the Panama Canal, 1904–1914 The US formally took control of the canal property on May 4, 1904, inheriting from the French a depleted workforce and a vast jumble of buildings, infrastructure, and equipment, much of it in poor condition. A US government commission, the Isthmian Canal Commission (ICC), was established to oversee construction; it was given control of the Panama Canal Zone, over which the United States exercised sovereignty.
The commission reported directly to Secretary of War William Howard Taft and was directed to avoid the inefficiency and corruption that had plagued the French 15 years earlier. On May 6, 1904, President Theodore Roosevelt appointed John Findley Wallace, formerly chief engineer and finally general manager of the Illinois Central Railroad, as chief engineer of the Panama Canal Project. Overwhelmed by the disease-plagued country and forced to use often dilapidated French infrastructure and equipment, as well as being frustrated by the overly bureaucratic ICC, Wallace resigned abruptly in June 1905. He was succeeded by John Frank Stevens, a self-educated engineer who had built the Great Northern Railroad. Stevens was not a member of the ICC; he increasingly viewed its bureaucracy as a serious hindrance, bypassing the commission and sending requests and demands directly to the Roosevelt administration in Washington, DC. One of Stevens' first achievements in Panama was in building and rebuilding the housing, cafeterias, hotels, water systems, repair shops, warehouses, and other infrastructure needed by the thousands of incoming workers. Stevens began the recruitment effort to entice thousands of workers from the United States and other areas to come to the Canal Zone to work. Workers from the Caribbean—called "Afro-Panamanians"—came in large numbers and many settled permanently. Stevens tried to provide accommodation in which the workers could work and live in reasonable safety and comfort. He also re-established and enlarged the railway, which was to prove crucial in transporting millions of tons of soil from the cut through the mountains to the dam across the Chagres River. Colonel William C. Gorgas had been appointed chief sanitation officer of the canal construction project in 1904. 
Gorgas implemented a range of measures to minimize the spread of deadly diseases, particularly yellow fever and malaria, which had recently been shown to be mosquito-borne following the work of Dr. Carlos Finlay and Dr. Walter Reed. Investment was made in extensive sanitation projects, including city water systems, fumigation of buildings, spraying of insect-breeding areas with oil and larvicide, installation of mosquito netting and window screens, and elimination of stagnant water. Despite opposition from the commission (one member said his ideas were barmy), Gorgas persisted, and when Stevens arrived, he threw his weight behind the project. After two years of extensive work, the mosquito-spread diseases were nearly eliminated. Even after all that effort, about 5,600 workers died of disease and accidents during the US construction phase of the canal. In 1905, a US engineering panel was commissioned to review the canal design, which had not been finalized. In January 1906 the panel, in a majority of eight to five, recommended to President Roosevelt a sea-level canal, as had been attempted by the French and temporarily abandoned by them in 1887 for a ten locks system designed by Philippe Bunau-Varilla, and definitively in 1898 for a lock-and-lake canal designed by the Comité Technique of the Compagnie Nouvelle de Canal de Panama as conceptualized by Adolphe Godin de Lépinay in 1879. But in 1906 Stevens, who had seen the Chagres in full flood, was summoned to Washington; he declared a sea-level approach to be "an entirely untenable proposition". He argued in favor of a canal using a lock system to raise and lower ships from a large reservoir above sea level. This would create both the largest dam (Gatun Dam) and the largest man-made lake (Gatun Lake) in the world at that time. The water to refill the locks would be taken from Gatun Lake by opening and closing enormous gates and valves and letting gravity propel the water from the lake. 
Gatun Lake would connect to the Pacific through the mountains at the Gaillard (Culebra) Cut. Unlike Godin de Lépinay with the Congrès International d'Etudes du Canal Interocéanique, Stevens successfully convinced Roosevelt of the necessity and feasibility of this alternative scheme. The construction of a canal with locks required the excavation of more than of material over and above the excavated by the French. As quickly as possible, the Americans replaced or upgraded the old, unusable French equipment with new construction equipment that was designed for a much larger and faster scale of work. 102 large, railroad-mounted steam shovels were purchased, 77 from Bucyrus-Erie, and 25 from the Marion Power Shovel Company. These were joined by enormous steam-powered cranes, giant hydraulic rock crushers, concrete mixers, dredges, and pneumatic power drills, nearly all of which were manufactured by new, extensive machine-building technology developed and built in the United States. The railroad also had to be comprehensively upgraded with heavy-duty, double-tracked rails over most of the line to accommodate new rolling stock. In many places, the new Gatun Lake flooded over the original rail line, and a new line had to be constructed above Gatun Lake's waterline. Goethals replaces Stevens as chief engineer In 1907, Stevens resigned as chief engineer. His replacement, appointed by President Theodore Roosevelt, was US Army Major George Washington Goethals of the US Army Corps of Engineers. Soon to be promoted to lieutenant colonel and later to general, he was a strong, West Point-trained leader and civil engineer with experience in canals (unlike Stevens). Goethals directed the work in Panama to a successful conclusion in 1914, two years ahead of the target date of June 10, 1916. Goethals divided the engineering and excavation work into three divisions: Atlantic, Central, and Pacific. The Atlantic Division, under Major William L. 
Sibert, was responsible for construction of the massive breakwater at the entrance to Limon Bay, the Gatun locks, and their 3½ mi (5.6 km) approach channel, and the immense Gatun Dam. The Pacific Division, under Sydney B. Williamson (the only civilian member of this high-level team), was similarly responsible for the Pacific 3 mi (4.8 km) breakwater in Panama Bay, the approach channel to the locks, and the Miraflores and Pedro Miguel locks and their associated dams and reservoirs. The Central Division, under Major David du Bose Gaillard of the United States Army Corps of Engineers, was assigned one of the most difficult parts: excavating the Culebra Cut through the continental divide to connect Gatun Lake to the Pacific Panama Canal locks. On October 10, 1913, President Woodrow Wilson sent a signal from the White House by telegraph which triggered the explosion that destroyed the Gamboa Dike. This flooded the Culebra Cut, thereby joining the Atlantic and Pacific oceans via the Panama Canal. Alexandre La Valley (a floating crane built by Lobnitz & Company and launched in 1887) was the first self-propelled vessel to transit the canal from ocean to ocean. This vessel crossed the canal from the Atlantic in stages during construction, finally reaching the Pacific on January 7, 1914. SS Cristobal (a cargo and passenger ship built by Maryland Steel, and launched in 1902 as SS Tremont) on August 3, 1914, was the first ship to transit the canal from ocean to ocean. The construction of the canal was completed in 1914, 401 years after Panama was first crossed overland by Europeans by Vasco Núñez de Balboa's party of conquistadores. The United States spent almost $500 million (roughly equivalent to $ billion in ) to finish the project. This was by far the largest American engineering project to date. The canal was formally opened on August 15, 1914, with the passage of the cargo ship . 
The opening of the Panama Canal in 1914 caused a severe drop in traffic along Chilean ports due to shifts in maritime trade routes. The burgeoning sheep farming business in southern Patagonia suffered a significant setback by the change in trade routes, as did the economy of the Falkland Islands. Throughout this time, Ernest "Red" Hallen was hired by the Isthmian Canal Commission to document the progress of the work. Later developments By the 1930s, water supply became an issue for the canal, prompting construction of the Madden Dam across the Chagres River above Gatun Lake. Completed in 1935, the dam created Madden Lake (later Alajeula Lake), which provides additional water storage for the canal. In 1939, construction began on a further major improvement: a new set of locks large enough to carry the larger warships that the United States was building at the time and planned to continue building. The work proceeded for several years, and significant excavation was carried out on the new approach channels, but the project was canceled after World War II. After World War II, US control of the canal and the Canal Zone surrounding it became contentious; relations between Panama and the United States became increasingly tense. Many Panamanians felt that the Zone rightfully belonged to Panama; student protests were met by the fencing-in of the zone and an increased military presence there. Demands for the United States to hand over the canal to Panama increased after the Suez Crisis in 1956, when the United States used financial and diplomatic pressure to force France and the UK to abandon their attempt to retake control of the Suez Canal, previously nationalized by the Nasser regime in Egypt. Panamanian unrest culminated in riots on Martyr's Day, January 9, 1964, when about 20 Panamanians and 3–5 US soldiers were killed. A decade later, in 1974, negotiations toward a settlement began and resulted in the Torrijos–Carter Treaties. 
On September 7, 1977, the treaty was signed by President of the United States Jimmy Carter and Omar Torrijos, de facto leader of Panama. This set in motion the process of granting the Panamanians free control of the canal so long as Panama signed a treaty guaranteeing the permanent neutrality of the canal. The treaty led to full Panamanian control effective at noon on December 31, 1999, and the Panama Canal Authority (ACP) assumed command of the waterway. The Panama Canal remains one of the chief revenue sources for Panama. Before this handover, the government of Panama held an international bid to negotiate a 25-year contract for operation of the container shipping ports located at the canal's Atlantic and Pacific outlets. The contract was not affiliated with the ACP or Panama Canal operations and was won by the firm Hutchison Whampoa, a Hong Kong–based shipping interest owned by Li Ka-shing. Canal layout While globally the Atlantic Ocean is east of the isthmus and the Pacific is west, the general direction of the canal passage from the Atlantic to the Pacific is from northwest to southeast, because of the shape of the isthmus at the point the canal occupies. The Bridge of the Americas at the Pacific side is about a third of a degree east of the Colón end on the Atlantic side. Still, in formal nautical communications, the simplified directions "southbound" and "northbound" are used. The canal consists of artificial lakes, several improved and artificial channels, and three sets of locks. An additional artificial lake, Alajuela Lake (known during the American era as Madden Lake), acts as a reservoir for the canal. The layout of the canal as seen by a ship passing from the Atlantic to the Pacific is: From the formal marking line of the Atlantic Entrance, one enters Limón Bay (Bahía Limón), a large natural harbor. The entrance runs . 
It provides a deepwater port (Cristóbal), with facilities like multimodal cargo exchange (to and from train) and the Colón Free Trade Zone (a free port). A channel forms the approach to the locks from the Atlantic side. The Gatun Locks, a three-stage flight of locks long, lifts ships to the Gatun Lake level, some above sea level. Gatun Lake, an artificial lake formed by the building of the Gatun Dam, carries vessels across the isthmus. It is the summit canal stretch, fed by the Gatun River and emptied by basic lock operations. From the lake, the Chagres River, a natural waterway enhanced by the damming of Gatun Lake, runs about . Here the upper Chagres River feeds the high-level canal stretch. The Culebra Cut slices through the mountain ridge, crosses the continental divide and passes under the Centennial Bridge. The single-stage Pedro Miguel Lock, which is long, is the first part of the descent with a lift of . The artificial Miraflores Lake long, and above sea level. The two-stage Miraflores Locks is long, with a total descent of at mid-tide. From the Miraflores Locks one reaches Balboa harbor, again with multimodal exchange provision (here the railway meets the shipping route again). Nearby is Panama City. From this harbor an entrance/exit channel leads to the Pacific Ocean (Gulf of Panama), from the Miraflores Locks, passing under the Bridge of the Americas. Thus, the total length of the canal is . Navigation Gatun Lake Created in 1913 by damming the Chagres River, Gatun Lake is a key part of the Panama Canal, providing the millions of liters of water necessary to operate its locks each time a ship passes through. At the time of its formation, Gatun Lake was the largest man-made lake in the world. The impassable rainforest around the lake has been the best defense of the Panama Canal. 
Today these areas remain practically unscathed by human interference and are one of the few accessible areas where various native Central American animal and plant species can be observed undisturbed in their natural habitat. The largest island on Gatun Lake is Barro Colorado Island. It was established for scientific study when the lake was formed, and is operated by the Smithsonian Institution. Many important scientific and biological discoveries of the tropical animal and plant kingdom originated here. Gatun Lake covers about , a vast tropical ecological zone and part of the Atlantic Forest Corridor. Ecotourism on the lake has become an industry for Panamanians. Gatun Lake also provides drinking water for Panama City and Colón. Fishing is one of the primary recreational pursuits on Gatun Lake. Non-native peacock bass were introduced by accident to Gatun Lake around 1967 by a local businessman, and have since flourished to become the dominant angling game fish there. Locally called Sargento and believed to be the species Cichla pleiozona, these peacock bass originate from the Amazon, Rio Negro, and Orinoco river basins, where they are considered a premier game fish. Lock size The size of the locks determines the maximum size of ship that can pass through. Because of the importance of the canal to international trade, many ships are built to the maximum size allowed. These are known as Panamax vessels. A Panamax cargo ship typically has a deadweight tonnage (DWT) of 65,000–80,000 tons, but its actual cargo is restricted to about 52,500 tons because of the draft restrictions within the canal. The longest ship ever to transit the canal was the San Juan Prospector (now Marcona Prospector), an ore-bulk-oil carrier that is long with a beam of . Initially the locks at Gatun were designed to be wide. In 1908, the United States Navy requested an increased width of at least to allow the passage of US naval ships. 
Eventually a compromise was made and the locks were built wide. Each lock is long, with the walls ranging in thickness from at the base to at the top. The central wall between the parallel locks at Gatun is thick and over high. The steel lock gates measure an average of thick, wide, and high. Panama Canal pilots were initially unprepared to handle the significant flight deck overhang of aircraft carriers. knocked over all the adjacent concrete lamp posts while passing through the Gatun Locks for the first time in 1928. It is the size of the locks, specifically the Pedro Miguel Locks, along with the height of the Bridge of the Americas at Balboa, that determines the Panamax metric and limits the size of ships that may use the canal. The 2006 third set of locks project has created larger locks, allowing bigger ships to transit through deeper and wider channels. The allowed dimensions of ships using these locks increased by 25 percent in length, 51 percent in beam, and 26 percent in draft, as defined by New Panamax metrics. Tolls As with a toll road, vessels transiting the canal must pay tolls. Tolls for the canal are set by the Panama Canal Authority and are based on vessel type, size, and the type of cargo. For container ships, the toll is assessed on the ship's capacity expressed in twenty-foot equivalent units (TEUs), one TEU being the size of a standard intermodal shipping container. Effective April 1, 2016, this toll went from US$74 per loaded container to $60 per TEU of capacity plus $30 per loaded container, for a potential $90 per TEU when the ship is full. A Panamax container ship may carry up to . The toll is calculated differently for passenger ships and for container ships carrying no cargo ("in ballast"). The ballast rate is US$60 per TEU, down from US$65.60. Passenger vessels in excess of 30,000 tons (PC/UMS) pay a rate based on the number of berths, that is, the number of passengers that can be accommodated in permanent beds. 
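The container toll described above is a simple linear formula in capacity and load. As a rough sketch (rates as stated in the text, effective April 1, 2016; the function name and the example capacity of 4,000 TEU are illustrative, not an official Panama Canal Authority calculator):

```python
def container_toll_usd(teu_capacity: int, loaded_containers: int) -> int:
    """Toll = $60 per TEU of capacity + $30 per loaded container (2016 rates)."""
    return 60 * teu_capacity + 30 * loaded_containers

# Fully loaded: every TEU of capacity also carries a loaded container,
# so the effective rate reaches $60 + $30 = $90 per TEU.
assert container_toll_usd(4000, 4000) == 90 * 4000  # $360,000

# In ballast (no loaded containers), only the $60 capacity charge applies:
assert container_toll_usd(4000, 0) == 60 * 4000     # $240,000
```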
The per-berth charge since April 1, 2016 is $111 for unoccupied berths and |
a secret government agency. Conrad's third political novel, Under Western Eyes, is connected to Russian history. Its first audience read it against the backdrop of the failed Revolution of 1905 and in the shadow of the movements and impulses that would take shape as the revolutions of 1917. Conrad's earlier novella, Heart of Darkness (1899), also had political implications, in its depiction of European colonial depredations in Africa, which Conrad witnessed during his employment in the Belgian Congo. John Steinbeck's novel The Grapes of Wrath (1939) is a depiction of the plight of the poor. However, some of Steinbeck's contemporaries attacked his social and political views. Bryan Cordyack writes: "Steinbeck was attacked as a propagandist and a socialist from both the left and the right of the political spectrum. The most fervent of these attacks came from the Associated Farmers of California; they were displeased with the book's depiction of California farmers' attitudes and conduct toward the migrants. They denounced the book as a 'pack of lies' and labeled it 'communist propaganda'". Some accused Steinbeck of exaggerating camp conditions to make a political point. Steinbeck had visited the camps well before publication of the novel and argued that their inhumane nature destroyed the settlers' spirit. The Quiet American (1955) by English novelist Graham Greene questions the foundations of growing American involvement in Vietnam in the 1950s. The novel has received much attention due to its prediction of the outcome of the Vietnam War and of subsequent American foreign policy since the 1950s. Graham Greene portrays a U.S. official named Pyle as so blinded by American exceptionalism that he cannot see the calamities he brings upon the Vietnamese. The book draws on Greene's experiences as a war correspondent for The Times and Le Figaro in French Indochina in 1951–54. 
The Gay Place (1961) is a set of politically-themed novellas with interlocking plots and characters by American author Billy Lee Brammer. Set in an unnamed state identical to Texas, each novella has a different protagonist: Roy Sherwood, a member of the state legislature; Neil Christiansen, the state's junior senator; and Jay McGown, the governor's speechwriter. The governor himself, Arthur Fenstemaker, a master politician (said to have been based on Brammer's mentor Lyndon Johnson), serves as the dominant figure throughout. The book also includes characters based on Brammer, his wife Nadine, Johnson's wife Lady Bird, and his brother Sam Houston Johnson. The book has been widely acclaimed as one of the best American political novels ever written. 21st-century novels Since 2000, there has been a surge of transatlantic migrant literature in French, Spanish, and English, with new narratives about political topics relating to global debt, labor abuses, mass migration, and environmental crises in the Global South. Political fiction by contemporary novelists from the Caribbean, Sub-Saharan Africa, and Latin America directly challenges political leadership, systemic racism, and economic systems. Fatou Diome, a Senegalese immigrant living in France since the 1990s, writes political fiction about her experiences on France's unwelcoming borders, which are dominated by white Christian culture. The work of Guadeloupean author Maryse Condé also tackles colonialism and oppression; her best-known titles are Ségou (1984) and Ségou II (1985). Set in historical Segou (now part of Mali), the novels examine the violent legacies of the slave trade, Islam, Christianity, and colonization (from 1797 to 1860). A bold critic of the presidency of Nicolas Sarkozy, French novelist Marie NDiaye won the Prix Goncourt for Three Strong Women (2009), a novel about patriarchal control. Proletarian novel The proletarian novel is written by workers, mainly for other workers. 
It overlaps and sometimes is synonymous with the working-class novel, socialist novel, social-problem novel (also problem novel, sociological novel, or social novel), propaganda or thesis novel, and socialist-realism novel. The intention of the writers of proletarian literature is to lift the workers from the slums by inspiring them to embrace the possibilities of social change or of a political revolution. As such, it is a form of political fiction. The proletarian novel may comment on political events, systems, and theories, and is frequently seen as an instrument to promote social reform or political revolution among the working classes. Proletarian literature is created especially by communist, socialist, and anarchist authors. It is about the lives of the poor, and the period from 1930 to 1945, in particular, produced many such novels. However, proletarian works were also produced before and after those dates. In Britain, the terms "working-class" literature, novel, etc., are more generally used. Social novel A closely related type of novel, which frequently has a political dimension, is the social novel – also known as the "social-problem" or "social-protest" novel – a "work of fiction in which a prevailing social problem, such as gender, race, or class prejudice, is dramatized through its effect on the characters of a novel". More specific examples of social problems that are addressed in such works include poverty, conditions in factories and mines, the plight of child labor, violence against women, rising criminality, and epidemics caused by overcrowding and poor sanitation in cities. Charles Dickens was a fierce critic of the poverty and social stratification of Victorian society. Karl Marx asserted that Dickens "issued to the world more political and social truths than have been uttered by all the professional politicians, publicists and moralists put together". 
On the other hand, George Orwell, in his essay on Dickens, wrote: "There is no clear sign that he wants the existing order to be overthrown, or that he believes it would make very much difference if it were overthrown. For in reality his target is not so much society as 'human nature'." Dickens's second novel, Oliver Twist (1839), shocked readers with its images of poverty and crime: it destroyed middle-class polemics about criminals, making any pretence to ignorance about what poverty entailed impossible. Charles Dickens's Hard Times (1854) is set in a small Midlands industrial town and particularly criticizes the effect of Utilitarianism on the lives of the city's working classes. John Ruskin declared Hard Times his favourite Dickens work due to its exploration of important social questions. Walter Allen characterised Hard Times as an unsurpassed "critique of industrial society". Notable examples Early or notable examples include:
Panchatantra (ca. 200 BCE) by Vishnu Sarma
Don Quixote (1605) by Miguel de Cervantes
Simplicius Simplicissimus (1668) by Hans Jakob Christoffel von Grimmelshausen
The Pilgrim's Progress (1678) by John Bunyan
Persian Letters (1721) by Montesquieu
The History and Adventures of an Atom (1769) by Tobias Smollett
Fables and Parables (1779) by Ignacy Krasicki
The Partisan Leader (1836) by Nathaniel Beverley Tucker
Barnaby Rudge (1841) by Charles Dickens
A Tale of Two Cities (1859) by Charles Dickens
The Palliser novels (1864–1879) by Anthony Trollope
War and Peace (1869) by Leo Tolstoy
Demons, also known as The Possessed or The Devils (1872), by Fyodor Dostoyevsky
The Gilded Age (1876) by Mark Twain and Charles Dudley Warner
Democracy: An American Novel (1880) by Henry Adams
The Princess Casamassima (1886) by Henry James
The Bostonians (1886) by Henry James
Resurrection (1899) by Leo Tolstoy
NEQUA or The Problem of the Ages (1900) by Jack Adams
The Old New Land (1902) by Theodor Herzl
Mother (1906) by Maxim Gorky
The Jungle (1906) by Upton Sinclair
The Ragged-Trousered Philanthropists (1914) by Robert Tressell
The Trial (1925) by Franz Kafka
The Castle (1926) by Franz Kafka
The Career of Nicodemus Dyzma (1932) by Tadeusz Dołęga-Mostowicz
Walden Two (1948) by B. F. Skinner
Dark Green, Bright Red (1950) by Gore Vidal
Atlas Shrugged (1957) by Ayn Rand
The Manchurian Candidate (1959) by Richard Condon
The Comedians (1966) by Graham Greene
Cancer Ward (1967) by Aleksandr Solzhenitsyn
Washington, D.C. (1967) by Gore Vidal
Burr (1973) by Gore Vidal
Plato's Republic is one of the world's most influential works of philosophy and political theory, both intellectually and historically. The Republic is concerned with justice (δικαιοσύνη), the order and character of the just city-state, and the just man. Other influential politically-themed works include Thomas More's Utopia (1516), Jonathan Swift's Gulliver's Travels (1726), Voltaire's Candide (1759), and Harriet Beecher Stowe's Uncle Tom's Cabin (1852). 
Political fiction frequently employs satire, often in the utopian and dystopian genres. This includes totalitarian dystopias of the early 20th century such as Jack London's The Iron Heel, Sinclair Lewis' It Can't Happen Here, and George Orwell's Nineteen Eighty-Four. Political satire The Greek playwright Aristophanes' plays are known for their political and social satire, particularly in his criticism of the powerful Athenian general, Cleon, in plays such as The Knights. Aristophanes is also notable for the persecution he underwent. Aristophanes' plays turned upon images of filth and disease. His bawdy style was adopted by Greek dramatist-comedian Menander, whose early play, Drunkenness, contains an attack on the politician, Callimedon. Jonathan Swift's A Modest Proposal (1729) is an 18th-century Juvenalian satirical essay in which he suggests that the impoverished Irish might ease their economic troubles by selling their children as food for rich gentlemen and ladies. The satirical hyperbole mocks heartless attitudes towards the poor, as well as British policy toward the Irish in general. George Orwell's Animal Farm (1945) is an allegorical and dystopian novella which satirises the Russian Revolution of 1917 and the Soviet Union's Stalinist era. Orwell, a democratic socialist, was a critic of Joseph Stalin and was hostile to Moscow-directed Stalinism—an attitude that had been shaped by his experiences during the Spanish Civil War. The Soviet Union, he believed, had become a brutal dictatorship, built upon a cult of personality and enforced by a reign of terror. Orwell described his Animal Farm as "a satirical tale against Stalin", and in his essay "Why I Write" (1946) he wrote that Animal Farm was the first book in which he tried, with full consciousness of what he was doing, "to fuse political purpose and artistic purpose into one whole." 
Orwell's most famous work, however, is Nineteen Eighty-Four (published in 1949), many of whose terms and concepts, such as Big Brother, doublethink, thoughtcrime, Newspeak, Room 101, telescreen, 2 + 2 = 5, and memory hole, have entered into common use. Nineteen Eighty-Four popularised the adjective "Orwellian", which describes official deception, secret surveillance, and manipulation of recorded history by a totalitarian or authoritarian state. 16th century The poet Jan Kochanowski's play, The Dismissal of the Greek Envoys (1578), the first tragedy written in the Polish language, recounts an incident leading up to the Trojan War. Its theme of the responsibilities of statesmanship resonates to the present day. 18th century The political comedy, The Return of the Deputy (1790), by Julian Ursyn Niemcewicz—Polish poet, playwright, statesman, and comrade-in-arms of Tadeusz Kościuszko—was written in about two weeks' time while Niemcewicz was serving as a deputy to the historic Four-Year Sejm of 1788–92. The comedy's premiere in January 1791 was an enormous success, sparking widespread debate, royal communiques, and diplomatic correspondence. As Niemcewicz had hoped, it set the stage for passage of Poland's epochal Constitution of 3 May 1791, which is regarded as Europe's first, and the world's second, modern written national constitution, after the United States Constitution implemented in 1789. The comedy pits proponents against opponents of political reforms: of abolishing the destabilizing free election of Poland's kings; of abolishing the legislatively destructive liberum veto; of granting greater rights to peasants and townspeople; of curbing the privileges of the mostly self-interested noble class; and of promoting a more active Polish role in international affairs, in the interest of stopping the depredations of Poland's neighbors, Russia, Prussia, and Austria (who will in 1795 complete the dismemberment of the Polish–Lithuanian Commonwealth). 
Romantic interest is provided by a rivalry between a reformer and a conservative for a young lady's hand, which is won by the proponent of reforms. 19th-century novel An early example of the political novel is The Betrothed (1827) by Alessandro Manzoni, an Italian historical novel. Set in northern Italy in 1628, during the oppressive years of direct Spanish rule, it has sometimes been seen as a veiled attack on the Austrian Empire, which controlled Italy at the time the novel was written. It has been called the most famous and widely read novel in the Italian language. In the 1840s, British politician Benjamin Disraeli wrote a trilogy of novels with political themes. With Coningsby; or, The New Generation (1844), Disraeli, in historian Robert Blake's view, "infused the novel genre with political sensibility, espousing the belief that England's future as a world power depended not on the complacent old guard, but on youthful, idealistic politicians." Coningsby was followed by Sybil; or, The Two Nations (1845), another political novel, which was less idealistic and more clear-eyed than Coningsby; the "two nations" of its subtitle referred to the huge economic and social gap between the privileged few and the deprived working classes. The last of Disraeli's political-novel trilogy, Tancred; or, The New Crusade (1847), promoted the Church of England's role in reviving Britain's flagging spirituality. Ivan Turgenev wrote Fathers and Sons (1862) as a response to the growing cultural schism that he saw between Russia's liberals of the 1830s and 1840s, and the growing Russian nihilist movement among their sons. Both the nihilists and the 1830s liberals sought Western-based social 
are served hot. In Australia, some parts of South Africa, New Zealand, India, and the West Indies, especially in Barbados, both forms of potato product are simply known as "chips", as are the larger "home-style" variety. In the north of New Zealand, they are sometimes affectionately known as "chippies"; however, they are marketed as "chips" throughout the country. In Australia and New Zealand, a distinction is sometimes made between "hot chips" (fried potatoes) and "chips" or "potato chips". In Bangladesh, they are generally known as "chip" or "chips", and much less frequently as "crisps" (pronounced "kirisp") and locally, alu bhaja (for their similarity to the fried potato dish, bhajji). In German-speaking countries (Austria, Germany: "Kartoffelchips", often shortened to "Chips"; Switzerland: "Pommes Chips") and in countries of the former Yugoslavia, fried thin potato slices are known as "chips" (locally pronounced very similarly to the English pronunciation), with a clear distinction from French fries. In Brazil, "home-style" potato chips are known as ("Portuguese potatoes") if their sides are relatively smooth and ("Prussian potatoes") if their sides show a wafer biscuit-like pattern, whilst American-like industrial uniform potato chips made from a fried potato purée-based dough are known as "batata chips" ("potato chips"), or just . Health concerns Most potato chips contain high levels of sodium, from salt. This has been linked to health issues such as high blood pressure. However, researchers at Queen Mary University of London in 2004 have noted that a small "bag of ready-salted crisps" contains less salt than a serving of many breakfast cereals, including "every brand of cornflakes on sale in the UK." Some potato chip companies have responded to the long-standing concerns by investing in research and development to modify existing recipes and create health-conscious products. 
PepsiCo research shows that about 80% of salt on chips is not sensed by the tongue before being swallowed. Frito-Lay spent $414 million in 2009 on product development, including development of salt crystals that would reduce the salt content of Lay's potato chips without adversely affecting flavor. Unsalted chips are available, e.g. the longstanding British brand Salt 'n' Shake, whose chips are not seasoned, but instead include a small salt sachet in the bag for seasoning to taste. Many other popular brands in the United States, such as Frito-Lay, also offer such a product. Another possible health concern related to potato chips is acrylamide, which is produced when potatoes are fried or baked at high temperatures. Studies show that laboratory animals exposed to high levels of acrylamide develop cancer; however, it is currently unclear whether a similar risk exists in humans. In August 2008, California Attorney General Jerry Brown announced a settlement with Frito-Lay, Kettle Foods, and Lance Inc., the makers of Cape Cod Potato Chips, for violating the state's Safe Drinking Water and Toxic Enforcement Act. The state had alleged in 2005 that these companies had failed to warn consumers that their potato chips contained high levels of acrylamide, which California has listed as a carcinogen since the 1990s. These companies paid fines and agreed to reduce acrylamide levels to under 275 parts per billion. Many potato chip manufacturers attempt to remove burned and thus potentially acrylamide-rich chips before the packaging process. Large scanners are used to eliminate chips worst affected by heat. Regional varieties Similar foods Another type of potato chip, notably the Pringles and Lay's Stax brands, is made by extruding or pressing a dough made from dehydrated potato flour into the desired shape before frying. This makes chips that are uniform in size and shape, which allows them to be stacked and packaged in rigid cardboard or plastic canisters. 
Pringles are officially branded as "potato crisps" in the US. Pringles may be termed "potato chips" in Britain, to distinguish them from traditional "crisps". Munchos, another brand that uses the term "potato crisps", has deep air pockets in its chips that give them a curved shape, though the chips themselves resemble regular bagged chips. An additional variant of potato chips exists in the form of "potato sticks", also called "shoestring potatoes". These are made as extremely thin (2 to 3 mm) versions of the popular French fry but are fried in the manner of regular salted potato chips. A hickory-smoke-flavored version is popular in Canada, going by the vending machine name "Hickory Sticks". Potato sticks are typically packaged in rigid containers, although some manufacturers use flexible pouches, similar to potato chip bags. Potato sticks were originally packed in hermetically sealed steel cans. In the 1960s, manufacturers switched to the less expensive composite canister (similar to the Pringles container). Reckitt Benckiser was a market leader in this category under the Durkee Potato Stix and French's Potato Sticks names but exited the business in 2008. In 2014, French's reentered the market. A larger variant (about 1 cm thick) made with dehydrated potatoes is marketed as Andy Capp's Pub Fries, using the theme of a long-running British comic strip; these are baked and sold in a variety of flavors. Walkers make a similar product (using the Smiths brand) called "Chipsticks", which are sold in ready-salted and salt and vinegar flavors. Some companies have also marketed baked potato chips as an alternative with lower fat content. Additionally, some varieties of fat-free chips have been made using artificial, and indigestible, fat substitutes. These became well known in the media when an ingredient

A potato chip (often just chip, or crisp in British and Irish English) is a thin slice of potato that has been either deep fried or baked until crunchy. 
They are commonly served as a snack, side dish, or appetizer. The basic chips are cooked and salted; additional varieties are manufactured using various flavorings and ingredients including herbs, spices, cheeses, other natural flavors, artificial flavors, and additives. Potato chips form a large part of the snack food and convenience food market in Western countries. The global potato chip market generated total revenue of US$16.49 billion in 2005. This accounted for 35.5% of the total savory snacks market in that year ($46.1 billion). History The earliest known recipe for something similar to today's potato chips is in William Kitchiner's book The Cook's Oracle published in 1817, which was a bestseller in the United Kingdom and the United States. The 1822 edition's recipe for "Potatoes fried in Slices or Shavings" reads "peel large potatoes... cut them in shavings round and round, as you would peel a lemon; dry them well in a clean cloth, and fry them in lard or dripping". An 1825 British book about French cookery calls them "Pommes de Terre frites" (second recipe) and calls for thin slices of potato fried in "clarified butter or goose dripping", drained and sprinkled with salt. Early recipes for potato chips in the US are found in Mary Randolph's Virginia House-Wife (1824) and in N.K.M. Lee's Cook's Own Book (1832), both of which explicitly cite Kitchiner. A legend associates the creation of potato chips with Saratoga Springs, New York, decades later than the first recorded recipe. By the late nineteenth century, a popular version of the story attributed the dish to George Crum, a cook at Moon's Lake House who was trying to appease an unhappy customer on August 24, 1853. The customer kept sending back his French-fried potatoes, complaining that they were too thick, too "soggy", or not salted enough. Frustrated, Crum sliced several potatoes extremely thin, fried them to a crisp, and seasoned them with extra salt. To his surprise, the customer loved them. 
They soon came to be called "Saratoga Chips", a name that persisted into the mid-twentieth century. A version of this story was popularized in a 1973 national advertising campaign by St. Regis Paper Company which manufactured packaging for chips, claiming that Crum's customer was Cornelius Vanderbilt. Crum was already renowned as a chef at the time, and he owned a lakeside restaurant by 1860 which he called Crum's House. The "Saratoga Chips" brand name still exists today. Production In the 20th century, potato chips spread beyond chef-cooked restaurant fare and began to be mass-produced for home consumption. The Dayton, Ohio-based Mikesell's Potato Chip Company, founded in 1910, identifies as the "oldest potato chip company in the United States". New England-based Tri-Sum Potato Chips, founded in 1908 as the Leominster Potato Chip Company, in Leominster, Massachusetts, claims to be America's first potato chip manufacturer. Flavoring In an idea originated by the Smiths Potato Crisps Company Ltd, formed in 1920, Frank Smith packaged a twist of salt with his chips in greaseproof paper bags, which were sold around London. The potato chip remained otherwise unseasoned until an important scientific development in the 1950s. After English biochemists Archer Martin and Richard Synge received a Nobel Prize for inventing partition chromatography in 1952, food scientists began to develop flavors via a gas chromatograph. After some trial and error, in 1954, Joe "Spud" Murphy, the owner of the Irish crisps company Tayto, and his employee Seamus Burke, produced the world's first seasoned chips: Cheese & Onion and Salt & Vinegar. Companies worldwide sought to buy the rights to Tayto's technique. Walkers of Leicester, England produced Cheese & Onion the same year. The first flavored chips in the United States, barbecue flavor, were being manufactured and sold by 1954. In 1958, Herr's was the |
voted for a repeal of the law, prohibition was abolished in early 1932. Today, all Nordic countries except Denmark continue to have strict controls on the sale of alcohol, which is heavily taxed. There are government monopolies in place for selling spirits, wine, and stronger beers in Norway (Vinmonopolet), Finland (Alko), Sweden (Systembolaget), Iceland (Vínbúðin), and the Faroe Islands (Rúsdrekkasøla Landsins). Bars and restaurants may, however, import alcoholic beverages directly or through other companies. Greenland, which is part of the Kingdom of Denmark, does not share Denmark's easier controls on the sale of alcohol. Greenland has (like Denmark) sales in food shops, but prices are typically high. Private import when travelling from Denmark is only allowed in small quantities. Russian Empire and the Soviet Union In the Russian Empire, a limited version of a Dry Law was introduced in 1914. It continued through the turmoil of the Russian Revolution of 1917 and the Russian Civil War into the period of Soviet Russia and the Soviet Union until 1925. United Kingdom Although the sale or consumption of commercial alcohol has never been prohibited by law in the United Kingdom, historically, various groups in the UK have campaigned for the prohibition of alcohol, including the Society of Friends (Quakers), the Methodist Church and other non-conformists, as well as temperance movements such as Band of Hope and temperance Chartist movements of the nineteenth century. Formed in 1853 and inspired by the Maine law in the United States, the United Kingdom Alliance aimed at promoting a similar law prohibiting the sale of alcohol in the UK. This hard-line group of prohibitionists was opposed by other temperance organisations who preferred moral persuasion to a legal ban. This division in the ranks limited the effectiveness of the temperance movement as a whole. 
The impotence of legislation in this field was demonstrated when the Sale of Beer Act 1854, which restricted Sunday opening hours, had to be repealed following widespread rioting. In 1859, a prototype prohibition bill was overwhelmingly defeated in the House of Commons. On 22 March 1917, during the First World War, at a crowded meeting in the Queen's Hall in London (chaired by Alfred Booth), many influential people, including Agnes Weston, spoke, or letters from them were read out, against alcohol consumption, calling for prohibition; General Sir Reginald Hart wrote to the meeting that "Every experienced officer knew that practically all unhappiness and crime in the Army is due to drink". At the meeting, Lord Channing said that it was a pity that the whole Cabinet did not follow the example of King George V and Lord Kitchener when in 1914 those two spoke calling for complete prohibition for the duration of the war. Edwin Scrymgeour served as Member of Parliament for Dundee between 15 November 1922 and 8 October 1931. He remains the only person ever to have been elected to the House of Commons on a prohibitionist ticket. In 1922, he defeated incumbent Liberal member Winston Churchill, winning the seat for the Scottish Prohibition Party, which he had founded in 1901, and for which he had stood for election successfully as a Dundee Burgh Councillor in 1905 and unsuccessfully as a parliamentary candidate between 1908 and 1922. North America Canada Indigenous peoples in Canada were subject to prohibitory alcohol laws under the Indian Act of 1876. Sections of the Indian Act regarding liquor were not repealed for over a hundred years, until 1985. An official, but non-binding, federal referendum on prohibition was held in 1898. Prime Minister Wilfrid Laurier's government chose not to introduce a federal bill on prohibition, mindful of the strong antipathy in Quebec.
As a result, Canadian prohibition was instead enacted through laws passed by the provinces during the first twenty years of the 20th century, especially during the 1910s. Canada did, however, enact a national prohibition from 1918 to 1920 as a temporary wartime measure. Much of the rum-running during prohibition took place in Windsor, Ontario. The provinces later repealed their prohibition laws, mostly during the 1920s, although some local municipalities remain dry. Mexico Some communities in the Chiapas state of southern Mexico are under the control of the libertarian socialist Zapatista Army of National Liberation, and often ban alcohol as part of what was described as "a collective decision". This prohibition has been used by many villages as a way to decrease domestic violence and has generally been favored by women. This prohibition, however, is not recognized by federal Mexican law as the Zapatista movement is strongly opposed by the federal government. The sale and purchase of alcohol are prohibited on and the night before certain national holidays, such as Natalicio de Benito Juárez (birthdate of Benito Juárez) and Día de la Revolución, which are meant to be dry nationally. The same "dry law" applies to the days before presidential elections every six years. United States Prohibition in the United States focused on the manufacture, transportation, and sale of alcoholic beverages; exceptions were made for medicinal and religious uses. Alcohol consumption was never illegal under federal law. Nationwide Prohibition did not begin in the United States until January 1920, when the Eighteenth Amendment to the U.S. Constitution went into effect. The Eighteenth Amendment was ratified in 1919, and was repealed in December 1933 with the ratification of the Twenty-first Amendment. Concern over excessive alcohol consumption began during the American colonial era, when fines were imposed for drunken behavior and for selling liquor without a license.
Protestant religious groups urged Americans to curb their drinking habits for moral and health reasons. By the 1840s the temperance movement was actively encouraging individuals to stop drinking immediately. However, the issue of slavery, and then the Civil War, overshadowed the temperance movement until the 1870s. Prohibition was a major reform movement from the 1870s until the 1920s, when nationwide prohibition went into effect. The Women's Crusade of 1873 and the Woman's Christian Temperance Union (WCTU), founded in 1874, were means through which certain women organized and demanded political action, well before they were granted the vote. The WCTU and the Prohibition Party were major players until the 20th century, when the Anti-Saloon League emerged as the movement's leader. By 1913, 9 states had statewide prohibition and 31 others had local option laws in effect. The League then turned its efforts toward attaining a constitutional amendment and grassroots support for nationwide prohibition. A new constitutional amendment submitted by Congress in December 1917 prohibited "the manufacture, sale, or transportation of intoxicating liquors within, the importation thereof into, or the exportation thereof from the United States and all territory subject to the jurisdiction thereof for beverage purposes". The amendment was ratified and became law on January 16, 1919. On October 28, 1919, Congress passed the National Prohibition Act, also known as the Volstead Act, which provided enabling legislation to implement the Eighteenth Amendment. After a year's required delay, national prohibition began on January 16, 1920. The illicit market soon grew to about two-thirds of its pre-Prohibition level. Illegal stills flourished in remote rural areas as well as city slums, and large quantities were smuggled from Canada. Bootlegging became a major business activity for organized crime groups, under leaders such as Al Capone in Chicago and Lucky Luciano in New York City.
Prohibition lost support during the Great Depression, which began in 1929. The repeal movement was initiated and financed by the Association Against the Prohibition Amendment, and Pauline Sabin, a wealthy Republican, founded the Women's Organization for National Prohibition Reform (WONPR). Repeal of Prohibition in the United States was accomplished with the ratification of the Twenty-first Amendment on December 5, 1933. Under its terms, states were allowed to set their own laws for the control of alcohol. Between 1832 and 1953, federal legislation prohibited the sale of alcohol to Native Americans, with very limited success. After 1953, Native American communities and reservations were permitted to pass their own local ordinances governing the sale of alcoholic beverages. In the 21st century, there are still counties and parishes within the United States known as "dry", where the sale of alcohol is prohibited or restricted. South America Venezuela In Venezuela, twenty-four hours before every election, the government prohibits the sale and distribution of alcoholic beverages throughout the national territory; the restriction applies to all dealers, liquor stores, supermarkets, restaurants, wineries, pubs, bars, public entertainment venues, clubs and any establishment that markets alcoholic beverages. The same is done during Holy Week as a measure to reduce the alarming rate of road traffic accidents during these holidays. Oceania Australia The Australian Capital Territory (then the Federal Capital Territory) was the first jurisdiction in Australia to have prohibition laws. In 1911, King O'Malley, then Minister of Home Affairs, shepherded laws through Parliament preventing new issue or transfer of licences to sell alcohol, to address unruly behaviour among workers building the new capital city. Prohibition was partial, since possession of alcohol purchased outside of the Territory remained legal and the few pubs that had existing licences could continue to operate.
The Federal Parliament repealed the laws after residents of the Federal Capital Territory voted for the end of them in a 1928 plebiscite. Since then, some state governments and local councils have enacted dry areas, where the purchase or consumption of alcohol is only permitted in licensed areas such as liquor
consumption, importation and brewing of, and trafficking in liquor is strictly against the law. Yemen Alcohol is banned in Yemen. Southeast Asia Brunei In Brunei, alcohol consumption and sale are banned in public. Non-Muslims are allowed to purchase a limited amount of alcohol from their point of embarkation overseas for their own private consumption, and non-Muslims who are at least the age of 18 are allowed to bring in not more than two bottles of liquor (about two litres) and twelve cans of beer per person into the country. Indonesia Alcohol sales are banned in small shops and convenience stores. Malaysia Alcohol is banned only for Muslims in Malaysia due to its Islamic faith and sharia law. Nevertheless, alcoholic products can easily be found in supermarkets, specialty shops, and convenience stores all over the country. Non-halal restaurants also typically sell alcohol. Philippines There are only restrictions during elections in the Philippines. The purchase of alcohol is prohibited for two days prior to an election. The Philippine Commission on Elections may opt to extend the liquor ban. In the 2010 elections, the liquor ban was a minimum of two days; in the 2013 elections, there was a proposal that it be extended to five days. This was overturned by the Supreme Court. Other than election-related prohibition, alcohol is freely sold to anyone above the legal drinking age. Thailand Alcohol sales are prohibited during elections from 18:00 the day prior to voting, until the end of the day of voting itself. Alcohol is also prohibited on major Buddhist holy days, and sometimes on royal commemoration days, such as birthdays.
Thailand also enforces time-limited bans on alcohol on a daily basis. Alcohol can only be legally purchased in stores or restaurants between 11:00–14:00 and 17:00–midnight. The law is enforced by all major retailers (most notably 7-Eleven) and restaurants, but is frequently ignored by the smaller "mom and pop" stores. Hotels and resorts are exempt from the rules. The consumption of alcohol is also banned at any time within 200 meters of filling stations (where the sale of alcohol is also illegal), schools, temples or hospitals, as well as on board any type of road vehicle, regardless of whether it is being consumed by the driver or a passenger. At certain times of the year—Thai New Year (Songkran) is an example—the government may also enforce arbitrary bans on the sale and consumption of alcohol in specific public areas where large-scale festivities are due to take place and large crowds are expected. Thailand strictly regulates alcohol advertising, as specified in the Alcoholic Beverage Control Act, B.E. 2551 (2008) (ABCA). Sales of alcohol via "electronic channels" (internet) are prohibited. Europe Czech Republic On 14 September 2012, the Government of the Czech Republic banned all sales of alcoholic drinks with more than 20% alcohol. From this date, it was illegal to sell such alcoholic beverages in shops, supermarkets, bars, restaurants, filling stations, e-shops, etc. This measure was taken in response to a wave of methanol poisoning cases resulting in the deaths of 18 people in the Czech Republic. Since the beginning of the "methanol affair" the total number of deaths has increased to 25. The ban was to be valid until further notice, though restrictions were eased towards the end of September. The last bans on Czech alcohol with regard to the poisoning cases were lifted on 10 October 2012, when neighbouring Slovakia and Poland allowed its import once again.
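The Thai daily sale windows described above amount to a simple interval-membership rule. As a minimal illustrative sketch (the helper name and the use of 23:59:59 as a stand-in for "midnight" are assumptions for illustration, not drawn from any statute text):

```python
from datetime import time

# Thailand's two daily retail alcohol-sale windows, as described in the text:
# 11:00-14:00 and 17:00-midnight (midnight approximated here as 23:59:59).
SALE_WINDOWS = [
    (time(11, 0), time(14, 0)),
    (time(17, 0), time(23, 59, 59)),
]

def sale_allowed(t: time) -> bool:
    """Return True if retail sale is permitted at clock time t."""
    return any(start <= t <= end for start, end in SALE_WINDOWS)

print(sale_allowed(time(12, 30)))  # inside the midday window -> True
print(sale_allowed(time(15, 0)))   # between windows -> False
```

A real implementation would also have to account for the date-based bans (elections, Buddhist holy days, Songkran-area restrictions), which are exemptions by calendar rather than by time of day.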
Nordic countries The Nordic countries, with the exception of Denmark, have had a strong temperance movement since the late 1800s, closely linked to the Christian revival movement of the late nineteenth century, but also to several worker organisations. As an example, in 1910 the temperance organisations in Sweden had some 330,000 members, which was about 6% of a population of 5.5 million. This heavily influenced the decisions of Nordic politicians in the early 20th century. In 1907, the Faroe Islands passed a law prohibiting all sale of alcohol, which was in force until 1992. Very restricted private importation from Denmark was allowed from 1928 onwards. In 1914, Sweden put in place a rationing system, the Bratt System, in force until 1955. A referendum in 1922 rejected an attempt to enforce total prohibition. In 1915, Iceland instituted total prohibition. The ban on wine was lifted in 1922 and on spirits in 1935, but beer remained prohibited until 1989 (circumvented by mixing light beer and spirits). In 1916, Norway prohibited distilled beverages, and in 1917 the prohibition was extended to include fortified wine and beer. The wine and beer ban was lifted in 1923, and in 1927 the ban on distilled beverages was also lifted. In 1919, Finland enacted prohibition, as one of the first acts after independence from the Russian Empire. Four previous attempts to institute prohibition in the early twentieth century had failed due to opposition from the tsar. After a development similar to the one in the United States during its prohibition, with large-scale smuggling and increasing violence and crime rates, public opinion turned against the prohibition, and after a national referendum in which 70% voted for a repeal of the law, prohibition was abolished in early 1932.
drug-resistant tuberculosis (XDR-TB) drug-susceptible again and make methicillin-resistant Staphylococcus aureus (MRSA) susceptible to beta-lactam antibiotics. The major reason thioridazine, a first-generation or "typical" antipsychotic medication, has not been utilized as an antimicrobial agent is its adverse effects on the central nervous system and cardiovascular system (particularly QT interval prolongation). The term "phenothiazines" describes the largest of the five main classes of antipsychotic drugs. These drugs have antipsychotic and, often, antiemetic properties, although they may also cause severe side effects such as extrapyramidal symptoms (including akathisia and tardive dyskinesia), hyperprolactinaemia, and the rare but potentially fatal neuroleptic malignant syndrome, as well as substantial weight gain. Use of phenothiazines has been associated with antiphospholipid syndrome, but no causal relationship has been established. Phenothiazine antipsychotics are classified into three groups that differ with respect to the substituent on nitrogen: the aliphatic compounds (bearing acyclic groups), the piperidines (bearing piperidine-derived groups), and the piperazines (bearing piperazine-derived substituents). Nondrug applications The synthetic dye methylene blue, containing the phenothiazine structure, was described in 1876. Many water-soluble phenothiazine derivatives, such as methylene blue, methylene green, thionine, and others, can be electropolymerized into conductive polymers used as electrocatalysts for NADH oxidation in enzymatic biosensors and biofuel cells. Phenothiazine is used as an anaerobic polymerization inhibitor for acrylic acid, often as an in-process inhibitor during the purification of acrylic acid.
Trade names Like many commercially significant compounds, phenothiazine has numerous trade names, including AFI-Tiazin, Agrazine, Antiverm, Biverm, Dibenzothiazine, Orimon, Lethelmin, Souframine, Nemazene, Vermitin, Padophene, Fenoverm, Fentiazine, Contaverm, Fenothiazine, Phenovarm, Ieeno, ENT 38, Helmetina, Helmetine, Penthazine, XL-50, Wurm-thional, Phenegic, Phenovis, Phenoxur, and Reconox. Former uses Phenothiazine was formerly used as an insecticide and as a drug to treat infections with parasitic worms (anthelminthic) in livestock and people, but its use for those purposes has been superseded by other chemicals. Phenothiazine was introduced by DuPont as an insecticide in 1935. About 3,500,000 pounds were sold in the US in 1944. However, because it was degraded by sunlight and air, it was difficult to determine how much to use in the field, and its use waned in the 1940s with the arrival of new pesticides like DDT that were more durable. As of July 2015 it is not registered for pesticide use in the US, Europe, or Australia. It was introduced as an anthelminthic in livestock in 1940 and is considered, with thiabendazole, to be the first modern anthelminthic. The first instances of resistance were noted in 1961. Among anthelmintics, Blizzard et al. 1990 found only paraherquamide to have similar activity to phenothiazine. It is possible that they share the same mode of action. Uses for this purpose in the US are still described but it has "virtually disappeared from the market." In the 1940s it also was introduced as an anthelminthic for humans; since it was often given to children, the drug was often sold in
its artistry offers "only a kibitzer's pleasure". Macdonald called the reviews he had seen, other than McCarthy's, "cautiously unfavorable". TIME magazine's 1962 review stated that "Pale Fire does not really cohere as a satire; good as it is, the novel in the end seems to be mostly an exercise in agility – or perhaps in bewilderment", though this did not prevent TIME from including the book in its 2005 list of the 100 best English-language novels published since 1923. The first Russian translation of the novel, one created by Véra Nabokov, its dedicatee, was published in 1983 by Ardis in Ann Arbor, Michigan (Alexei Tsvetkov initially played an important role in this translation). After Nabokov's reputation was rehabilitated in the Soviet Union (his novels started being published there in 1986 and the first book composed entirely of Nabokov's works was printed in 1988), Pale Fire was published in 1991 in Sverdlovsk (in Sergei Ilyin's Russian translation). Interpretations Some readers concentrate on the apparent story, focusing on traditional aspects of fiction such as the relationship among the characters. In 1997, Brian Boyd published a much-discussed study arguing that the ghost of John Shade influenced Kinbote's contributions. He expanded this essay into a book in which he also argues that, in order to trigger Shade's poem, Hazel Shade's ghost induced Kinbote to recount his Zemblan delusions to Shade. Some readers, starting with Mary McCarthy and including Boyd, Nabokov's annotator Alfred Appel, and D. Barton Johnson, see Charles Kinbote as an alter-ego of the insane Professor V. Botkin, to whose delusions John Shade and the rest of the faculty of Wordsmith College generally condescend. Nabokov himself endorsed this reading, stating in an interview in 1962 (the novel's year of publication) that Pale Fire "is full of plums that I keep hoping somebody will find. For instance, the nasty commentator is not an ex-King of Zembla nor is he professor Kinbote. 
He is professor Botkin, or Botkine, a Russian and a madman." The novel's intricate structure of teasing cross-references leads readers to this "plum". The Index, supposedly created by Kinbote, features an entry for a "Botkin, V.," describing this Botkin as an "American scholar of Russian descent"—and referring to a note in the Commentary on line 894 of Shade's poem, in which no such person is directly mentioned but a character suggests that "Kinbote" is "a kind of anagram of Botkin or Botkine". In this interpretation, "Gradus" the murderer is an American named Jack Grey who wanted to kill Judge Goldsworth, whose house "Pale Fire's" commentator—whatever his "true" name is—is renting. Goldsworth had condemned Grey to an asylum from which he escaped shortly before mistakenly killing Shade, who resembled Goldsworth. Other readers see a story quite different from the apparent narrative. "Shadeans" maintain that John Shade wrote not only the poem, but the commentary as well, having invented his own death and the character of Kinbote as a literary device. According to Boyd, Andrew Field invented the Shadean theory and Julia Bader expanded it; Boyd himself espoused the theory for a time. In an alternative version of the Shadean theory, Tiffany DeRewal and Matthew Roth argued that Kinbote is not a separate person but is a dissociated, alternative personality of John Shade. (An early reviewer had mentioned that "a case might be made" for such a reading.) "Kinboteans", a decidedly smaller group, believe that Kinbote invented the existence of John Shade. Boyd credits the Kinbotean theory to Page Stegner and adds that most of its adherents are newcomers to the book. Some readers see the book as oscillating undecidably between these alternatives, like the Rubin vase (a drawing that may be two profiles or a goblet). 
Though a minority of commentators believe or at least accept the possibility that Zembla is as "real" as New Wye, most assume that Zembla, or at least the operetta-quaint and homosexually gratified palace life enjoyed by Charles Kinbote before he is overthrown, is imaginary in the context of the story. The name "Zembla" (taken from "Nova Zembla", a former latinization of Novaya Zemlya) may evoke popular fantasy literature about royalty such as The Prisoner of Zenda. As in other Nabokov books, however, the fiction is an exaggerated or comically distorted version of his own life as a son of privilege before the Russian Revolution and an exile afterwards, and the central murder has resemblances (emphasized by Priscilla Meyer) to Nabokov's father's murder by an assassin who was trying to kill someone else. Still other readers de-emphasize any sort of "real story" and may doubt the existence of such a thing. In the interplay of allusions and thematic links, they find a multifaceted image of English literature, criticism, or glimpses of a higher world and an afterlife. Allusions and references The first two lines of John Shade's 999-line poem, "Pale Fire", have become Nabokov's most quoted couplet: I was the shadow of the waxwing slain By the false azure in the window pane Like many of Nabokov's fictions, Pale Fire alludes to others of his. "Hurricane Lolita" is mentioned, and Pnin appears as a minor character. There are many resemblances to "Ultima Thule" and "Solus Rex", two short stories by Nabokov intended to be the first two chapters of a novel in Russian that he never continued. The placename Thule appears in Pale Fire, as does the phrase solus rex (a chess problem in which one player has no pieces but the king). The book is also full of references to culture, nature, and literature. 
They include:
Bobolink
Maud Bodkin
The Brothers Karamazov
Robert Browning, including "My Last Duchess" and Pippa Passes (inspired in a wood near Dulwich)
Cedar, including a colloquial American meaning, juniper
Ben Chapman. Some have said the newspaper headline "Red Sox Beat Yanks 5–4 On Chapman's Homer" was genuine and "[u]nearthed by Nabokov in the stacks of the Cornell Library", but others have stated no such game occurred. However, a different player, Sam Chapman of the Philadelphia Athletics, did hit a home run in the 9th inning on September 29, 1938, to defeat the Yankees, 5–4. Another player, Ray Chapman of the Cleveland Indians, was the only person killed in a Major League baseball game, dying after being struck in the head by a ball thrown by a Yankees pitcher.
Charles II of England
Charles VI of France, known as Charles the Well-Beloved and Charles the Mad
Disa orchid and the butterflies Erebia disa and E. embla (which may lead to Disa and Embla)
T. S. Eliot and Four Quartets
"Der Erlkönig"
Et in Arcadia ego
Thomas Flatman
Edsel Ford (poet) and the poem "The Image of Desire"
Forever Amber
Robert Frost and the poems "Stopping by Woods on a Snowy Evening" and possibly "Of a Winter's Evening"
Oliver Goldsmith
Gradus ad Parnassum
Gutnish
Thomas Hardy and the poem "Friends Beyond" (for the word "stillicide")
Bret Harte and his character Colonel Starbottle
Hebe and the poem "Vesennyaya Groza" ("Spring Thunderstorm") by Fyodor Tyutchev
Sherlock Holmes and "The Adventure of the Empty House"
A Hero of Our Time
A. E. Housman, including "To an Athlete Dying Young"
In Memoriam A.H.H.
Strange Case of Dr Jekyll and Mr Hyde
Samuel Johnson, James Boswell, Boswell's Life of Johnson and Hodge
James Joyce
Kalevala
John Keats, including La Belle Dame sans Merci
The Konungs skuggsjá or Royal Mirror
Krummholz
Jean de La Fontaine and "The Ant and the Grasshopper" (or cicada)
Franklin Knight Lane
Angus McDiarmid or MacDiarmid, author of Striking and Picturesque Delineations...
The Magi, including Balthasar and Melchior
Novaya Zemlya
Papilio nitra (now P. zelicaon nitra) and P. indra
Parthenocissus
Edgar Allan Poe and the poem "To One in Paradise" (for the phrase "Dim gulf")
Alexander Pope and Jonathan Swift
Marcel Proust
François Rabelais
Red admiral butterfly, Vanessa atalanta
Alberto Santos-Dumont
Walter Scott, including "Glenfinlas, or Lord Ronald's Coronach", "The Lady of the Lake", and The Pirate
Robert Southey, in particular the Poet Laureate's rivalry with Lord Byron as alluded to in the latter's Don Juan dedication
Speyeria diana and S. atlantis
Thormodus Torfaeus
Waxwing
Pierinae
Word golf
William Wordsworth, including "The
Towards the end of the narrative, Kinbote all but states that he is in fact the exiled King Charles, living incognito; however, enough details throughout the story, as well as direct statements of ambiguous sincerity by Kinbote towards the novel's end, suggest that King Charles and Zembla are both fictitious. In the latter interpretation, Kinbote is delusional and has built an elaborate picture of Zembla complete with samples of a constructed language as a by-product of insanity; similarly, Gradus was simply an unhinged man trying to kill Shade, and his backstory as a revolutionary assassin is also made up. In an interview, Nabokov later claimed that Kinbote killed himself after finishing the book. The critic Michael Wood has stated, "This is authorial trespassing, and we don't have to pay attention to it", but Brian Boyd has argued that internal evidence points to Kinbote's suicide. One of Kinbote's annotations to Shade's poem (corresponding to line 493) addresses the subject of suicide at some length. Explanation of the title As Nabokov pointed out himself, the title of John Shade's poem is from Shakespeare's Timon of Athens: "The moon's an arrant thief, / And her pale fire she snatches from the sun" (Act IV, scene 3), a line often taken as a metaphor about creativity and inspiration. Kinbote quotes the passage but does not recognize it, as he says he has access only to an inaccurate Zemblan translation of the play "in his Timonian cave", and in a separate note he even rails against the common practice of using quotations as titles. Some critics have noted a secondary reference in the book's title to Hamlet, where the Ghost remarks how the glow-worm "'gins to pale his uneffectual fire" (Act I, scene 5). The title is first mentioned in the foreword: "I recall seeing him from my porch, on a brilliant morning, burning a whole stack of [index cards of drafts of the poem] in the pale fire of the incinerator...". 
Initial reception According to Norman Page, Pale Fire excited criticism as diverse as that of any of Nabokov's novels. Mary McCarthy's review was extremely laudatory; the Vintage edition excerpts it on the front cover. She tried to explicate hidden references and connections. Dwight Macdonald responded by saying the book was "unreadable" and both it and McCarthy's review were as pedantic as Kinbote. Anthony Burgess, like McCarthy, extolled the book, while Alfred Chester condemned it as "a total wreck". Some other early reviews were less decided, praising the book's satire and comedy but noting its difficulty and finding its subject slight or saying that its artistry offers "only a kibitzer's pleasure". Macdonald called the reviews he had seen, other than McCarthy's, "cautiously unfavorable". TIME magazine's 1962 review stated that "Pale Fire does not really cohere as a satire; good as it is, the novel in the end seems to be mostly an exercise in agility – or perhaps in bewilderment", though this did not prevent TIME from including the book in its 2005 list of the 100 best English-language novels published since 1923. The first Russian translation of the novel, made by Véra Nabokov, its dedicatee, was published in 1983 by Ardis in Ann Arbor, Michigan (Alexei Tsvetkov initially played an important role in this translation). After Nabokov's reputation was rehabilitated in the Soviet Union (his novels started being published there in 1986 and the first book composed entirely of Nabokov's works was printed in 1988), Pale Fire was published in 1991 in Sverdlovsk (in Sergei Ilyin's Russian translation). Interpretations Some readers concentrate on the apparent story, focusing on traditional aspects of fiction such as the relationship among the characters. In 1997, Brian Boyd published a much-discussed study arguing that the ghost of John Shade influenced Kinbote's contributions. 
He expanded this essay into a book in which he also argues that, in order to trigger Shade's poem, Hazel Shade's ghost induced Kinbote to recount his Zemblan delusions to Shade. Some readers, starting with Mary McCarthy and including Boyd, Nabokov's annotator Alfred Appel, and D. Barton Johnson, see Charles Kinbote as an alter-ego of the insane Professor V. Botkin, to whose delusions John Shade and the rest of the faculty of Wordsmith College generally condescend. Nabokov himself endorsed this reading, stating in an interview in 1962 (the novel's year of publication) that Pale Fire "is full of plums that I keep hoping somebody will find. For instance, the nasty commentator is not an ex-King of Zembla nor is he professor Kinbote. He is professor Botkin, or Botkine, a Russian and a madman." The novel's intricate structure of teasing cross-references leads readers to this "plum". The Index, supposedly created by Kinbote, features an entry for a "Botkin, V.," describing this Botkin as an "American scholar of Russian descent"—and referring to a note in the Commentary on line 894 of Shade's poem, in which no such person is directly mentioned but a character suggests that "Kinbote" is "a kind of anagram of Botkin or Botkine". In this interpretation, "Gradus" the murderer is an American named Jack Grey who wanted to kill Judge Goldsworth, whose house "Pale Fire's" commentator—whatever his "true" name is—is renting. Goldsworth had condemned Grey to an asylum from which he escaped shortly before mistakenly killing Shade, who resembled Goldsworth. Other readers see a story quite different from the apparent narrative. "Shadeans" maintain that John Shade wrote not only the poem, but the commentary as well, having invented his own death and the character of Kinbote as a literary device. According to Boyd, Andrew Field invented the Shadean theory and Julia Bader expanded it; Boyd himself espoused the theory for a time. 
In an alternative version of the Shadean theory, Tiffany DeRewal and Matthew Roth argued that Kinbote is not a separate person but is a dissociated, alternative personality of John Shade. (An early reviewer had mentioned that "a case might be made" for such a reading.) "Kinboteans", a decidedly smaller group, believe that Kinbote invented the existence of John Shade. Boyd credits the Kinbotean theory to Page Stegner and adds that most of its adherents are newcomers to the book. Some readers see the book as oscillating undecidably between these alternatives, like the Rubin vase (a drawing that may be two profiles or a goblet). Though a minority of commentators believe or at least accept the possibility that Zembla is as "real" as New Wye, most assume that Zembla, or at least the operetta-quaint and homosexually gratified palace life enjoyed by Charles Kinbote before he is overthrown, is imaginary in the context of the story. The name "Zembla" (taken from "Nova Zembla", a former latinization of Novaya Zemlya) may evoke popular fantasy literature about royalty such as The Prisoner of Zenda. As in other Nabokov books, however, the fiction is an exaggerated or comically distorted version of his own life as a son of privilege before the Russian Revolution and an exile afterwards, and the central murder has resemblances (emphasized by Priscilla Meyer) to Nabokov's father's murder by an assassin who was trying to kill someone else. Still other readers de-emphasize any sort of "real story" and may doubt the existence of such a thing. In the interplay of allusions and thematic links, they find a multifaceted image of |
citrates), tartaric acid, and lecithin. Nonsynthetic compounds for food preservation Citric and ascorbic acids target enzymes that degrade fruits and vegetables, e.g., mono/polyphenol oxidase which turns surfaces of cut apples and potatoes brown. Ascorbic acid and tocopherol, which are vitamins, are common preservatives. Smoking entails exposing food to a variety of phenols, which are antioxidants. Natural preservatives include rosemary and oregano extract, hops, salt, sugar, vinegar, alcohol, diatomaceous earth and castor oil. Traditional preservatives, such as sodium benzoate, have raised health concerns in the past. Benzoate was shown in a study to cause hypersensitivity in some asthma sufferers. This has caused reexamination of natural preservatives which occur in vegetables. Public awareness of food preservation Public awareness of food preservatives is uneven. Americans have a perception that food-borne illnesses happen more often in other countries. This may be true, but the occurrence of illnesses, hospitalizations, and deaths is still high. It is estimated by the Centers for Disease Control (CDC) that each year there are 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths linked to food-borne illness. The increasing demand for ready-to-eat fresh food products has led to challenges for food distributors regarding the safety and quality of their foods. Artificial preservatives meet some of these challenges by preserving freshness for longer periods of time, but these preservatives can cause negative side-effects as well. Sodium nitrite is a preservative used in lunch meats, hams, sausages, hot dogs, and bacon to prevent botulism. It serves the important function of controlling the bacteria that cause botulism, but sodium nitrite can react with proteins, or during cooking at high heat, to form carcinogenic N-nitrosamines. It has also been linked to cancer in lab animals. 
The commonly used sodium benzoate has been found to extend the shelf life of bottled tomato paste to 40 weeks without loss of quality. However, it can form the carcinogen benzene when combined with vitamin C. Many food manufacturers have reformulated their products to eliminate this combination, but a risk still exists. Consumption of sodium benzoate may also cause hyperactivity. For over 30 years, there has been a debate about whether or not preservatives and other food additives can cause hyperactivity. Studies have found that there may | that do not have strong governments to regulate food additives face either harmful levels of preservatives in foods or a complete avoidance of foods that are considered unnatural or foreign. These countries have also proven useful in case studies surrounding chemical preservatives, as they have been only recently introduced. In urban slums of highly populated countries, the knowledge about contents of food tends to be extremely low, despite consumption of these imported foods. Antimicrobial preservatives Antimicrobial preservatives prevent degradation by bacteria. This method is the most traditional and ancient type of preserving—ancient methods such as pickling and adding honey prevent microorganism growth by modifying the pH level. The most commonly used antimicrobial preservative is lactic acid. Common antimicrobial preservatives are presented in the table. Nitrates and nitrites are also antimicrobial. The detailed mechanisms of these chemical compounds range from inhibiting growth of the bacteria to the inhibition of specific enzymes. Water-based home and personal care products use broad-spectrum preservatives, such as isothiazolinones and formaldehyde releasers, which may cause sensitization, leading to allergic skin reactions. Antioxidants The oxidation process spoils most foods, especially those with a high fat content. Fats quickly turn rancid when exposed to oxygen. Antioxidants prevent or inhibit the oxidation process. 
The most common antioxidant additives are ascorbic acid (vitamin C) and ascorbates. Thus, antioxidants are commonly added to oils, cheese, and chips. Other antioxidants include the phenol derivatives BHA, BHT, TBHQ and propyl gallate. These agents suppress the formation of hydroperoxides. Other preservatives include ethanol and methylchloroisothiazolinone. A variety of agents are added to sequester (deactivate) metal ions that otherwise catalyze the oxidation of fats. Common sequestering agents are disodium EDTA, citric acid (and citrates), tartaric acid, and lecithin. Nonsynthetic compounds for food preservation Citric and ascorbic acids target enzymes that degrade fruits and vegetables, e.g., mono/polyphenol oxidase which turns surfaces of cut apples and potatoes brown. Ascorbic acid and tocopherol, which are vitamins, are common preservatives. Smoking entails exposing food to a variety of phenols, which are antioxidants. Natural preservatives include rosemary and oregano extract, hops, salt, |
defined primarily in terms of ribosomal RNA (rRNA) sequences. The "Proteobacteria" are divided into nine classes with validly published names, referred to by the Greek letters alpha through zeta, the Acidithiobacillia, Hydrogenophilalia, and Oligoflexia. These were previously regarded as subclasses of the phylum, but they are now treated as classes. These classes are monophyletic. The genus Acidithiobacillus, part of the Gammaproteobacteria until it was transferred to class Acidithiobacillia in 2013, was previously regarded as paraphyletic to the Betaproteobacteria according to multigenome alignment studies. In 2017, the Betaproteobacteria was subject to major revisions and the class Hydrogenophilalia was created to contain the order Hydrogenophilales. Proteobacterial classes with validly published names include some prominent genera, e.g.: Alphaproteobacteria: Brucella, Rhizobium, Agrobacterium, Caulobacter, Rickettsia, Wolbachia, etc.; Betaproteobacteria: Bordetella, Ralstonia, Neisseria, Nitrosomonas, etc.; Gammaproteobacteria: Escherichia, Shigella, Salmonella, Yersinia, Buchnera, Haemophilus, Vibrio, Pseudomonas, etc.; Deltaproteobacteria: Desulfovibrio, Geobacter, Bdellovibrio, etc.; Epsilonproteobacteria: Helicobacter, Campylobacter, Wolinella, etc.; Zetaproteobacteria: Mariprofundus, Ghiorsea; Oligoflexia: Oligoflexus; Acidithiobacillia: Acidithiobacillus thiooxidans, Thermithiobacillus tepidarius; Hydrogenophilalia: Hydrogenophilus thermoluteolus, Tepidiphilus margaritifer. Transformation Transformation, a process in which genetic material passes from one bacterium to another, has been reported in at least 30 species of "Proteobacteria" distributed in the classes alpha, beta, gamma and epsilon. The best-studied "Proteobacteria" with respect to natural genetic transformation are the medically important human pathogens Neisseria gonorrhoeae (class beta), Haemophilus influenzae (class gamma) and Helicobacter pylori (class epsilon). 
Natural genetic transformation is a sexual process involving DNA transfer from one bacterial cell to another through the intervening medium and the integration of the donor sequence into the recipient genome. In pathogenic "Proteobacteria", transformation appears to serve as a DNA repair process that protects the pathogen's DNA from attack by its host's phagocytic defenses that employ oxidative free radicals. | of the microbiotic community in the lower reproductive tract of women. These species are associated with inflammation. Some Alphaproteobacteria can grow at very low levels of nutrients and have unusual morphology such as stalks and buds. Others include agriculturally important bacteria capable of inducing nitrogen fixation in symbiosis with plants. The type order is the Caulobacterales, comprising stalk-forming bacteria such as Caulobacter. The mitochondria of eukaryotes are thought to be descendants of an alphaproteobacterium. The Betaproteobacteria are highly metabolically diverse and contain chemolithoautotrophs, photoautotrophs, and generalist heterotrophs. The type order is the Burkholderiales, comprising an enormous range of metabolic diversity, including opportunistic pathogens. The Gammaproteobacteria are the largest class in terms of species with validly published names. The type order is the Pseudomonadales, which includes the genera Pseudomonas and the nitrogen-fixing Azotobacter. The Deltaproteobacteria include bacteria that are predators on other bacteria and are important contributors to the anaerobic side of the sulfur cycle. The type order is the Myxococcales, which includes organisms with self-organising abilities such as Myxococcus spp. The Epsilonproteobacteria are often slender, Gram-negative rods that are helical or curved. 
The type order is the Campylobacterales, which includes important food pathogens such as Campylobacter spp. The Zetaproteobacteria are iron-oxidizing neutrophilic chemolithoautotrophs, distributed worldwide in estuaries and marine habitats. The type order is the Mariprofundales. The Hydrogenophilalia are obligate thermophiles and include heterotrophs and autotrophs. The type order is the Hydrogenophilales. The Acidithiobacillia contain only sulfur-, iron-, and uranium-oxidising autotrophs. The type order is the Acidithiobacillales, which includes economically important organisms used in the mining industry such as Acidithiobacillus spp. The Oligoflexia are filamentous aerobes. The type order is the Oligoflexales, which contains the genus |
the most intense reaction. Heightened interest results in higher attendance, increased ticket sales, higher ratings on television broadcasts (greater ad revenue), higher pay-per-view buyrates, and sales of branded merchandise and recorded video footage. All of these contribute to the profit of the promotion company. Character/gimmick In Latin America and English-speaking countries, most wrestlers (and other on-stage performers) portray character roles, sometimes with personalities wildly different from their own. These personalities are a gimmick intended to heighten interest in a wrestler without regard to athletic ability. Some can be unrealistic and cartoon-like (such as Doink the Clown), while others carry more verisimilitude (such as Chris Jericho, The Rock, John Cena, Steve Austin, and CM Punk). In lucha libre, many characters wear masks, adopting a secret identity akin to a superhero or a supervillain, a near-sacred tradition. An individual wrestler may use his real name, or a minor variation of it, for much of his career, such as Bret Hart, John Cena and Randy Orton. Others can keep one ring name for their entire career (Shawn Michaels, CM Punk and Ricky Steamboat), or may change from time to time to better suit the demands of the audience or company. Sometimes a character is owned and trademarked by the company, forcing the wrestler to find a new one when he leaves (although a simple typeset change, such as changing Rhyno to Rhino, can get around this), and sometimes a character is owned by the wrestler. Sometimes, a wrestler may change his legal name to obtain ownership of his ring name (Andrew Martin and Warrior). Many wrestlers (such as The Rock and The Undertaker) are strongly identified with their character, even responding to the name in public or between friends. It is considered proper decorum for fellow wrestlers to refer to each other by their stage names/characters rather than their birth/legal names, unless otherwise introduced. 
A character can become so popular that it appears in other media (Hulk Hogan and El Santo) or even gives the performer enough visibility to enter politics (Antonio Inoki and Jesse Ventura). Typically, matches are staged between a protagonist (historically an audience favorite, known as a babyface, or "the good guy") and an antagonist (historically a villain with arrogance, a tendency to break rules, or other unlikable qualities, called a heel, or "the bad guy"). In recent years, however, antiheroes have also become prominent in professional wrestling. There is also a less common role of a "tweener", who is neither fully face nor fully heel yet able to play either role effectively (case in point, Samoa Joe during his first run in Impact Wrestling from June 2005 to November 2006). At times, a character may "turn", altering their face/heel alignment. This may be an abrupt, surprising event, or it may slowly build over time. It is almost always accomplished with a marked change in behavior. Some turns become defining points in a career, as when Hulk Hogan turned heel after being a top face for over a decade. Others may have no noticeable effect on the character's status. If a character repeatedly switches between face and heel, this lessens the effect of such turns, and may result in apathy from the audience. Big Show, for example, has had more heel and face turns than anyone in WWE history. As with personae in general, a character's face or heel alignment may change with time, or remain constant over its lifetime (the most famous example of the latter is Ricky Steamboat, a WWE Hall of Famer who remained a babyface throughout his entire career). 
Sometimes a character's heel turn will become so popular that eventually the audience response will alter the character's heel-face cycle to the point where the heel persona will, in practice, become a face persona, and what was previously the face persona will turn into the heel persona, as when Dwayne Johnson first began using "The Rock" persona as a heel character, as opposed to his original "Rocky Maivia" babyface persona. Another legendary example is Stone Cold Steve Austin, who was originally booked as a heel, with such mannerisms as drinking on the job, using profanity, breaking company property, and even breaking into people's private homes. However, much to WWF's surprise, the fans' response to Austin was so positive that he effectively became one of the most popular antiheroes in professional wrestling. Austin, along with the stable of D-Generation X, Bret Hart and his Hart Foundation, is generally credited with ushering in the Attitude Era of WWF programming. Story While real exhibition matches are now not uncommon, most matches tell a story analogous to an episode of a serial drama: the face will from time to time win (triumph) or from time to time lose (tragedy), and longer story arcs can result from a series of matches. Since most promotions have a championship title, opposition for the championship is a frequent impetus for stories. For added stakes, anything from a character's own hair to his job can be wagered in a match. Some matches are designed to further the story of only one participant. It could be intended to portray an unstoppable force, a lucky underdog, a sore loser, or any other characterization. Sometimes non-wrestling vignettes are shown to enhance a character's image without the need for matches. Other stories result from a natural rivalry. Outside of performance, these are referred to as feuds. A feud can exist between any number of participants and can last from a few days to decades. 
The feud between Ric Flair and Ricky Steamboat lasted from the late 1970s into the early 1990s and allegedly spanned over two thousand matches (although most of those matches were mere dark matches). The career-spanning history between characters Mike Awesome and Masato Tanaka is another example of a long-running feud, as is the case of Steve Austin vs. Vince McMahon, one of the most lucrative feuds in the World Wrestling Federation during 1998 and 1999. In theory, the longer a feud is built up, the more audience interest (a.k.a. heat) it generates. The main event of a wrestling show is generally the most heated. Commonly, a heel will hold the upper hand over a face until a final showdown, heightening dramatic tension as the face's fans desire to see him win. Throughout the history of professional wrestling, many other elements of media have been utilized in professional wrestling storytelling: pre- and post-match interviews, "backstage" skits, positions of authority and worked behind-the-scenes feuds, division rankings (typically the #1-contendership spot), contracts, lotteries, news stories on websites, and in recent years social media. Also, anything that can be used as an element of drama can exist in professional wrestling stories: romantic relationships (including love triangles and marriage), racism, classism, nepotism, favoritism, corporate corruption, family bonds, personal histories, grudges, theft, cheating, assault, betrayal, bribery, seduction, stalking, confidence tricks, extortion, blackmail, substance abuse, self-doubt, self-sacrifice; even kidnapping, sexual fetishism, necrophilia, misogyny, rape and death have been portrayed in wrestling. Some promotions have included supernatural elements such as magic, curses, the undead and Satanic imagery (most notably the Undertaker and his Ministry of Darkness, a stable that regularly performed evil rituals and human sacrifice in Satanic-like worship of a hidden power figure). 
Celebrities are also sometimes involved in storylines. Commentators have become important in communicating the relevance of the characters' actions to the story at hand, filling in past details and pointing out subtle actions that may otherwise go unnoticed. Promos A main part of the storytelling in wrestling is the promo, short for promotional interview. Promos are performed, or "cut" in wrestling jargon, for a variety of reasons, including to heighten interest in a wrestler, or to hype an upcoming match. Since the crowd is often too loud or the venue too large for promos to be heard naturally, wrestlers will use amplification when speaking in the ring. Unlike most Hollywood acting, large and highly visible handheld microphones are typically used and wrestlers often speak directly to the audience. Championships Professional wrestling mimics the structure of title match combat sports. Participants compete for a championship and must defend it after winning it. These titles are represented physically by a title belt that can be worn by the champion. In the case of team wrestling, there is a title belt for each member of the team. Almost all professional wrestling promotions have one major title, and some have more. Championships are designated by divisions of weight, height, gender, wrestling style and other qualifications. Typically, each promotion only recognizes the "legitimacy" of their own titles, although cross-promotion does happen. When one promotion absorbs or purchases another, the titles from the defunct promotion may continue to be defended in the new promotion or be decommissioned. Behind the scenes, the bookers in a company will place the title on the most accomplished performer, or those the bookers believe will generate fan interest in terms of event attendance and television viewership. 
Historically, a world champion was typically a legit shooter/hooker who had the skills to prevent double crosses by would-be shooters who would deviate from the planned finish for personal glory. Lower ranked titles may also be used on performers who show potential, thus allowing them greater exposure to the audience. However, other circumstances may also determine the use of a championship. A combination of a championship's lineage, the caliber of performers as champion, and the frequency and manner of title changes, dictates the audience's perception of the title's quality, significance and reputation. A wrestler's championship accomplishments can be central to their career, becoming a measure of their performance ability and drawing power. In general, a wrestler with multiple title reigns or an extended title reign is indicative of a wrestler's ability to maintain audience interest or a wrestler's ability to perform in the ring. As such, the most accomplished or decorated wrestlers tend to be revered as legends due to the number of title reigns they hold. American wrestler Ric Flair has had multiple world heavyweight championship reigns spanning over three decades. Japanese wrestler Último Dragón once held and defended a record 10 titles simultaneously. Non-standard matches Often a match will take place under additional rules, usually serving as a special attraction or a climactic point in a feud or storyline. Sometimes this will be the culmination of an entire feud, ending it for the immediate future (known as a blowoff match). Perhaps the most well-known non-standard match is the cage match, in which the ring is surrounded by a fence or similar metal structure, with the express intention of preventing escape or outside interference—and with the added bonus of the cage being a potentially brutal weapon or platform for launching attacks. 
The WWE has another provision where a standard cage match can end with one wrestler or wrestling team escaping the cage through the door or over the top. Another example is the WWE's Royal Rumble match, which involves thirty participants in a random and unknown order. The Rumble match is itself a spectacle in that it is a once-yearly event with multiple participants, including individuals who might not interact otherwise. It also serves as a catalyst for the company's ongoing feuds, as well as a springboard for new storylines. Ring entrance While the wrestling matches themselves are the primary focus of professional wrestling, a key dramatic element of the business can be entrances of the wrestlers to the arena and ring. It is typical for a wrestler to get their biggest crowd reaction (or "pop") for their ring entrance, rather than for anything they do in the wrestling match itself, especially if former main event stars are returning to a promotion after a long absence. All notable wrestlers now enter the ring accompanied by music, and regularly add other elements to their entrance. The music played during the ring entrance will usually mirror the wrestler's personality. Many wrestlers, particularly in America, have music and lyrics specially written for their ring entrance. While invented long before, the practice of including music with the entrance gained rapid popularity during the 1980s, largely as a result of the huge success of Hulk Hogan and the WWF, and their Rock 'n' Wrestling Connection. When a match is won, the victor's theme music is usually also played in celebration. Because wrestling is predetermined, a wrestler's entrance music will play as they enter the arena, even if they are, in kayfabe, not supposed to be there. 
For example, from 2012 through 2014, The Shield was a trio of wrestlers who were (in kayfabe) not at the time under contract with WWE (hence their gimmick of entering the ring through the crowd), but they still had entrance music which was played whenever they entered the arena, despite the fact that they were kayfabe invaders. With the introduction of the Titantron entrance screen in 1997, WWF/WWE wrestlers also had entrance videos made that would play along with their entrance music. Other dramatic elements of a ring entrance can include: Pyrotechnics such as a ring of fire for The Brood when they ascend to the stage, multi-colour fireworks (most notably for Edge), fire for Kane and Seth Rollins, a stage of smoke for Finn Bálor and (for a short period of time) falling fireworks for Christian Cage. Additional visual graphics or staging props to complement the entrance video/routine or further emphasize the character. For instance, Kane's entrance graphics employ heavy use of fire-themed visuals, The Undertaker's entrance features dark lighting, fire, fog and dry ice, and lightning-themed effects, John Morrison's entrance would feature use of multicolored psychedelic style patterns, The Miz has in the past incorporated inflatable lettering spelling out the word "AWESOME" into his entrance, and Montel Vontavious Porter frequently used an inflatable entrance tunnel during his WWE tenure. Goldust has been known to use on-screen visual effects in his entrance to simulate the presentation of a feature film (i.e. widescreen, production company credits), so as to emphasize his Hollywood-themed film aficionado character. Brodus Clay entered with disco ball lighting effects to emphasize his "Funkasaurus" character. A distinct sound or opening note in the music (used to elicit a Pavlovian response from the crowd). For example, the glass shattering in Steve Austin's entrance theme, The Undertaker's signature bell toll, and the sound of bells and a cow's moo in JBL's theme. 
Darkening of the arena, often accompanied by mood lighting or strobe lighting, such as in The Undertaker's, Triple H's, or Sting's entrances. Certain colors of lighting have been associated with specific wrestlers; for instance, blue lighting for The Undertaker and Alexa Bliss, green lighting for Triple H, D-Generation X, and Shane McMahon, a mixture of red and yellow lighting for Brock Lesnar, a lot of red for Seth Rollins (mainly for his "Embrace The Vision" character, i.e. when using his theme named "Visionary"), a mixture of red and orange lighting for Kane, multicolored lighting for John Morrison, gold lighting for Goldust, pink lighting for Val Venis and Trish Stratus, and so forth. Costumes that evoke "otherworldly" or "fictional" themes, with examples such as Big Van Vader's bio-mechanical themed headdress which spewed steam, Pyro's fire-shooting outfit, Shockmaster's bejeweled stormtrooper helmet, Ricky Steamboat's dragon costume and Mankind's leather mask, etc. Entering in a manner in keeping with their character traits, such as a fast, highly energetic entrance, or a slow, stoic entrance. For example, The Ultimate Warrior would run at high speed down the entrance ramp and into the ring while Randy Orton would walk slowly. The Undertaker has adopted one of the most notable entrances, taking around 4 to 5 minutes, darkening the whole arena, and performing a slow, intimidating walk. Goldberg walked slowly to the ring while being escorted by security guards from the locker room. Like sound effects, some entrance mannerisms often become signature to individual wrestlers. For example, Steve Austin's entrance often involves him standing on the second turnbuckle, raising his hands in the air for a few seconds, and then doing the same thing for the other three turnbuckles, a mannerism which has become just as much a signature part of Austin's entrance as the glass-shattering sound effect. Driving a vehicle into the arena. 
For example, Eddie Guerrero would arrive into the arena in a lowrider, The Undertaker (in his "American Bad Ass" biker gimmick), Chuck Palumbo, Tara, and the Disciples of Apocalypse on motorcycles, The Mexicools on riding lawn mowers, JBL in his limousine, Alberto Del Rio arriving into the arena in various luxury cars, Steve Austin driving an all-terrain vehicle, and Camacho and Hunico entering on a lowrider bicycle. Acting out a trademark behavior, such as posing to display their muscularity, mounting the ring ropes, or sitting in the corner. Talking to the crowd using a distinctive patter. For instance, chanting or rapping along with the music (i.e. Road Dogg, R-Truth). Another example is Vickie Guerrero entering to no music, but announcing her arrival with the words "Excuse me!" Many heels with narcissistic gimmicks (Lex Luger, Shawn Michaels, Cody Rhodes, Paul Orndorff, etc.) would admire themselves in a mirror on their way to the ring. Coming through the audience, such as The Sandman's beer drinking and can smashing entrance, or Diamond Dallas Page's exit through the crowd, or Jon Moxley entering through the crowd. Accompaniment by a ringside crew or personal security, an example of which would be Goldberg. Entering the arena by a lift in the stage, such as Kurt Angle, The Brood and Rey Mysterio If a wrestler is a current champion, he will attempt to visually draw attention to his championship belt by either holding it high over his head or (if the belt is worn around the waist) moving his hands across it or pointing to it. Recently, Bobby Lashley has incorporated graphics and a dramatic opening into his entrance. The opening starts with lightning graphics hitting the stage, then going into a montage (for a short period of time, Lashley used parts of his entrance theme in the montage). Afterwards, graphics of him appear with the collective phrase "All Mighty" above his head, before switching to his entrance music. 
Another method of entry involves descending from the ceiling with a zip-line or rappel line and stunt harness. This has been done by Shawn Michaels at WrestleMania XII and by Sting many times in WCW and Impact, and the practice gained major controversy over its role in the death of wrestler Owen Hart at Over the Edge in 1999. Special ring entrances are also developed for big occasions, most notably the WrestleMania event. For example, WrestleMania III and VI both saw all wrestlers enter the arena on motorized miniature wrestling rings. Live bands are sometimes hired to perform live entrance music at special events. John Cena and Triple H are particularly notable in recent years for their highly theatrical entrances at WrestleMania.

Other types of wrestling

Women's wrestling

The women's division of professional wrestling has maintained a recognized world champion since 1937, when Mildred Burke won the original World Women's title. She then formed the World Women's Wrestling Association in the early 1950s and recognized herself as the first champion, although the championship would be vacated upon her retirement in 1956. The NWA, however, ceased to acknowledge Burke as the Women's World champion in 1954, and instead acknowledged June Byers as champion after a controversial finish to a high-profile match between Burke and Byers that year. Upon Byers's retirement in 1964, The Fabulous Moolah, who had won a junior heavyweight version of the NWA World Women's Championship (the predecessor to the WWE Women's Championship) in a tournament back in 1958, was recognized by most NWA promoters as champion by default.

Intergender wrestling

For most of its history, men and women rarely competed against each other in professional wrestling, as it was deemed to be unfair and unchivalrous. Andy Kaufman used this to gain notoriety when he created an Intergender Championship and declared it open to any female challenger. This led to a long (worked) feud with Jerry Lawler.
In the 1980s, mixed tag team matches began to take place, with a male and a female on each team and a rule stating that each wrestler could only attack the opponent of the same gender. If a tag was made, the other team had to automatically switch their legal wrestler as well. Despite these restrictions, many mixed tag matches do feature some physical interaction between participants of different genders. For example, a heel may take a cheap shot at the female wrestler of the opposing team to draw a negative crowd reaction. In lucha libre, cheap shots and male-female attacks are not uncommon. Intergender singles bouts were first fought on a national level in the 1990s. This began with Luna Vachon, who faced men in ECW and WWF. Later, Chyna became the first female to hold a belt that was not exclusive to women when she won the WWF Intercontinental Championship. Intergender wrestling was not uncommon in Impact Wrestling. ODB had participated in intergender matches and once held the Impact Knockouts Tag Team Championship with Eric Young for a record 478 days. Other notable Impact Knockouts that competed in intergender matches include Scarlett Bordeaux; Tessa Blanchard, who became the first woman to win the Impact World Championship; and Jordynne Grace, who became the inaugural Impact Digital Media Champion.

Midget wrestling

Midget wrestling can be traced to professional wrestling's carnival and vaudeville origins. In recent years, the popularity and prevalence of midgets in wrestling has greatly decreased due to wrestling companies depriving midget divisions of storylines and feuds. However, WWE has made a few attempts to enter this market, with their "minis" in the 1990s and the "junior's league" as recently as 2006. It is still a popular form of entertainment in Mexican wrestling, mostly as a "sideshow". Some wrestlers may have their own specific "mini-me", like Mascarita Sagrada for Máscara Sagrada, or Quije for Alebrije.
There are also cases in which midgets can become valets for a wrestler, and even get physically involved in matches, like Alushe, who often accompanies Tinieblas, or KeMonito, who is portrayed as Consejo Mundial de Lucha Libre's mascot and is also a valet for Mistico. Dave Finlay was often aided in his matches by a midget known mainly as Hornswoggle while in WWE, who hid under the ring and gave a shillelagh to Finlay to use on his opponent. Finlay also occasionally threw him at his opponents. Hornswoggle has also been given a run with the WWE Cruiserweight Championship and feuded with D-X in 2009.

Styles and characteristics in other countries

The U.S., Japan and Mexico are three countries where there is a huge market and high popularity for professional wrestling, but their styles of professional wrestling differ, given their long periods of independent development. Professional wrestling in the U.S. tends to have a heavy focus on story building and the establishment of characters (and their personalities). There is a story for each match, and often a longer story spanning successive matches. The stories usually contain characters like faces and heels, and less often antiheroes and tweeners. It is a "triumph" if the face wins, while it is a "tragedy" if the heel wins. The characters usually have strong and sharp personalities, and the opposition between faces and heels is very intense in the story; the heels may even attack the faces during TV interviews. The relationship between different characters can also be very complex. Although professional wrestling in Mexico (lucha libre) also has stories and characters, they are less emphasized. Mexican wrestling tradition tends to favor spectacular maneuvers, especially aerial holds, whereas professional wrestlers in the U.S. more often rely on power moves and strikes to subdue their opponents.
The difference in styles is due to the independent evolution of the sport in Mexico beginning in the 1930s and the fact that wrestlers in the cruiserweight division are often the most popular wrestlers in Mexican lucha libre. Wrestlers often execute high-flying moves characteristic of lucha libre by utilizing the wrestling ring's ropes to catapult themselves towards their opponents, using intricate combinations in rapid-fire succession, and applying complex submission holds. Lucha libre is also known for its tag team wrestling matches, in which the teams are often made up of three members, instead of two as is common in the U.S. The style of Japanese professional wrestling (puroresu) is different again. Although it originated in the traditional American style of wrestling and remains within the same genre, it has become an entity in itself. Despite the similarity to its American counterpart in that the outcome of the matches remains predetermined, the psychology and presentation of the sport are different. In most of the largest promotions, such as New Japan Pro-Wrestling, All Japan Pro Wrestling and Pro Wrestling Noah, it is treated as a full-contact combat sport, mixing hard-hitting martial arts strikes with shoot-style submission holds, whereas in the U.S. it is more openly regarded as an entertainment show. Wrestlers incorporate kicks and strikes from martial arts disciplines, and a strong emphasis is placed on submission wrestling. Storylines in Japan are not as intricate as the involved storylines of the U.S.; more emphasis is placed on the concept of "fighting spirit", meaning the wrestler's display of physical and mental stamina is valued far more than theatrics.
Many of Japan's wrestlers, including top stars such as Shinya Hashimoto, Riki Chōshū and Keiji Mutoh, came from a legitimate martial arts background, and many Japanese wrestlers in the 1990s began to pursue careers in mixed martial arts organizations such as Pancrase and Shooto, which at the time retained the original look of puroresu but were actual competitions. Other companies, such as Michinoku Pro Wrestling and Dragon Gate, wrestle in a style similar to Mexican companies like AAA and CMLL. This is known as "Lucharesu".

Culture

Professional wrestling has developed its own cultures, both internal and external. Those involved in producing professional wrestling have developed a kind of global fraternity, with familial bonds, shared language and passed-down traditions. New performers are expected to "pay their dues" for a few years by working in lower-profile promotions and working as ring crew before working their way upward. The permanent rosters of most promotions develop a backstage pecking order, with veterans mediating conflicts and mentoring younger wrestlers. For many decades (and still to a lesser extent today) performers were expected to keep the illusion of wrestling's legitimacy alive even while not performing, essentially acting in character any time they were in public. Some veterans speak of a "sickness" among wrestling performers, an inexplicable pull to remain active in the wrestling world despite the devastating effects the job can have on one's life and health. Fans of professional wrestling have their own subculture, comparable to those of science fiction, video games, or comic books. Those who are interested in backstage occurrences, future storylines and the reasoning behind company decisions read newsletters written by journalists with inside ties to the wrestling industry. These "rags" or "dirt sheets" have expanded onto the Internet, where their information can be dispensed on an up-to-the-minute basis. Some have expanded into radio shows.
Some fans enjoy a pastime of collecting recordings of wrestling shows from specific companies, of certain wrestlers, or of specific genres. The internet has given fans exposure to worldwide variations of wrestling they would otherwise be unable to see. Since the 1990s, many companies have been founded which deal primarily in wrestling footage. When WWE purchased both WCW and ECW in 2001, it also obtained the entire past video libraries of both promotions and has released many past matches online and on home video. Like some other sports, fantasy leagues have developed around professional wrestling. Some take this concept further by creating e-feds (electronic federations), where a user can create their own fictional wrestling character and role-play storylines with other users, leading to scheduled "shows" where match results are determined by the organizers, usually based on a combination of the characters' statistics and the players' roleplaying aptitude, sometimes with audience voting.

Professional wrestling in mainstream culture

From the first established world championship, the top professional wrestlers have garnered fame within mainstream society. Each successive generation has produced a number of wrestlers who extend their careers into the realms of music, acting, writing, business, politics or public speaking, and are known even to those who are unfamiliar with wrestling in general. Conversely, celebrities from other sports or general pop culture also become involved with wrestling for brief periods of time. A prime example of this is The Rock 'n' Wrestling Connection of the 1980s, which combined wrestling with MTV. Professional wrestling is often portrayed within other works using parody, and its general elements have become familiar tropes and memes in American culture. Some terminology originating in professional wrestling has found its way into the common vernacular.
Phrases such as "body slam", "sleeper hold" and "tag team" are used by those who do not follow professional wrestling. The term "smackdown", popularized by The Rock and SmackDown! in the 1990s, has been included in Merriam-Webster dictionaries since 2007. Many television shows and films have been produced which portray in-character professional wrestlers as protagonists, such as Ready to Rumble and ¡Mucha Lucha!

both members on the outside at any given time. In these matches, tags can be made between any two teams, regardless of whether they are on the same side or not. As a result of this stipulation, tags between different teams are not usually a mutual effort; a non-legal wrestler will usually tag themselves in against the legal wrestler's will. A legal wrestler will only voluntarily tag themselves out to another team if their own partner is incapacitated, or if they are being held in a submission hold and are closer to another tag team than their own. Sometimes, poly-sided matches that pit every man for himself will incorporate tagging rules. Outside of kayfabe, this is done to give wrestlers a break from the action (as these matches tend to go on for long periods of time), and to make the action in the ring easier to choreograph. One of the most mainstream examples of this is the Four-Corner match, the most common type of match in WWE before it was replaced by its equivalent, the Fatal Four-Way: four wrestlers, each for himself, fight in a match, but only two wrestlers can be in the match at any given time. The other two are positioned in the corner, and tags can be made between any two wrestlers. In a Texas Tornado Tag Team match, all the competitors are legal in the match, and tagging in and out is not necessary. All matches fought under hardcore rules (such as no disqualification, no holds barred, ladder match, etc.) are contested under de facto Texas Tornado rules, since the inability of a referee to issue a disqualification renders any tagging requirements moot.
Regardless of the rules of tagging, a wrestler cannot pin his or her own tag team partner, even if it is technically possible under the rules of the match (e.g. Texas Tornado rules, or a three-way tag team match). This is called the "Outlaw Rule", because the first team to attempt to use this tactic (in an attempt to unfairly retain their tag team titles) was the New Age Outlaws.

Decisions

Pinfall

To score by pinfall, a wrestler must pin both his opponent's shoulders against the mat while the referee slaps the mat three times (referred to as a "three count"). This is the most common form of defeat. The pinned wrestler must also be on his back; if he is lying on his stomach, it usually does not count. A count may be started at any time that a wrestler's shoulders are down (both shoulders touching the mat), back-first, and any part of the opponent's body is lying over the wrestler. This often results in pins that can easily be kicked out of, if the defensive wrestler is even slightly conscious. For example, an attacking wrestler who is half-conscious may simply drape an arm over an opponent, or a cocky wrestler may place his foot gently on the opponent's body, prompting a three-count from the referee. Illegal pinning methods include using the ropes for leverage and hooking the opponent's clothing, which are therefore popular cheating methods for heels, unless certain stipulations make such an advantage legal. Pins such as these are rarely seen by the referee and are subsequently often used by heels, and on occasion by cheating faces, to win matches. Even if it is noticed, it is rare for such an attempt to result in a disqualification (see below); instead it simply results in nullification of the pin attempt, so the heel wrestler rarely has anything to lose by trying it anyway. Occasionally, there are instances where a pinfall is made where both wrestlers' shoulders were on the mat for the three-count.
This situation will most likely lead to a draw, and in some cases a continuation of the match or a future match to determine the winner.

Submission

To score by submission, the wrestler must make his opponent give up, usually, but not necessarily, by putting him in a submission hold (e.g. figure-four leglock, armlock, sleeper hold). A wrestler may voluntarily submit by verbally informing the referee (usually used in moves such as the Mexican Surfboard, where all four limbs are incapacitated, making tapping impossible). Also, since Ken Shamrock popularized it in 1997, a wrestler can indicate a voluntary submission by "tapping out", that is, tapping a free hand against the mat or against an opponent. Occasionally, a wrestler will reach for a rope (see rope breaks below), only to put his hand back on the mat so he can crawl towards the rope some more; this is not a submission, and the referee decides what his intent is. Submission was initially a large factor in professional wrestling, but following the decline of the submission-oriented catch-as-catch-can style in mainstream professional wrestling, the submission largely faded. Despite this, some wrestlers, such as Chris Jericho, Ric Flair, Bret Hart, Kurt Angle, Ken Shamrock, Dean Malenko, Chris Benoit, and Tazz, became famous for winning matches via submission. A wrestler with a signature submission technique is portrayed as better at applying the hold, making it more painful or more difficult to get out of than others who use it, or can be falsely credited with inventing the hold (such as when Tazz popularized the kata ha jime judo choke in pro wrestling as the "Tazzmission"). Since all contact between the wrestlers must cease if any part of the body is touching, or underneath, the ropes, many wrestlers will attempt to break submission holds by deliberately grabbing the bottom rope. This is called a "rope break", and it is one of the most common ways to break a submission hold.
Most holds leave an arm or leg free, so that the person can tap out if he wants. Instead, he uses these free limbs to either grab one of the ring ropes (the bottom one is the most common, as it is nearest the wrestlers, though other ropes are sometimes used for standing holds, such as Chris Masters's Master Lock) or drape his foot across, or underneath, one. Once this has been accomplished, and witnessed by the referee, the referee will demand that the offending wrestler break the hold, and start counting to five if the wrestler does not. If the referee reaches the count of five and the wrestler still does not break the hold, he is disqualified. If a manager decides that his client wrestler should tap out, but cannot convince the wrestler himself to do so, he may "throw in the towel" (by literally taking a gym towel and hurling it into the ring where the referee can see it). This is the same as a submission, as in kayfabe the manager is considered the wrestler's agent and therefore authorized to make formal decisions (such as forfeiting a match) on the client's behalf.

Knockout

Passing out in a submission hold constitutes a loss by technical knockout. To determine if a wrestler has passed out in WWE, the referee usually picks up and drops his hand. If it drops to the mat or floor three consecutive times without the wrestler having the strength to hold it up, the wrestler is considered to have passed out. At one point this was largely ignored; however, the rule is now much more commonly observed for safety reasons. If the wrestler has passed out, the opponent wins by technical knockout or technical submission. A wrestler can also win by technical knockout even if he does not resort to submission holds, but instead attacks the opponent to the point of unconsciousness.
To check for a technical knockout in this manner, a referee will wave his hand in front of the wrestler's face and, if this produces no reaction of any kind, award the victory to the other wrestler.

Countout

A countout (alternatively "count-out" or "count out") happens when a wrestler is out of the ring long enough for the referee to count to ten (twenty in some promotions), causing him to lose the match. The count is broken and restarted when a wrestler in the ring exits the ring. Playing into this, some wrestlers will "milk" the count by sliding into the ring and immediately sliding back out. As the wrestler was technically inside the ring for a split second before exiting again, this is sufficient to restart the count. This is often referred to by commentators as "breaking the count". Heels often use this tactic in order to buy themselves more time to catch their breath, or to attempt to frustrate their babyface opponents. If all the active wrestlers in a match are down inside the ring at the same time, the referee will begin a count (usually ten seconds, twenty in Japan). If nobody rises to their feet by the end of the count, the match is ruled a draw. Any participant who stands up in time ends the count for everyone else. In a Last Man Standing match, this form of countout is the only way the match can end, so the referee counts whenever one or more wrestlers are down, and one wrestler standing up before the ten-count does not stop the count for another wrestler who is still down. In some promotions (and most major modern ones), championships cannot change hands via a countout unless the on-screen authority declares it so for at least one match, although in others, championships may change hands via countout. Heels are known to take advantage of this and will intentionally get counted out when facing difficult opponents, especially when defending championships.
Disqualification

Disqualification (sometimes abbreviated as "DQ") occurs when a wrestler violates the match's rules, thus losing automatically. Although a countout can technically be considered a disqualification (as it is, for all intents and purposes, an automatic loss suffered as a result of violating a match rule), the two concepts are often distinct in wrestling. A no disqualification match can still end by countout (although this is rare). Typically, a match must be declared a "no holds barred" match, a "street fight" or some other term in order for both disqualifications and countouts to be waived. Disqualification from a match is called for a number of reasons:

- Performing any illegal holds or maneuvers, such as refusing to break a hold when an opponent is in the ropes, hair-pulling, choking or biting an opponent, or repeatedly punching with a closed fist. These violations are usually subject to a referee-administered five count and will result in disqualification if the wrestler does not cease the offending behavior in time. Note that the ban on closed fists does not apply if the attacker is in midair when the punch connects, as with Jerry Lawler's diving fist drop or Roman Reigns's Superman Punch.
- Deliberate injury of an opponent, such as severe attacks to an opponent's eye by raking, poking, gouging or punching it. This was imposed when Sexy Star was disqualified for legitimately injuring Rosemary at AAA Triplemanía XXV by popping her arm out of its socket. This type of disqualification can also be grounds for stripping a wrestler of a championship, as AAA overturned the result of that AAA Women's Championship match, stripping her of the title.
- Any outside interference involving a person not involved in the match striking or holding a wrestler. Sometimes (depending on the promotion and the uniqueness of the situation), if a heel attempts to interfere but is ejected from the ring by a wrestler or referee before this occurs, there may not be a disqualification (All Elite Wrestling is known to use ejections, as AEW referees Earl Hebner and Aubrey Edwards have ejected numerous wrestlers during events, all for outside interference). In this disqualification method, the wrestler being attacked by the interfering party is awarded the win. Sometimes, however, this can work in a heel's favor. In February 2009, Shawn Michaels, who was under the kayfabe employment of John "Bradshaw" Layfield, interfered in a match and superkicked JBL in front of the referee to get his employer the win via "outside interference".
- Striking an opponent with a foreign object (an object not permitted by the rules of the match; see hardcore wrestling). Sometimes the win decision can be reversed if the referee spots the weapon before the pin attempt, or after the match because a wrestler tried to strike while the referee was either distracted or knocked out.
- Using any kind of "banned" move (see below for details).
- A direct low blow to the groin (unless the rules of the match specifically allow this).
- Intentionally laying hands on the referee.
- Pulling an opponent's mask off during a match (this is illegal in Mexico, and sometimes in Japan).
- Throwing an opponent over the top rope during a match (this move is still illegal in the National Wrestling Alliance; however, in cases like the Royal Rumble match, it is allowed in order to eliminate a wrestler from the match).
- In a mixed tag team match, a male wrestler hitting a female wrestler (intergender), or a normal-sized wrestler attacking an opposing midget wrestler (in tag team matches involving teams with one normal-sized and one midget wrestler).

In practice, not all rule violations will result in a disqualification, as the referee may use his own judgement and is not obligated to stop the match.
Usually, the only offenses that the referee will see and immediately disqualify a wrestler for (as opposed to requiring multiple offenses) are low blows, weapon usage, interference, or assaulting the referee. In WWE, a referee must see the violation with his own eyes to rule that the match end in a disqualification (simply watching the video tape is usually not enough), and the referee's ruling is almost always final, although dusty finishes (named after, and made famous by, Dusty Rhodes) will often result in the referee's decision being overturned. It is not uncommon for the referees themselves to get knocked out during a match, which is commonly referred to as a "ref bump". While the referee remains "unconscious", wrestlers are free to violate rules until he is revived or replaced. In some cases, a referee might disqualify a person under the presumption that it was that wrestler who knocked him out; most referee knockouts are arranged to allow a wrestler, usually a heel, to gain an advantage. For example, a wrestler may be whipped into a referee at a slower speed, knocking the referee down for a short amount of time; during that interim period, one wrestler may pin his opponent for a three-count and would have won the match but for the referee being down (sometimes, another referee will sprint to the ring from backstage to attempt to make the count, but by then the other wrestler has had enough time to kick out of his own accord). In most promotions, a championship title cannot normally change hands via disqualification; this rule is explicitly enforced in a title match under special circumstances. If all participants in a match continue to breach the referee's instructions, the match may end in a double disqualification, where both wrestlers or teams (in a tag team match) are disqualified. The match is essentially nullified and called a draw, or in some cases restarted, or the same match is held at a pay-per-view or the next night's show.
Sometimes, however, if this happens in a match to determine the challenger for a heel champion's title, the champion is forced to face both opponents simultaneously for the title. Usually, the double disqualification is caused by the heel wrestler's associates in a match between two face wrestlers to determine his opponent.

Forfeit

Although extremely rare, a match can end in a forfeit if the opponent either does not show up for the match, or shows up but refuses to compete. Although a championship usually cannot change hands except by pinfall or submission, a forfeit victory is enough to crown a new champion. A famous example of this happened on the December 8, 1997 episode of Raw is War, when Stone Cold Steve Austin handed the WWE Intercontinental Championship to The Rock after refusing to defend the title. When a pay-per-view match is booked and one wrestler is unable to make it for one reason or another, it is usually customary to insert a last-minute replacement rather than award a wrestler a victory by forfeit. Forfeit victories are almost always reserved for when the story the promotion is telling specifically requires such an ending. Despite it being, statistically, an extremely rare occurrence, Charles Wright is one wrestler who is famous for turning forfeit victories into his own gimmick. During the late 1990s, Wright called himself "The Godfather" and portrayed the gimmick of a pimp. He would often bring multiple women, whom he referred to as "hos", to the ring with him, and would offer the sexual services of these women to his opponents in exchange for them forfeiting their matches against him.

Draw

A professional wrestling match can end in a draw. A draw occurs if both opponents are simultaneously disqualified (via countout, or if the referee loses complete control of the match and both opponents attack each other with no regard to being in a match, like Brock Lesnar vs.
Undertaker at Unforgiven 2002), neither opponent is able to answer a ten-count, or both opponents simultaneously win the match. The latter can occur if, for example, one opponent's shoulders touch the mat while he maintains a submission hold against another opponent. If the opponent in the hold begins to tap out at the same time a referee counts to three for pinning the opponent delivering the hold, both opponents have legally achieved scoring conditions simultaneously. Traditionally, a championship may not change hands in the event of a draw (though it may become vacant), though some promotions, such as Impact Wrestling (formerly Total Nonstop Action (TNA) Wrestling), have endorsed rules where the champion may lose a title by disqualification. A variant of the draw is the time-limit draw, where the match does not have a winner by a specified time period (a one-hour draw, which was once common, is known in wrestling circles as a "Broadway"). Also, if both wrestlers are given a disqualification by either the referee or the chairman, the match is a no contest, and if there is a title on the line the champion retains the championship.

No contest

A wrestling match may be declared a no contest if the winning conditions are unable to occur. This can be due to excessive interference, loss of the referee's control over the match, one or more participants sustaining a debilitating injury not caused by the opponent, or the inability of a scheduled match to even begin. A no contest is a state separate and distinct from a draw: a draw indicates winning conditions were met. Although the terms are sometimes used interchangeably in practice, this usage is technically incorrect.

Dramatic elements

While each wrestling match is ostensibly a competition of athletics and strategy, the goal from a business standpoint is to excite and entertain the audience. Although the competition is staged, dramatic emphasis draws out the most intense reaction.
Heightened interest results in higher attendance, increased ticket sales, higher ratings on television broadcasts (greater ad revenue), higher pay-per-view buyrates, and sales of branded merchandise and recorded video footage. All of these contribute to the profit of the promotion company. Character/gimmick In Latin America and English-speaking countries, most wrestlers (and other on-stage performers) portray character roles, sometimes with personalities wildly different from their own. These personalities are a gimmick intended to heighten interest in a wrestler without regard to athletic ability. Some can be unrealistic and cartoon-like (such as Doink the Clown), while others carry more verisimilitude (such as Chris Jericho, The Rock, John Cena, Steve Austin, and CM Punk). In lucha libre, many characters wear masks, adopting a secret identity akin to a superhero or a supervillain, a near-sacred tradition. An individual wrestler may use his real name, or a minor variation of it, for much of his career, such as Bret Hart, John Cena and Randy Orton. Others can keep one ring name for their entire career (Shawn Michaels, CM Punk and Ricky Steamboat), or may change from time to time to better suit the demands of the audience or company. Sometimes a character is owned and trademarked by the company, forcing the wrestler to find a new one when he leaves (although a simple typeset change, such as changing Rhyno to Rhino, can get around this), and sometimes a character is owned by the wrestler. Sometimes, a wrestler may change his legal name to obtain ownership of his ring name (Andrew Martin and Warrior). Many wrestlers (such as The Rock and The Undertaker) are strongly identified with their character, even responding to the name in public or between friends. It's actually considered proper decorum for fellow wrestlers to refer to each other by their stage names/characters rather than their birth/legal names, unless otherwise introduced. 
A character can become so popular that it appears in other media (Hulk Hogan and El Santo) or even gives the performer enough visibility to enter politics (Antonio Inoki and Jesse Ventura). Typically, matches are staged between a protagonist (historically an audience favorite, known as a babyface, or "the good guy") and an antagonist (historically a villain with arrogance, a tendency to break rules, or other unlikable qualities, called a heel, or "the bad guy"). In recent years, however, antiheroes have also become prominent in professional wrestling. There is also a less common role of a "tweener", who is neither fully face nor fully heel yet able to play either role effectively (case in point, Samoa Joe during his first run in Impact Wrestling from June 2005 to November 2006). At times, a character may "turn", altering their face/heel alignment. This may be an abrupt, surprising event, or it may slowly build over time. It is almost always accomplished with a marked change in behavior. Some turns become defining points in a career, as when Hulk Hogan turned heel after being a top face for over a decade. Others may have no noticeable effect on the character's status. If a character repeatedly switches between face and heel, this lessens the effect of such turns, and may result in apathy from the audience. Big Show, for example, has had more heel and face turns than anyone else in WWE history. As with personae in general, a character's face or heel alignment may change with time, or remain constant over its lifetime (the most famous example of the latter is Ricky Steamboat, a WWE Hall of Famer who remained a babyface throughout his entire career).
Sometimes a character's heel turn will become so popular that eventually the audience response will alter the character's heel-face cycle to the point where the heel persona will, in practice, become a face persona, and what was previously the face persona will turn into the heel persona, such as when Dwayne Johnson first began using "The Rock" persona as a heel character, as opposed to his original "Rocky Maivia" babyface persona. Another legendary example is Stone Cold Steve Austin, who was originally booked as a heel, with such mannerisms as drinking on the job, using profanity, breaking company property, and even breaking into people's private homes. However, much to WWF's surprise, the fans' response to Austin was so positive that he effectively became one of the most popular antiheroes in professional wrestling. Austin, along with the stable of D-Generation X, Bret Hart and his Hart Foundation, is generally credited with ushering in the Attitude Era of WWF programming. Story While real exhibition matches are now not uncommon, most matches tell a story analogous to an episode of a serial drama: the face will from time to time win (triumph) or from time to time lose (tragedy), and longer story arcs can result from multiple matches. Since most promotions have a championship title, opposition for the championship is a frequent impetus for stories. For added stakes, anything from a character's own hair to his job can be wagered in a match. Some matches are designed to further the story of only one participant. It could be intended to portray an unstoppable force, a lucky underdog, a sore loser, or any other characterization. Sometimes non-wrestling vignettes are shown to enhance a character's image without the need for matches. Other stories result from a natural rivalry. Outside of performance, these are referred to as feuds. A feud can exist between any number of participants and can last from a few days to decades.
The feud between Ric Flair and Ricky Steamboat lasted from the late 1970s into the early 1990s and allegedly spanned over two thousand matches (although most of those matches were mere dark matches). The career-spanning history between characters Mike Awesome and Masato Tanaka is another example of a long-running feud, as is the case of Steve Austin vs. Vince McMahon, one of the most lucrative feuds in the World Wrestling Federation during 1998 and 1999. In theory, the longer a feud is built up, the more audience interest, or "heat", it generates. The main event of a wrestling show is generally the most heated. Commonly, a heel will hold the upper hand over a face until a final showdown, heightening dramatic tension as the face's fans desire to see him win. Throughout the history of professional wrestling, many other elements of media have been utilized in its storytelling: pre- and post-match interviews, "backstage" skits, positions of authority and worked behind-the-scenes feuds, division rankings (typically the #1-contendership spot), contracts, lotteries, news stories on websites, and in recent years social media. Also, anything that can be used as an element of drama can exist in professional wrestling stories: romantic relationships (including love triangles and marriage), racism, classism, nepotism, favoritism, corporate corruption, family bonds, personal histories, grudges, theft, cheating, assault, betrayal, bribery, seduction, stalking, confidence tricks, extortion, blackmail, substance abuse, self-doubt, self-sacrifice; even kidnapping, sexual fetishism, necrophilia, misogyny, rape and death have been portrayed in wrestling. Some promotions have included supernatural elements such as magic, curses, the undead and Satanic imagery (most notably the Undertaker and his Ministry of Darkness, a stable that regularly performed evil rituals and human sacrifice in Satanic-like worship of a hidden power figure).
Celebrities have also been involved in storylines. Commentators have become important in communicating the relevance of the characters' actions to the story at hand, filling in past details and pointing out subtle actions that may otherwise go unnoticed. Promos A central part of wrestling storytelling is the promo, short for promotional interview. Promos are performed, or "cut" in wrestling jargon, for a variety of reasons, including to heighten interest in a wrestler, or to hype an upcoming match. Since the crowd is often too loud or the venue too large for promos to be heard naturally, wrestlers will use amplification when speaking in the ring. Unlike most Hollywood acting, large and highly visible handheld microphones are typically used and wrestlers often speak directly to the audience. Championships Professional wrestling mimics the structure of title match combat sports. Participants compete for a championship and must defend it after winning it. These titles are represented physically by a title belt that can be worn by the champion. In the case of team wrestling, there is a title belt for each member of the team. Almost all professional wrestling promotions have one major title, and some have more. Championships are designated by divisions of weight, height, gender, wrestling style and other qualifications. Typically, each promotion only recognizes the "legitimacy" of its own titles, although cross-promotion does happen. When one promotion absorbs or purchases another, the titles from the defunct promotion may continue to be defended in the new promotion or be decommissioned. Behind the scenes, the bookers in a company will place the title on the most accomplished performer, or those the bookers believe will generate fan interest in terms of event attendance and television viewership.
Historically, a world champion was typically a legit shooter/hooker who had the skills to prevent double crosses by would-be shooters who would deviate from the planned finish for personal glory. Lower-ranked titles may also be used on performers who show potential, thus allowing them greater exposure to the audience. However, other circumstances may also determine the use of a championship. A combination of a championship's lineage, the caliber of performers as |
a pure state, in polar coordinates, the idempotent density matrix acts on the state eigenvector with eigenvalue +1, hence it acts like a projection operator. Relation with the permutation operator Let be the transposition (also known as a permutation) between two spins and living in the tensor product space This operator can also be written more explicitly as Dirac's spin exchange operator, Its eigenvalues are therefore 1 or −1. It may thus be utilized as an interaction term in a Hamiltonian, splitting the energy eigenvalues of its symmetric versus antisymmetric eigenstates. SU(2) The group SU(2) is the Lie group of 2 × 2 unitary matrices with unit determinant; its Lie algebra is the set of all 2 × 2 anti-Hermitian matrices with trace 0. Direct calculation, as above, shows that the Lie algebra is the 3-dimensional real algebra spanned by the set . In compact notation, As a result, each can be seen as an infinitesimal generator of SU(2). The elements of SU(2) are exponentials of linear combinations of these three generators, and multiply as indicated above in discussing the Pauli vector. Although this suffices to generate SU(2), it is not a proper representation of , as the Pauli eigenvalues are scaled unconventionally. The conventional normalization is so that As SU(2) is a compact group, its Cartan decomposition is trivial. SO(3) The Lie algebra is isomorphic to the Lie algebra , which corresponds to the Lie group SO(3), the group of rotations in three-dimensional space. In other words, one can say that the are a realization (and, in fact, the lowest-dimensional realization) of infinitesimal rotations in three-dimensional space. However, even though and are isomorphic as Lie algebras, and are not isomorphic as Lie groups. is actually a double cover of , meaning that there is a two-to-one group homomorphism from to , see relationship between SO(3) and SU(2).
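The spin exchange operator described above can be checked numerically. A minimal sketch using NumPy, assuming the standard basis convention for the Pauli matrices: on the two-spin space the operator should exchange the two tensor factors, with eigenvalue +1 on the symmetric (triplet) states and −1 on the antisymmetric (singlet) state.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac's spin exchange operator on the two-spin space C^2 (x) C^2:
#   P = (I (x) I + sum_a sigma_a (x) sigma_a) / 2
P = 0.5 * (np.kron(I2, I2) + sum(np.kron(s, s) for s in (sx, sy, sz)))

# Its spectrum is {+1, +1, +1, -1}: +1 on the symmetric (triplet) states,
# -1 on the antisymmetric (singlet) state.
eigs = np.linalg.eigvalsh(P)

# And it exchanges the two tensor factors: P (u (x) v) = v (x) u.
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
swapped = P @ np.kron(u, v)
```

Because its square is the identity, exponentials of the exchange term in a Hamiltonian are easy to evaluate, which is part of why it is convenient as an interaction term.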
Quaternions The real linear span of is isomorphic to the real algebra of quaternions ℍ, represented by the span of the basis vectors The isomorphism from ℍ to this set is given by the following map (notice the reversed signs for the Pauli matrices): Alternatively, the isomorphism can be achieved by a map using the Pauli matrices in reversed order, As the set of versors forms a group isomorphic to , gives yet another way of describing . The two-to-one homomorphism from to may be given in terms of the Pauli matrices in this formulation. Physics Classical mechanics In classical mechanics, Pauli matrices are useful in the context of the Cayley-Klein parameters. The matrix corresponding to the position of a point in space is defined in terms of the above Pauli vector matrix, Consequently, the transformation matrix for rotations about the -axis through an angle may be written in terms of Pauli matrices and the unit matrix as Similar expressions follow for general Pauli vector rotations as detailed above. Quantum mechanics In quantum mechanics, each Pauli matrix is related to an angular momentum operator that corresponds to an observable describing the spin of a spin-1/2 particle, in each of the three spatial directions. As an immediate consequence of the Cartan decomposition mentioned above, are the generators of a projective representation (spin representation) of the rotation group SO(3) acting on non-relativistic particles with spin 1/2. The states of the particles are represented as two-component spinors. In the same way, the Pauli matrices are related to the isospin operator. An interesting property of spin-1/2 particles is that they must be rotated by an angle of 4π in order to return to their original configuration.
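The 4π periodicity just described can be made concrete. A sketch, assuming the standard σz convention, using the closed form exp(−iθσz/2) = cos(θ/2)I − i sin(θ/2)σz for a spin-1/2 rotation about the z-axis:

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot_z(theta):
    """Spin-1/2 rotation about z: exp(-i theta sz/2) = cos(theta/2) I - i sin(theta/2) sz."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sz

# A full 2*pi turn multiplies the spinor by -1; only a 4*pi turn is the identity.
U2pi = rot_z(2 * np.pi)
U4pi = rot_z(4 * np.pi)
```

A classical rotation returns to the identity after 2π, but the spinor picks up a factor of −1 there; only after 4π is the original state recovered.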
This is due to the two-to-one correspondence between SU(2) and SO(3) mentioned above, and the fact that, although one visualizes spin up/down as the north/south pole on the 2-sphere, they are actually represented by orthogonal vectors in the two-dimensional complex Hilbert space. For a spin-1/2 particle, the spin operator is given by , the fundamental representation of SU(2). By taking Kronecker products of this representation with itself repeatedly, one may construct all higher irreducible representations. That is, the resulting spin operators for higher spin systems in three spatial dimensions, for arbitrarily large j, can be calculated using this spin operator and ladder operators. The analog formula to the above generalization of Euler's formula for Pauli matrices, the group element in terms of spin matrices, is tractable, but less simple. Also useful in the quantum mechanics of multiparticle systems, the general Pauli group is defined to consist of all -fold | the Pauli vector, namely rotation effectively by double the angle , Completeness relation An alternative notation that is commonly used for the Pauli matrices is to write the vector index in the superscript, and the matrix indices as subscripts, so that the element in row and column of the -th Pauli matrix is In this notation, the completeness relation for the Pauli matrices can be written Proof: The fact that the Pauli matrices, along with the identity matrix , form an orthogonal basis for the Hilbert space of all 2 × 2 complex matrices means that we can express any matrix as where is a complex number, and is a 3-component, complex vector. It is straightforward to show, using the properties listed above, that where "" denotes the trace, and hence that which can be rewritten in terms of matrix indices as where summation over the repeated indices is implied and . Since this is true for any choice of the matrix , the completeness relation follows as stated above.
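The completeness relation derived above, which in index form reads sum_a sigma^a_ij sigma^a_kl = 2 d_il d_jk - d_ij d_kl, involves only finitely many index combinations, so it can be verified exhaustively. A brute-force sketch, assuming the standard Pauli convention:

```python
import itertools
import numpy as np

# Standard Pauli matrices (assumed convention).
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_z
]

def delta(a, b):
    return 1.0 if a == b else 0.0

# Completeness relation: sum_a sigma^a_ij sigma^a_kl = 2 d_il d_jk - d_ij d_kl,
# checked over all 2^4 combinations of the indices i, j, k, l.
max_err = 0.0
for i, j, k, l in itertools.product(range(2), repeat=4):
    lhs = sum(s[i, j] * s[k, l] for s in sigma)
    rhs = 2 * delta(i, l) * delta(j, k) - delta(i, j) * delta(k, l)
    max_err = max(max_err, abs(lhs - rhs))
```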
As noted above, it is common to denote the 2 × 2 unit matrix by so The completeness relation can alternatively be expressed as The fact that any Hermitian complex 2 × 2 matrix can be expressed in terms of the identity matrix and the Pauli matrices also leads to the Bloch sphere representation of 2 × 2 mixed states' density matrices (positive semidefinite 2 × 2 matrices with unit trace). This can be seen by first expressing an arbitrary Hermitian matrix as a real linear combination of as above, and then imposing the positive-semidefinite and trace conditions.
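The Bloch-sphere construction described above can also be checked numerically: a density matrix built from a unit Bloch vector is idempotent, hence a rank-one projector onto a pure state. A sketch; `bloch_density` is an illustrative helper name, not from the text.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_density(theta, phi):
    """rho = (I + n . sigma) / 2 for the unit Bloch vector n(theta, phi)."""
    n = (np.sin(theta) * np.cos(phi),
         np.sin(theta) * np.sin(phi),
         np.cos(theta))
    return 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)

# Any point on the surface of the Bloch sphere gives a pure state:
# rho is Hermitian, has unit trace, and satisfies rho^2 = rho.
rho = bloch_density(0.7, 1.3)
```

Points strictly inside the sphere (|n| < 1) instead give mixed states, for which rho squared no longer equals rho.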
inactive center and works best with stylus input, and well with a mouse. Pie slices are drawn with a hole in the middle for an easy way to exit the menu. Pie menus work well with keyboard acceleration, particularly four- and eight-item menus, on the cursor keys and the number pad. A goal of pie menus is to provide a smooth, reliable gestural style of interaction for novices and experts. A slice can lead to another pie menu; selecting this may center the pointer in the new menu. A marking menu is a variant of this technique that makes the menu less sensitive to variance in gesture size. As a kind of context menu, pie menus are often context-sensitive, showing different options depending on what the pointer was pointing at when the menu was requested. History The first documented radial menu is attributed to a system called PIXIE in 1969. Some universities explored alternative visual layouts. In 1986, Mike Gallaher and Don Hopkins together arrived, independently of earlier work, at the concept of a context menu based on the angle to the origin, where the exact angle and radius could be passed as parameters to a command, and a mouse click could be used to trigger an item or submenu. The first performance comparison to linear menus was performed in 1988, showing a 15% reduction in selection time and a reduction in selection errors. The role-playing video game Secret of Mana featured an innovative icon-based radial menu system in 1993. Its ring menu system was adopted by later video games. Usage For novice users, pie menus are easy because they are a self-revealing gestural interface: they show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the pointer in the desired direction, then clicking to make a selection, users learn the menu and practice the gesture to "mark ahead" ("mouse ahead" in the case of a mouse, "wave ahead" in the case of a dataglove).
With a little practice, it becomes quite easy to mark ahead even through nested pie menus. For experts, pie menus are more efficient: having built up muscle memory for certain menu actions, they can select the option they want without looking at the pop-up selections. In some cases, the pie menu only pops up on the screen when it is used more slowly, like a traditional menu, to reveal the available selections. Moreover, novices can gradually become experts as they practice the same pie menu selection many times and start to remember the menu and the motion. As Jaron Lanier of VPL Research has remarked, "The mind may forget, but the body remembers." Pie menus take advantage of the body's ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels. Comparison with other interaction techniques Pie menus are faster and more reliable to select from than linear menus, because selection depends on direction instead of distance. The circular menu slices are large in size and near the pointer for fast interaction (see Fitts's law). Experienced users use muscle memory without looking at the menu while selecting from it. Nested pie menus can efficiently offer many options, and some pie menus can pop up linear menus, and combine linear and radial items in the same menu. Pie menus, just like any popup menu, are shown only when requested, resulting in less visual distraction and cognitive load than toolbars and menu bars that are always shown. Pie menus show available options, in contrast to invisible mouse gestures. Pie menus, which delay appearance until the pointer is not moving, reduce intrusiveness to the same level as mouse gestures for experienced users. Pie menus take up more screen space than linear menus, and the number of slices in an individual menu must be kept low for effectiveness by using submenus.
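The direction-based selection that gives pie menus their speed advantage reduces to a single angle computation. A minimal sketch; the function and parameter names are hypothetical, not from any particular toolkit:

```python
import math

def pie_slice(dx, dy, n_slices, dead_radius=8.0):
    """Map a pointer offset (dx, dy) from the menu center to a slice index.

    Selection depends only on direction, never on distance. Slice 0 is
    centered on 'up' and slices proceed clockwise. Returns None while the
    pointer is still inside the inactive center.
    """
    if math.hypot(dx, dy) < dead_radius:
        return None  # pointer has not left the inactive center
    # Screen y grows downward, so 'up' is -y; measure the angle clockwise from up.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    width = 2 * math.pi / n_slices
    # Shift by half a slice so slice 0 straddles the up direction.
    return int(((angle + width / 2) % (2 * math.pi)) // width)
```

In an eight-slice menu this maps up, right, down and left to slices 0, 2, 4 and 6, matching the cursor-key and number-pad acceleration mentioned above; the dead_radius models the inactive center used to exit without selecting.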
When using pie menus, submenus may overlap with the parent menu, but the parent menu may become translucent or hidden. Pie menus are most suited for actions that have been laid out by humans, and have logical grouping choices. Linear menus are most suited for dynamic, large menus that have many possible options, without any logical grouping, since pie menus can only show a limited number of menu items. Around 3-12 items can be reasonably accommodated in a radial layout, but additional items past that tend to counteract the benefits of using pie menus in the first place. This can be overcome with related techniques that allow chaining commands in one single gesture through submenus. However, using interaction techniques that are not pointer-based has proven problematic with both pie and linear menus for cluttered digital tabletops, where physical objects might occlude menu items. Pie menus are unavailable as a standard graphical control element in common commercial toolkits. Video games often require custom widget development anyway, so the cost of implementing a pie menu is lower in that scenario. Notable implementations Secret of Mana and its successor Secret of Evermore (where the menu was used to accelerate the pacing of combat) Blender, an open source
20th and 21st century The primitive conditions were intolerable for a world national capital, and the Imperial German government brought in its scientists, engineers, and urban planners not only to solve the deficiencies but to forge Berlin as the world's model city. A British expert in 1906 concluded that Berlin represented "the most complete application of science, order and method of public life," adding "it is a marvel of civic administration, the most modern and most perfectly organized city that there is." The emergence of great factories and consumption of immense quantities of coal gave rise to unprecedented air pollution, and the large volume of industrial chemical discharges added to the growing load of untreated human waste. Chicago and Cincinnati were the first two American cities to enact laws ensuring cleaner air in 1881. Pollution became a major issue in the United States in the early twentieth century, as progressive reformers took issue with air pollution caused by coal burning, water pollution caused by bad sanitation, and street pollution caused by the 3 million horses that worked in American cities in 1900, generating large quantities of urine and manure. As historian Martin Melosi notes, the generation that first saw automobiles replacing the horses saw cars as "miracles of cleanliness". By the 1940s, however, automobile-caused smog was a major issue in Los Angeles. Other cities followed around the country until early in the 20th century, when the short-lived Office of Air Pollution was created under the Department of the Interior. Extreme smog events were experienced by the cities of Los Angeles and Donora, Pennsylvania in the late 1940s, serving as another public reminder. Air pollution would continue to be a problem in England, especially during the Industrial Revolution, extending into the recent past with the Great Smog of 1952.
Awareness of atmospheric pollution spread widely after World War II, with fears triggered by reports of radioactive fallout from atomic warfare and testing. Then a non-nuclear event – the Great Smog of 1952 in London – killed at least 4000 people. This prompted some of the first major modern environmental legislation: the Clean Air Act of 1956. Pollution began to draw major public attention in the United States between the mid-1950s and early 1970s, when Congress passed the Noise Control Act, the Clean Air Act, the Clean Water Act, and the National Environmental Policy Act. Severe incidents of pollution helped increase consciousness. PCB dumping in the Hudson River resulted in a ban by the EPA on consumption of its fish in 1974. National news stories in the late 1970s – especially the long-term dioxin contamination at Love Canal starting in 1947 and uncontrolled dumping in Valley of the Drums – led to the Superfund legislation of 1980. The pollution of industrial land gave rise to the name brownfield, a term now common in city planning. The development of nuclear science introduced radioactive contamination, which can remain lethally radioactive for hundreds of thousands of years. Lake Karachay – named by the Worldwatch Institute as the "most polluted spot" on earth – served as a disposal site for the Soviet Union throughout the 1950s and 1960s. Chelyabinsk, Russia, is considered the "most polluted place on the planet". Nuclear weapons continued to be tested in the Cold War, especially in the earlier stages of their development. The toll on the worst-affected populations, and the growth since then in understanding of the critical threat radioactivity poses to human health, have also been prohibitive complications associated with nuclear power. Though extreme care is practiced in that industry, the potential for disaster suggested by incidents such as those at Three Mile Island, Chernobyl, and Fukushima poses a lingering specter of public mistrust.
Publicity surrounding those disasters has been intense worldwide. Widespread support for test ban treaties has ended almost all nuclear testing in the atmosphere. International catastrophes such as the wreck of the Amoco Cadiz oil tanker off the coast of Brittany in 1978 and the Bhopal disaster in 1984 have demonstrated the universality of such events and the scale on which efforts to address them needed to engage. The borderless nature of the atmosphere and oceans inevitably brought pollution to a planetary scale, as with the issue of global warming. Most recently, the term persistent organic pollutant (POP) has come to describe a group of chemicals such as PBDEs and PFCs among others. Though their effects remain somewhat less well understood owing to a lack of experimental data, they have been detected in various ecological habitats far removed from industrial activity, such as the Arctic, demonstrating diffusion and bioaccumulation after only a relatively brief period of widespread use. A much more recently discovered problem is the Great Pacific Garbage Patch, a huge concentration of plastics, chemical sludge and other debris which has been collected into a large area of the Pacific Ocean by the North Pacific Gyre. This is a less well-known pollution problem than the others described above, but nonetheless has multiple and serious consequences such as increasing wildlife mortality, the spread of invasive species and human ingestion of toxic chemicals. Organizations such as 5 Gyres have researched the pollution and, along with artists like Marina DeBris, are working toward publicizing the issue. Pollution introduced by light at night is becoming a global problem; it is more severe in urban centres but also contaminates large territories far away from towns.
Growing evidence of local and global pollution and an increasingly informed public over time have given rise to environmentalism and the environmental movement, which generally seek to limit human impact on the environment. 
dioxide, chlorofluorocarbons (CFCs) and nitrogen oxides produced by industry and motor vehicles. Photochemical ozone and smog are created as nitrogen oxides and hydrocarbons react with sunlight. Particulate matter, or fine dust, is characterized by its micrometre size, from PM10 to PM2.5. Electromagnetic pollution: the overabundance of electromagnetic radiation in its non-ionizing form, such as radio waves, to which people are constantly exposed, especially in large cities. It is still unknown, however, whether these types of radiation have any effect on human health. Light pollution: includes light trespass, over-illumination and astronomical interference. Littering: the criminal discarding of inappropriate man-made objects, left unremoved, on public and private property. Noise pollution: encompasses roadway noise, aircraft noise, industrial noise as well as high-intensity sonar. Plastic pollution: the accumulation of plastic products and microplastics in the environment that adversely affects wildlife, wildlife habitat, or humans. Soil contamination occurs when chemicals are released by spill or underground leakage. Among the most significant soil contaminants are hydrocarbons, heavy metals, MTBE, herbicides, pesticides and chlorinated hydrocarbons. Radioactive contamination, resulting from 20th-century activities in atomic physics, such as nuclear power generation and nuclear weapons research, manufacture and deployment. (See alpha emitters and actinides in the environment.) Thermal pollution: a temperature change in natural water bodies caused by human influence, such as the use of water as coolant in a power plant. 
Visual pollution, which can refer to the presence of overhead power lines, motorway billboards, scarred landforms (as from strip mining), open storage of trash, municipal solid waste or space debris. Water pollution, by the discharge of industrial wastewater from commercial and industrial waste (intentionally or through spills) into surface waters; discharges of untreated sewage, and chemical contaminants, such as chlorine, from treated sewage; release of waste and contaminants into surface runoff flowing to surface waters (including urban runoff and agricultural runoff, which may contain chemical fertilizers and pesticides; also including human feces from open defecation – still a major problem in many developing countries); groundwater pollution from waste disposal and leaching into the ground, including from pit latrines and septic tanks; eutrophication and littering. Pollutants A pollutant is a waste material that pollutes air, water, or soil. The severity of a pollutant is determined by its chemical nature, its concentration, the area affected, and its persistence. Sources and causes Air pollution comes from both natural and human-made (anthropogenic) sources. However, globally, human-made pollutants from combustion, construction, mining, agriculture and warfare are increasingly significant in the air pollution equation. Motor vehicle emissions are one of the leading causes of air pollution. China, the United States, Russia, India, Mexico, and Japan are the world leaders in air pollution emissions. Principal stationary pollution sources include chemical plants, coal-fired power plants, oil refineries, petrochemical plants, nuclear waste disposal activity, incinerators, large livestock farms (dairy cows, pigs, poultry, etc.), PVC factories, metals production factories, plastics factories, and other heavy industry. 
Agricultural air pollution comes from contemporary practices which include clear felling and burning of natural vegetation as well as spraying of pesticides and herbicides. About 400 million metric tons of hazardous wastes are generated each year. The United States alone produces about 250 million metric tons. Americans constitute less than 5% of the world's population, but produce roughly 25% of the world's CO2, and generate approximately 30% of the world's waste. In 2007, China overtook the United States as the world's biggest producer of CO2, while still far behind based on per capita pollution (ranked 78th among the world's nations). Some of the more common soil contaminants are chlorinated hydrocarbons (CFH), heavy metals (such as chromium, cadmium – found in rechargeable batteries, and lead – found in lead paint, aviation fuel and, still in some countries, gasoline), MTBE, zinc, arsenic and benzene. In 2001 a series of press reports culminating in a book called Fateful Harvest unveiled a widespread practice of recycling industrial byproducts into fertilizer, resulting in the contamination of the soil with various metals. Ordinary municipal landfills are the source of many chemical substances entering the soil environment (and often groundwater), emanating from the wide variety of refuse accepted, especially substances illegally discarded there, or from pre-1970 landfills that may have been subject to little control in the U.S. or EU. There have also been some unusual releases of polychlorinated dibenzodioxins, commonly called dioxins for simplicity, such as TCDD. Pollution can also be the consequence of a natural disaster. For example, hurricanes often involve water contamination from sewage, and petrochemical spills from ruptured boats or automobiles. Damage on a larger scale is not uncommon when coastal oil rigs or refineries are involved. 
Some sources of pollution, such as nuclear power plants or oil tankers, can produce widespread and potentially hazardous releases when accidents occur. In the case of noise pollution the dominant source class is the motor vehicle, producing about ninety percent of all unwanted noise worldwide. Greenhouse gases emissions Carbon dioxide, while vital for photosynthesis, is sometimes referred to as pollution, because raised levels of the gas in the atmosphere are affecting the Earth's climate. Disruption of the environment can also highlight the connection between areas of pollution that would normally be classified separately, such as those of water and air. Recent studies have investigated the potential for long-term rising levels of atmospheric carbon dioxide to cause slight but critical increases in the acidity of ocean waters, and the possible effects of this on marine ecosystems. In February 2007, a report by the Intergovernmental Panel on Climate Change (IPCC), representing the work of 2,500 scientists, economists, and policymakers from more than 120 countries, confirmed that humans have been the primary cause of global warming since 1950. Humans have ways to cut greenhouse gas emissions and avoid the consequences of global warming, a major climate report concluded. But to limit climate change, the transition away from fossil fuels like coal and oil needs to occur within decades, according to the final report this year from the UN's Intergovernmental Panel on Climate Change (IPCC). Effects Human health Adverse air quality can kill many organisms, including humans. Ozone pollution can cause respiratory disease, cardiovascular disease, throat inflammation, chest pain, and congestion. Water pollution causes approximately 14,000 deaths per day, mostly due to contamination of drinking water by untreated sewage in developing countries. 
An estimated 500 million Indians have no access to a proper toilet. Over ten million people in India fell ill with waterborne illnesses in 2013, and 1,535 people died, most of them children. Nearly 500 million Chinese lack access to safe drinking water. A 2010 analysis estimated that 1.2 million people died prematurely each year in China because of air pollution. The high smog levels China has been facing for a long time can damage civilians' bodies and cause various diseases. The WHO estimated in 2007 that air pollution causes half a million deaths per year in India. Studies have estimated that the number of people killed annually in the United States could be over 50,000. Oil spills can cause skin irritations and rashes. Noise pollution induces hearing loss, high blood pressure, stress, and sleep disturbance. Mercury has been linked to developmental deficits in children and neurologic symptoms. Older people are especially vulnerable to diseases induced by air pollution. Those with heart or lung disorders are at additional risk. Children and infants are also at serious risk. Lead and other heavy metals have been shown to cause neurological problems. Chemical and radioactive substances can cause cancer as well as birth defects. An October 2017 study by the Lancet Commission on Pollution and Health found that global pollution, specifically toxic air, water, soils and workplaces, kills nine million people annually, which is triple the number of deaths caused by AIDS, tuberculosis and malaria combined, and 15 times higher than deaths caused by wars and other forms of human violence. The study concluded that "pollution is one of the great existential challenges of the Anthropocene era. Pollution endangers the stability of the Earth’s support systems and threatens the continuing survival of human societies." Environment Pollution has been found to be present widely in the environment. 
There are a number of effects of this: Biomagnification describes situations where toxins (such as heavy metals) may pass through trophic levels, becoming exponentially more concentrated in the process. Carbon dioxide emissions cause ocean acidification, the ongoing decrease in the pH of the Earth's oceans as CO2 becomes dissolved. The emission of greenhouse gases leads to global warming which affects ecosystems in many ways. Invasive species can outcompete native species and reduce biodiversity. Invasive plants can contribute debris and biomolecules (allelopathy) that can alter soil and chemical compositions of an environment, often reducing native species competitiveness. Nitrogen oxides are removed from the air by rain and fertilise land, which can change the species composition of ecosystems. Smog and haze can reduce the amount of sunlight received by plants to carry out photosynthesis and lead to the production of tropospheric ozone, which damages plants. Soil can become infertile and unsuitable for plants. This will affect other organisms in the food web. Sulfur dioxide and nitrogen oxides can cause acid rain, which lowers the pH value of soil. Organic pollution of watercourses can deplete oxygen levels and reduce species diversity. A 2022 study published in Environmental Science & Technology found that levels of anthropogenic chemical pollution have exceeded planetary boundaries and now threaten entire ecosystems around the world. Environmental health information The Toxicology and Environmental Health Information Program (TEHIP) at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. 
TEHIP is also responsible for the Toxicology Data Network (TOXNET), an integrated system of toxicology and environmental health databases that are available free of charge on the web. TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. Crime A 2021 study found that exposure to pollution causes an increase in violent crime. School outcomes A 2019 paper linked pollution to adverse school outcomes for children. Worker productivity A number of studies show that pollution has an adverse effect on the productivity of both indoor and outdoor workers. Regulation and monitoring To protect the environment from the adverse effects of pollution, many nations worldwide have enacted legislation to regulate various types of pollution as well as to mitigate the adverse effects of pollution. Pollution control Pollution control is a term used in environmental management. It means the control of emissions and effluents into air, water or soil. Without pollution control, the waste products from overconsumption, heating, agriculture, mining, manufacturing, transportation and other human activities, whether they accumulate or disperse, will degrade the environment. In the hierarchy of controls, pollution prevention and waste minimization are more desirable than pollution control. In the field of land development, low impact development is a similar technique for the prevention of urban runoff. Practices Recycling Reusing Waste minimisation Mitigating Pollution prevention Compost Pollution control devices Air pollution control Thermal oxidizer Dust collection systems Baghouses Cyclones Electrostatic precipitators Scrubbers Baffle spray scrubber Cyclonic spray scrubber Ejector venturi scrubber Mechanically aided scrubber Spray tower 
extended reach and thrusting tactics used in pike square or phalanx combat; those designed to increase leverage (thanks to hands moving freely on a pole) to maximize centrifugal force against cavalry; and those designed for throwing tactics used in skirmish line combat. Because of their versatility, high effectiveness and low cost, experimentation with polearms produced many variants, and they were the most frequently used weapons on the battlefield: bills, picks, dane axes, spears, glaives, guandaos, pudaos, poleaxes, halberds, harpoons, sovnyas, tridents, naginatas, bardiches, war scythes, and lances are all varieties of pole arms. Pole arms were common weapons on post-classical battlefields of Asia and Europe. Their range and impact force made them effective weapons against armored warriors on horseback, who could be dismounted and/or whose armor could be penetrated. The Renaissance saw a plethora of varieties. Pole arms in modern times are largely constrained to ceremonial military units such as the Papal Swiss Guard or Yeomen of the Guard, or traditional martial arts. Chinese martial arts in particular have preserved a wide variety of weapons and techniques. Classification difficulties The classification of pole weapons can be difficult, and European weapon classifications in particular can be confusing. This can be due to a number of factors, including uncertainty in original descriptions, changes in weapons or nomenclature through time, mistranslation of terms, and the well-meaning inventiveness of later experts. For example, the word "halberd" is also used to translate the Chinese ji and also a range of medieval Scandinavian weapons as described in sagas, such as the atgeir. As well, all pole arms developed from three early tools (the axe, the scythe, and the knife) and one weapon, the spear. In the words of the arms expert Ewart Oakeshott, While men-at-arms may have been armed with custom designed military weapons, militias were often armed with whatever was available. 
These may or may not have been mounted on poles and described by one or more names. The problems with precise definitions can be inferred from a contemporary description of the Royalist infantry engaged in the Battle of Birmingham (1643) during the first year of the English Civil War (in the early modern period). The infantry regiment that accompanied Prince Rupert's cavalry were armed: List of pole weapons Ancient pole weapons European Falx Rhomphaia Kontos (weapon) Dory (spear) Sarissa Xyston Asian Dagger-axe The dagger-axe, or gee (Chinese: 戈; pinyin: gē; Wade–Giles: ko; sometimes confusingly translated "halberd") is a type of weapon that was in use from the Shang dynasty until at least the Han dynasty in China. It consists of a dagger-shaped blade made of bronze (or later iron) mounted by the tang to a perpendicular wooden shaft: a common Bronze Age infantry weapon, also used by charioteers. Some dagger axes include a spear-point. There is a (rare) variant type with a divided two-part head, consisting of the usual straight blade and a scythe-like blade. Other rarities include archaeological finds with two or sometimes three blades stacked in line on top of a pole, though these are generally thought to have been ceremonial pole arms. Though the weapon saw frequent use in ancient China, the use of the dagger-axe decreased dramatically after the Qin and Han dynasties. The ji combines the dagger-axe with a spear. By the post-classical Chinese dynasties, with the decline of chariot warfare, the use of the dagger-axe was almost nonexistent. Ngao The ngao or ngau (ง้าว,ของ้าว) is a Thai pole arm that was traditionally used by elephant-riding infantry and is still used by practitioners of krabi krabong. 
Known in Malay as a dap, it consists of a wooden shaft with a curved blade fashioned onto the end, and is similar in design to the Korean woldo. Usually, it also had a hook (ขอ) between the blade and shaft used for commanding the elephant. The elephant warrior used the ngao like a blade from atop an elephant or horse during battle. Post-classical pole weapons European Danish axe The Dane axe is a weapon with a heavy crescent-shaped head mounted on a long haft. Originally a Viking weapon, it was adopted by the Anglo-Saxons and Normans in the 11th century, spreading through Europe in the 12th and 13th centuries. Variants of this basic weapon continued in use in Scotland and Ireland into the 16th century. A form of 'long axe'. Sparth axe In the 13th century, variants on the Danish axe are seen. Described in English as a sparth (from the Old Norse) or pale-axe, the weapon featured a larger head with a broader blade, the rearward part of the crescent sweeping up to contact (or even be attached to) the haft. In Ireland, this axe was known as a sparr axe. Originating in either western Scotland or Ireland, the sparr was widely used by the galloglass. Although sometimes said to derive from the Irish for a joist or beam, a more likely definition is as a variant of sparth. Although attempts have been made to suggest that the sparr had a distinctively shaped head, illustrations and surviving weapons show there was considerable variation, and the distinctive feature of the weapon was its long haft. Fauchard A fauchard is a type of pole arm which was used in medieval Europe from the 11th through the 14th centuries. The design consisted of a curved blade put atop a pole. The blade bore a moderate to strong curve along its length; however, unlike a bill or guisarme, the cutting edge was on the convex side. Guisarme A guisarme (sometimes gisarme, giserne or bisarme) was a pole weapon used in Europe primarily between 1000 and 1400. 
It was used primarily to dismount knights and horsemen. Like most pole arms it was developed by peasants by combining hand tools with long poles, in this case by putting a pruning hook onto a spear shaft. While hooks are fine for dismounting horsemen from mounts, they lack the stopping power of a spear, especially when dealing with static opponents. While early designs were simply a hook on the end of a long pole, later designs implemented a small reverse spike on the back of the blade. Eventually weapon makers incorporated the usefulness of the hook in a variety of different pole arms, and guisarme became a catch-all for any weapon that included a hook on the blade. Ewart Oakeshott has proposed an alternative description of the weapon as a crescent shaped socketed axe. Glaive A glaive is a pole arm consisting of a single-edged tapering blade similar in shape to a modern kitchen knife on the end of a pole. However, instead of having a tang like a sword or naginata, the blade is affixed in a socket-shaft configuration similar to an axe head, both the blade and shaft varying in length. Illustrations in the 13th century Maciejowski Bible show a short staffed weapon with a long blade used by both infantry and cavalry. 
Occasionally glaive blades were created with a small hook or spike on the reverse side. 
PHD or PhD may refer to: Doctor of Philosophy (PhD), an academic qualification Entertainment PhD: Phantasy Degree, a Korean comic series Piled Higher and Deeper, a web comic Ph.D. (band), a 1980s British group Ph.D. (Ph.D. album) Ph.D. (Art Farmer album) "PHD", a song on the album Tweekend by the Crystal Method PHD Flopper, a popular perk drink 
Violations by a rogue state could be checked via collateral attack: when a plaintiff sought recovery against a defendant's assets in another state, that state could refuse judgment on the grounds that the original judgment was invalid. Difficulties in applying Pennoyer territorial jurisdiction Following Pennoyer, extreme applications of territorial jurisdiction revealed imperfections in the doctrine, and societal changes began to present new problems as the United States' national economy became more integrated by increasingly efficient multi-state transportation technology and business practices. While determining the physical location of an individual for the purposes of in personam jurisdiction was easy enough, applying the same principle to non-physical entities became difficult. Courts were presented with the question of where a company was present and amenable to service for the purpose of in personam jurisdiction over the company. Extension of quasi in rem jurisdiction led to extreme results that threatened the justification for the jurisdiction. Bear in mind that territorial jurisdiction arose in a pre-industrial society where transportation across the country was difficult, long, and potentially treacherous, and consider the hypothetical wherein Alice owes Bob money, and Bob owes Carmel, a resident of New York, money. Carmel seeks to recover on Bob's debt to her, but cannot do so because Bob avoids Carmel by travelling to California. Alice, however, happens to travel through New York. Carmel serves notice upon Alice, and attaches Alice's debt to Bob (considered to be property within the state) to the proceeding. Alice can no more reliably provide notice to Bob in California than Carmel could, and the transient and involuntary exposure of Bob to being hauled into court in New York by this attachment seems to erode the original rationale of quasi in rem jurisdiction. 
The US Supreme Court largely abolished the exercise of jurisdiction on the basis of quasi in rem in Shaffer v. Heitner, except in exceptional circumstances, which sometimes would arise while dealing with real property such as land, and when the owner of the land cannot be found. Modern Constitutional doctrine: International Shoe doctrine In the modern era, the reach of personal jurisdiction has been expanded by judicial re-interpretation and legislative enactments. Under the new and current doctrine, a state court may only exert personal jurisdiction over an individual or entity with sufficient "minimum contacts" with the forum state such that the particular suit "does not offend 'traditional notions of fair play and substantial justice.'" The "minimum contacts" must be purposefully directed towards the state by the defendant. This jurisdiction was initially limited to the particulars of the International Shoe Co. v. Washington holding, that is, to jurisdictional inquiries regarding companies, but was soon extended to apply to all questions of personal jurisdiction. When an individual or entity has no "minimum contacts" with a forum State, the Due Process Clause of the Fourteenth Amendment prohibits that State from acting against that individual or entity. The lack of "minimum contacts" with the owner of property also constitutionally prohibits action against that property (in rem jurisdiction) even when the property is located within the forum state. What constitutes sufficient "minimum contacts" has been delineated in numerous cases which followed the International Shoe decision. For example, in Hanson v. Denckla, the Court proclaimed that the "unilateral activity of those who claim some relationship with a nonresident cannot satisfy the requirement of contact with the forum State. 
The application of that rule will vary with the nature and quality of the defendant's activity, but it is essential in each case that there be some act by which the defendant purposefully avails itself of the privilege of conducting activities within the forum State, thus invoking the benefits and protection of its laws." The additional requirement of "'purposeful availment' ensures that a defendant will not be hauled into a jurisdiction solely as a result of 'random,' 'fortuitous,' or 'attenuated' contacts, or of the unilateral activity of another party or a third person". Jurisdiction may, however, be exercised, under some circumstances, even though the defendant never physically entered the forum state. In addition, the claim must arise from the contacts that the defendant had with the forum state. In addition to the minimum contacts test asserted in International Shoe, the assertion of specific personal jurisdiction must be reasonable. The Court in World-Wide Volkswagen Corp. v. Woodson asserted a five-part test for determining whether the assertion of personal jurisdiction in a forum state was reasonable. This test considers: the burden on the defendant of litigating in the forum state; the interest of the forum state in having the case adjudicated there; the interest of the plaintiff in adjudicating in the forum state; the interest of the inter-state judiciary—that is, that a court's assertion of personal jurisdiction over an out-of-state defendant not overreach and preempt the interests and judicial sovereignty of another state; and the interest in preserving the judicial integrity of the several states—that is, ensuring that one court's assertion of personal jurisdiction over an out-of-state defendant does not violate the Due Process Clause of the Fourteenth Amendment. In the more recent case of Goodyear Dunlop Tires Operations, S. A. v.
Brown, Justice Ginsburg held that for the exercise of general jurisdiction in personam, the defendant must be "essentially at home." This applies when the defendant has contacts with the forum state, but the claim that arises is not related to those contacts. For example, if Harrods (a British store) sets up an office in California to export and sell goods there, and because of that someone gets injured, it would be amenable to suit in California for that injury. On the other hand, if someone is injured in Harrods in London and for some reason finds that California law is more favorable and decides to sue in California, the suit would not be maintainable under general jurisdiction, since the contacts that Harrods has are not continuous and systematic and it is not "essentially at home" in California. In the first example, however, there would be specific personal jurisdiction: by selling goods in California, Harrods purposefully availed itself of the benefits of California law, and the lawsuit arose out of that contact. This holding was reaffirmed in 2014 by the Supreme Court in Daimler AG v. Bauman. Statutory authorization While the Pennoyer and later Shoe doctrines limit the maximum power of a sovereign state, courts must also have authorization to exercise the state's power; an individual state may choose not to grant its courts the full power that the state is Constitutionally permitted to exercise. Similarly, the jurisdiction of Federal courts (other than the Supreme Court) is statutorily defined. Thus, a particular exercise of personal jurisdiction must not only be permitted by Constitutional doctrine, but be statutorily authorized as well.
Under Pennoyer, personal jurisdiction was authorized by statutes authorizing service of process, but these methods of service often proved inadequate because they required such service to be effected by officers of the state, such as sheriffs – an untenable method for defendants located outside of the state but still subject to jurisdiction due to their contacts with the state. Subsequent to the development of the Shoe doctrine, states have enacted so-called long-arm statutes, by which courts in a state can serve process and thus exercise jurisdiction over a party located outside the state. The doctrine of International Shoe applies only in cases where there is no presence in the forum state. For example, suppose A commits a tort in State X. He is sued by B, and B serves him with process just before he leaves State X, before his flight takes off; the service would be valid and State X would have jurisdiction over A. If A did not comply with the final judgment passed by the courts of State X, B could enforce that judgment in the state where A resides under the full faith and credit clause of the US Constitution. There was one case where a defendant was served while the airplane was in the air over the forum State, and the federal district court held that this was valid service, since at law the territory of a state includes the airspace above the State. Relationship to venue Venue and personal jurisdiction are closely related for practical purposes. A lawyer should usually perform a joint analysis of personal jurisdiction and venue issues. Personal jurisdiction is largely a constitutional requirement, though also shaped by state long-arm statutes and Rule 4 of the Federal Rules of Civil Procedure, while venue is purely statutory. It is possible for either venue or personal jurisdiction to preclude a court from hearing a case. Consider these examples: Personal jurisdiction is the limiting factor. In World-Wide Volkswagen Corp. v.
Woodson, the plaintiffs sued, in an Oklahoma state court, an automobile dealership based in New York for damages from an explosion that occurred on June 11, 1977, as the plaintiffs drove the car through Oklahoma. Had the plaintiffs sued in U.S. federal court sited in Oklahoma, personal jurisdiction against the dealership would have been unavailable, as the dealership did not have minimum contacts with the forum state. Venue, however, would have been proper under the general federal venue statute, because Oklahoma was a state in which a substantial part of the events or omissions giving rise to the claim occurred. However, the United States Supreme Court found that the defendants (World-Wide Volkswagen Corp.) did not have the minimum contacts with Oklahoma necessary to create personal jurisdiction there. [World-Wide Volkswagen was one of the "defendants"; the case cited is WWV Corp (original defendant) v. Woodson (the Oklahoma state judge).] Venue is the limiting factor. Suppose Dale resides in California. Peter from Nevada wants to sue Dale for battery which Dale committed against Peter in California. Peter knows Dale is going to a week-long conference in South Carolina. Peter realizes that Dale would settle a suit that would take place in South | rule of law – for example an absolute monarchy with no independent judiciary – may arbitrarily choose to assert jurisdiction over a case without citing any particular justification. Such assertion can cause problems, such as encouraging other countries to take arbitrary actions over foreign citizens and property, or even provoking skirmishes or armed conflict. In practice, many countries operate by one principle or another, either in written law or in practice, which communicates when the country will and will not assert jurisdiction: treaty jurisdiction — An international treaty explicitly decides the issue.
territorial principle — A country asserts jurisdiction over people, property, and events taking place on its own territory. nationality principle — A country asserts jurisdiction over the conduct of its citizens, anywhere in the world. passive personality principle — A country asserts jurisdiction over acts committed against its citizens, anywhere in the world. protective principle — A country asserts jurisdiction over issues that affect its interests, such as conspiracies to overthrow its government, or resources critical to its economy (such as access to an international waterway). universal jurisdiction — A country asserts jurisdiction over certain acts committed by anyone, anywhere in the world. Usually reserved for exceptionally serious crimes, such as war crimes and crimes against humanity. Different principles are applied by different countries, and different principles may be applied by the same country in different circumstances. Determination of whether or not a court has jurisdiction to hear a case is the first stage of a conflict of laws proceeding, potentially followed by choice of law to determine which jurisdiction's laws apply. Executive prosecutorial authority and foreign policy also play a role in the scope and practical impact of jurisdiction choices. Any assertion of jurisdiction based on anything other than the territorial principle is known as extraterritorial jurisdiction. Prosecution of a case against an out-of-territory defendant is known as assertion of long-arm jurisdiction. When a person commits a crime in a foreign country against the laws of that country, usually the host country is responsible for prosecution. The Vienna Convention on Consular Relations requires that the host country notify the foreign embassy, potentially allowing the foreign country to assist in legal defense and monitor conditions of detention. (Most countries protect their citizens against foreign powers in general.)
Foreign diplomats enjoy diplomatic immunity in many countries based on the Vienna Convention on Diplomatic Relations or bilateral agreement, and foreign military personnel may be subject to the jurisdiction of their home country based on a status of forces agreement or Visiting Forces Agreement. If a person is not physically present in the country which wishes to prosecute a case, that country may either wait until the person enters the national territory, or pursue extradition by legal or extralegal means, and with or without a general extradition treaty. Some countries (like China) prefer to prosecute their own citizens for crimes committed abroad rather than extradite them. Other countries defer to the host country. When a crime is committed outside the territory of any country, such as in Antarctica, on watercraft in international waters, on aircraft in international airspace, and on spacecraft, jurisdiction is usually determined by the nationality of defendants or victims, or by the flag state of the vessel. This is determined by the admiralty law of the countries involved and by international agreements. History in English and U.S. law The concept of personal jurisdiction in English law has its origin in the idea that a monarch could not exercise power over persons or property located outside of his or her kingdom. To some degree, this was a de facto rule; the monarch's men could not arrest people or seize property outside the kingdom without risking physical conflict with the soldiers and police of other kingdoms. Slowly this principle was incorporated into written law, but problems arose in cases where property owners could not be sued because they had left the kingdom or had died and therefore were not present within the kingdom at the time they were being sued.
To solve this problem, the courts created another type of jurisdiction, called quasi in rem, that is, jurisdiction over the land itself, even if the person who owned the land was not in the country. However, this jurisdiction was limited to the settlement of debts owed by the owner of the land. In the United States, the exercise of personal jurisdiction by a court must both comply with Constitutional limitations and be authorized by a statute. In the United Kingdom, the exercise of personal jurisdiction does not need a statutory basis, since the United Kingdom does not have a written constitution. United States The intersection of American federalism and the rules and theories of jurisdiction inherited from the common law of England has resulted in a highly complex body of law respecting personal jurisdiction in the United States. These rules limit both state and federal courts in their ability to hear cases. Principles of personal jurisdiction Three fundamentals of personal jurisdiction constrain the ability of courts in the United States to bind individuals or property to their decisions: consent, power, and notice. Consent The United States legal system is an adversarial system. Civil suits cannot be initiated by third parties, but must be filed by the aggrieved party who seeks redress. Generally, the action is initiated in the jurisdiction where the event occurred, where the defendant can be served, or where the parties have agreed to have the case located. The filing of a complaint or prayer for relief is a voluntary action by the person aggrieved, and as a necessity of this request, the person seeking relief consents to be bound by the judgment of the court. The doctrine of consent is also extended to defendants who attend and litigate actions without challenging the court's personal jurisdiction.
Consent may also derive from a pre-litigation agreement by the parties, such as a forum selection clause in a contract (not to be confused with a choice of law clause). Doctrines such as claim preclusion prevent re-litigation of failed complaints in alternative forums. Claim preclusion does not, however, prevent the refiling of a claim that was filed in a court that did not have personal jurisdiction over the defendant. Power In cases where a defendant challenges personal jurisdiction, a court may still exercise personal jurisdiction if it has independent power to do so. This power is founded in the inherent nature of the State: sovereignty over secular affairs within its territory. Notice The Fifth and Fourteenth Amendments to the United States Constitution preserve the right of the individual to due process. Due process requires that notice be given in a manner "reasonably calculated" to inform a party of the action affecting him. Originally, "Notice" (and the power of the State) was often exercised more forcefully, the defendant in a civil case sometimes being seized and brought before the court under a writ of capias ad respondendum. Notice in such a case is inferred from the consent of the defendant to go with the officer. Nowadays, when exercising power over an individual without consent, notice is usually given by formal delivery of suitable papers to the defendant (service of process). Historical background: territorial jurisdiction Originally, jurisdiction over parties in the United States was determined by strict interpretation of the geographic boundaries of each state's sovereign power. In Pennoyer v. Neff, the Supreme Court explained that though each state ceded certain powers (e.g. foreign relations) to the Federal Government or to no entity at all (e.g.
the powers that are eliminated by the protections of the bill of rights), the states retained all the other powers of sovereignty, including the exclusive power to regulate the affairs of individuals and property within their territories. Necessarily following from this, one state's exercise of power could not infringe upon the sovereignty of another state. Thus, Constitutional limitations applied to the validity of state court judgments. Three types of jurisdiction developed, collectively termed territorial jurisdiction because of their reliance upon territorial control: in personam jurisdiction, in rem jurisdiction, and quasi in rem jurisdiction. Some sources refer to all three types of territorial jurisdiction as personal jurisdiction, since most actions against property (in rem jurisdiction) bear, in the end, upon the rights and obligations of persons. Others continue to recognize the traditional distinction between personal jurisdiction and jurisdiction over property, even after Shaffer v. Heitner (discussed below). In personam jurisdiction referred to jurisdiction over a particular person (or entity, such as a company). In personam jurisdiction, if held by a state court, permitted that court to rule upon any case over which it otherwise held jurisdiction. Under territorial jurisdiction, pure in personam jurisdiction could only be established by serving notice upon the individual while that individual was within the territory of the state. In rem jurisdiction referred to jurisdiction over a particular piece of property, most commonly real estate or land. Certain cases, notably government suits for unpaid property taxes, proceed not against an individual but against their property directly. Under territorial jurisdiction, in rem jurisdiction could be exercised by the courts of a state by seizing the property in question.
Since an actual tract of land could not literally be brought into a courtroom as a person could, this was effected by giving notice upon the real property itself. In rem jurisdiction was thus supported by the assumption that the owner of that property, having a concrete economic interest in the property, had a duty to look after the affairs of their property, and would be notified of the pending case by such seizure. In rem jurisdiction was limited to deciding issues regarding the specific property in question. Quasi in rem jurisdiction involved the seizure of property held by the individual against whom the suit was brought, and attachment of that property to the case in question. This form of territorial jurisdiction developed from the rationale of in rem jurisdiction, namely that seizure of the property was reasonably calculated to inform an individual of the proceedings against them. Once a valid judgment was obtained against an individual, however, the plaintiff could pursue recovery against the assets of the defendant regardless of their location, as other states were obligated by the Full Faith and Credit Clause of the Constitution to recognize such a judgment (i.e. had ceded their power to refuse comity to fellow states of the Union).
the cases N = 151 or 313. Both Wallis and William Brouncker gave solutions to these problems, though Wallis suggests in a letter that the solution was due to Brouncker. John Pell's connection with the equation is that he revised Thomas Branker's translation of Johann Rahn's 1659 book Teutsche Algebra into English, with a discussion of Brouncker's solution of the equation. Leonhard Euler mistakenly thought that this solution was due to Pell, as a result of which he named the equation after Pell. The general theory of Pell's equation, based on continued fractions and algebraic manipulations with numbers of the form x + y√n, was developed by Lagrange in 1766–1769. Solutions Fundamental solution via continued fractions Let hi/ki denote the sequence of convergents to the regular continued fraction for √n. This sequence is unique. Then the pair (x1, y1) solving Pell's equation and minimizing x satisfies x1 = hi and y1 = ki for some i. This pair is called the fundamental solution. Thus, the fundamental solution may be found by performing the continued fraction expansion and testing each successive convergent until a solution to Pell's equation is found. The time for finding the fundamental solution using the continued fraction method, with the aid of the Schönhage–Strassen algorithm for fast integer multiplication, is within a logarithmic factor of the solution size, the number of digits in the pair (x1, y1). However, this is not a polynomial-time algorithm because the number of digits in the solution may be as large as √n, far larger than a polynomial in the number of digits in the input value n. Additional solutions from the fundamental solution Once the fundamental solution is found, all remaining solutions may be calculated algebraically from xk + yk√n = (x1 + y1√n)^k, expanding the right side, equating coefficients of √n on both sides, and equating the other terms on both sides.
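The convergent-testing procedure just described can be sketched in Python (an illustrative sketch, not from the source; the function name and structure are mine). It uses the standard recurrence for the partial quotients of √n together with the usual convergent recurrence hi = ai·hi−1 + hi−2, ki = ai·ki−1 + ki−2:

```python
import math

def pell_fundamental(n):
    """Fundamental solution of x^2 - n*y^2 = 1, found by testing successive
    convergents h_i/k_i of the continued fraction for sqrt(n)."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        raise ValueError("n must not be a perfect square")
    m, d, a = 0, 1, a0                    # state for the sqrt continued fraction
    h_prev, h = 1, a0                     # convergent numerators h_{-1}, h_0
    k_prev, k = 0, 1                      # convergent denominators k_{-1}, k_0
    while h * h - n * k * k != 1:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
    return h, k
```

For n = 7 this stops at the convergent 8/3, matching the worked example in the text; for n = 61 it produces the very large solution mentioned in the history discussion.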
This yields the recurrence relations xk+1 = x1xk + ny1yk, yk+1 = x1yk + y1xk. Concise representation and faster algorithms Although writing out the fundamental solution (x1, y1) as a pair of binary numbers may require a large number of bits, it may in many cases be represented more compactly in the form x1 + y1√n = (a1 + b1√n)^c1 (a2 + b2√n)^c2 ⋯ using much smaller integers ai, bi, and ci. For instance, Archimedes' cattle problem is equivalent to a Pell equation, the fundamental solution of which has 206,545 digits if written out explicitly. However, the solution can also be written in this compact form, as a power of a number a + b√n in which a and b have only 45 and 41 decimal digits respectively. Methods related to the quadratic sieve approach for integer factorization may be used to collect relations between prime numbers in the number field generated by √n and to combine these relations to find a product representation of this type. The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp(O(√(N log N))), where N = log n is the input size, similarly to the quadratic sieve. Quantum algorithms Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time. Hallgren's algorithm, which can be interpreted as an algorithm for finding the group of units of a real quadratic number field, was extended to more general fields by Schmidt and Völlmer. Example As an example, consider the instance of Pell's equation for n = 7; that is, x² − 7y² = 1. The sequence of convergents for the square root of seven is:

h/k (convergent)    h² − 7k² (Pell-type approximation)
2/1    −3
3/1    +2
5/2    −3
8/3    +1

Therefore, the fundamental solution is formed by the pair (8, 3).
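A hypothetical helper (mine, not from the source) shows how the recurrence xk+1 = x1xk + ny1yk, yk+1 = x1yk + y1xk generates further solutions from the fundamental pair:

```python
def pell_solutions(n, x1, y1, count):
    """First `count` positive solutions of x^2 - n*y^2 = 1, generated from the
    fundamental solution (x1, y1) by the recurrence
    x_{k+1} = x1*x_k + n*y1*y_k,  y_{k+1} = x1*y_k + y1*x_k."""
    sols, x, y = [], x1, y1
    for _ in range(count):
        sols.append((x, y))
        x, y = x1 * x + n * y1 * y, x1 * y + y1 * x
    return sols
```

Starting from (8, 3) with n = 7 this reproduces the sequence (8, 3), (127, 48), (2024, 765), (32257, 12192), ... listed below.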
Applying the recurrence formula to this solution generates the infinite sequence of solutions (1, 0); (8, 3); (127, 48); (2024, 765); (32257, 12192); (514088, 194307); (8193151, 3096720); (130576328, 49353213); ... (sequence (x) and (y) in OEIS) The smallest solution can be very large. For example, the smallest solution to is (, ), and this is the equation which Frenicle challenged Wallis to solve. Values of n such that the smallest solution of is greater than the smallest solution for any smaller value of n are 1, 2, 5, 10, 13, 29, 46, 53, 61, 109, 181, 277, 397, 409, 421, 541, 661, 1021, 1069, 1381, 1549, 1621, 2389, 3061, 3469, 4621, 4789, 4909, 5581, 6301, 6829, 8269, 8941, 9949, ... . (For these records, see for x and for y.) List of fundamental solutions of Pell's equations The following is a list of the fundamental solution to with n ≤ 128. For square n, there is no solution except (1, 0). The values of x are sequence and those of y are sequence in OEIS. Connections Pell's equation has connections to several other important subjects in mathematics. Algebraic number theory Pell's equation is closely related to the theory of algebraic numbers, as the formula is the norm for the ring and for the closely related quadratic field . Thus, a pair of integers solves Pell's equation if and only if is a unit with norm 1 in . Dirichlet's unit theorem, that all units of can be expressed as powers of a single fundamental unit (and multiplication by a sign), is an algebraic restatement of the fact that all solutions to the Pell's equation can be generated from the fundamental solution. The fundamental unit can in general be found by solving a Pell-like equation but it does not always correspond directly to the fundamental solution of Pell's equation itself, because the fundamental unit may have norm −1 rather than 1 and its coefficients may be half integers rather than integers. 
Chebyshev polynomials Demeyer mentions a connection between Pell's equation and the Chebyshev polynomials: If Ti(x) and Ui(x) are the Chebyshev polynomials of the first and second kind respectively, then these polynomials satisfy a form of Pell's equation in any polynomial ring R[x], with n = x² − 1: Ti(x)² − (x² − 1)Ui−1(x)² = 1. Thus, these polynomials can be generated by the standard technique for Pell's equations of taking powers of a fundamental solution: Ti(x) + Ui−1(x)√(x² − 1) = (x + √(x² − 1))^i. It may further be observed that if (xi, yi) are the solutions to any integer Pell's equation, then xi = Ti(x1) and yi = y1Ui−1(x1). Continued fractions A general development of solutions of Pell's equation in terms of continued fractions of √n can be presented, as the solutions x and y are approximations to the square root of n and thus are a special case of continued fraction approximations for quadratic irrationals. The relationship to the continued fractions implies that the solutions to Pell's equation form a semigroup subset of the modular group. Thus, for example, if p and q satisfy Pell's equation, then the 2 × 2 matrix with rows (p, q) and (nq, p) has unit determinant. Products of such matrices take exactly the same form, and thus all such products yield solutions to Pell's equation. This can be understood in part to arise from the fact that successive convergents of a continued fraction share the same property: If pk−1/qk−1 and pk/qk are two successive convergents of a continued fraction, then the matrix with rows (pk−1, pk) and (qk−1, qk) has determinant (−1)^k. Smooth numbers Størmer's theorem applies Pell equations to find pairs of consecutive smooth numbers, positive integers whose prime factors are all smaller than a given value. As part of this theory, Størmer also investigated divisibility relations among solutions to Pell's equation; in particular, he showed that each solution other than the fundamental solution has a prime factor that does not divide n. The negative Pell's equation The negative Pell's equation is given by x² − ny² = −1 and has also been extensively studied.
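The matrix point can be checked numerically. In the sketch below (mine, not from the source) a solution (p, q) is packed into a 2 × 2 matrix assumed to have rows (p, q) and (nq, p), so its determinant is p² − nq², and multiplying two such matrices composes solutions:

```python
def pell_matrix(p, q, n):
    # assumed row convention (p, q) / (n*q, p); determinant = p*p - n*q*q
    return [[p, q], [n * q, p]]

def mat_mul(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# squaring the n = 7 fundamental matrix composes the solution (8, 3) with itself
M = pell_matrix(8, 3, 7)
M2 = mat_mul(M, M)        # top row is the next solution, and det stays 1
```

The product has the same (p, q)/(nq, p) shape, which is exactly the closure property described above.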
It can be solved by the same method of continued fractions and has solutions if and only if the period of the continued fraction has odd length. However, it | 1 and y = 0. Joseph Louis Lagrange proved that, as long as n is not a perfect square, Pell's equation has infinitely many distinct integer solutions. These solutions may be used to accurately approximate the square root of n by rational numbers of the form x/y. This equation was first studied extensively in India starting with Brahmagupta, who found an integer solution to 92x² + 1 = y² in his Brāhmasphuṭasiddhānta circa 628. Bhaskara II in the 12th century and Narayana Pandit in the 14th century both found general solutions to Pell's equation and other quadratic indeterminate equations. Bhaskara II is generally credited with developing the chakravala method, building on the work of Jayadeva and Brahmagupta. Solutions to specific examples of Pell's equation, such as the Pell numbers arising from the equation with n = 2, had been known for much longer, since the time of Pythagoras in Greece and a similar date in India. William Brouncker was the first European to solve Pell's equation. The name of Pell's equation arose from Leonhard Euler mistakenly attributing Brouncker's solution of the equation to John Pell. History As early as 400 BC in India and Greece, mathematicians studied the numbers arising from the n = 2 case of Pell's equation, and from the closely related equation x² − 2y² = −1, because of the connection of these equations to the square root of 2. Indeed, if x and y are positive integers satisfying this equation, then x/y is an approximation of √2. The numbers x and y appearing in these approximations, called side and diameter numbers, were known to the Pythagoreans, and Proclus observed that in the opposite direction these numbers obeyed one of these two equations.
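The odd-period criterion for the negative equation mentioned above can be tested with a short sketch (illustrative, not from the source). It relies on the standard fact that the periodic part of the continued fraction of √n ends at the first partial quotient equal to 2⌊√n⌋:

```python
import math

def cf_period(n):
    """Period length of the continued fraction of sqrt(n) (0 for squares).
    x^2 - n*y^2 = -1 is solvable exactly when this length is odd."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return 0
    m, d, a, length = 0, 1, a0, 0
    while a != 2 * a0:        # the period ends at the partial quotient 2*a0
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        length += 1
    return length
```

For example n = 2, 5, 13 have odd periods (1² − 2·1² = −1, 2² − 5·1² = −1, 18² − 13·5² = −1), while n = 3 and n = 7 have even periods and no negative-Pell solution.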
Similarly, Baudhayana discovered that x = 17, y = 12 and x = 577, y = 408 are two solutions to the Pell equation x² − 2y² = 1, and that 17/12 and 577/408 are very close approximations to the square root of 2. Later, Archimedes approximated the square root of 3 by the rational number 1351/780. Although he did not explain his methods, this approximation may be obtained in the same way, as a solution to Pell's equation. Likewise, Archimedes's cattle problem — an ancient word problem about finding the number of cattle belonging to the sun god Helios — can be solved by reformulating it as a Pell's equation. The manuscript containing the problem states that it was devised by Archimedes and recorded in a letter to Eratosthenes, and the attribution to Archimedes is generally accepted today. Around AD 250, Diophantus considered the equation a²x² + c = y², where a and c are fixed numbers, and x and y are the variables to be solved for. This equation is different in form from Pell's equation but equivalent to it. Diophantus solved the equation for (a, c) equal to (1, 1), (1, −1), (1, 12), and (3, 9). Al-Karaji, a 10th-century Persian mathematician, worked on similar problems to Diophantus. In Indian mathematics, Brahmagupta discovered what is now known as Brahmagupta's identity. Using this, he was able to "compose" triples (x1, y1, k1) and (x2, y2, k2) that were solutions of x² − Ny² = k, to generate the new triples (x1x2 + Ny1y2, x1y2 + x2y1, k1k2) and (x1x2 − Ny1y2, x1y2 − x2y1, k1k2). Not only did this give a way to generate infinitely many solutions to x² − Ny² = 1 starting with one solution, but also, by dividing such a composition by k1k2, integer or "nearly integer" solutions could often be obtained. For instance, for N = 92, Brahmagupta composed the triple (10, 1, 8) (since 10² − 92·1² = 8) with itself to get the new triple (192, 20, 64). Dividing throughout by 64 ("8" for x and y) gave the triple (24, 5/2, 1), which when composed with itself gave the desired integer solution (1151, 120, 1).
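Brahmagupta's composition and his N = 92 worked example can be replayed directly (an illustrative sketch; the function name is mine, and exact fractions stand in for the "nearly integer" triples):

```python
from fractions import Fraction

def compose(t1, t2, N):
    """Brahmagupta's identity: combine triples (x, y, k) with x^2 - N*y^2 = k;
    the result satisfies the same relation with k = k1*k2."""
    x1, y1, k1 = t1
    x2, y2, k2 = t2
    return (x1 * x2 + N * y1 * y2, x1 * y2 + y1 * x2, k1 * k2)

# Brahmagupta's N = 92 example: compose (10, 1, 8) with itself ...
t2 = compose((10, 1, 8), (10, 1, 8), 92)
# ... then divide x and y by 8 (and k by 64) to get the triple (24, 5/2, 1)
scaled = (Fraction(t2[0], 8), Fraction(t2[1], 8), Fraction(t2[2], 64))
# composing the scaled triple with itself yields the integer solution
final = compose(scaled, scaled, 92)
```

Running the example reproduces the chain (192, 20, 64) → (24, 5/2, 1) → (1151, 120, 1) described in the text.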
Brahmagupta solved many Pell's equations with this method, proving that it gives solutions starting from an integer solution of x² − Ny² = k for k = ±1, ±2, or ±4. The first general method for solving the Pell's equation (for all N) was given by Bhāskara II in 1150, extending the methods of Brahmagupta. Called the chakravala (cyclic) method, it starts by choosing two relatively prime integers a and b, then composing the triple (a, b, k) (that is, one which satisfies a² − Nb² = k) with the trivial triple (m, 1, m² − N) to get the triple (am + Nb, a + bm, k(m² − N)), which can be scaled down to ((am + Nb)/k, (a + bm)/k, (m² − N)/k). When m is chosen so that (a + bm)/k is an integer, so are the other two numbers in the triple. Among such m, the method chooses one that minimizes |m² − N| and repeats the process. This method always terminates with a solution (proved by Joseph-Louis Lagrange in 1768). Bhaskara used it to give the solution x = 1766319049, y = 226153980 to the N = 61 case. Several European mathematicians rediscovered how to solve Pell's equation in the 17th century, apparently unaware that it had been solved almost five hundred years earlier in India. Pierre de Fermat found how to solve the equation and in a 1657 letter issued it as a challenge to English mathematicians. In a letter to Kenelm Digby, Bernard Frénicle de Bessy said that Fermat found the smallest solution for N up to 150 and challenged John Wallis to solve the cases N = 151 or 313.
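The chakravala steps described above can be sketched as follows (a simplified, assumption-laden implementation, not Bhāskara's own presentation: the congruence on m is solved with a modular inverse, and a run that reaches k = −1 is finished with Brahmagupta's composition shortcut):

```python
import math

def chakravala(N):
    """Solve x^2 - N*y^2 = 1 for nonsquare N by the chakravala method."""
    r = math.isqrt(N)
    a = r if abs(r * r - N) <= abs((r + 1) ** 2 - N) else r + 1
    b, k = 1, a * a - N               # start with the triple (a, 1, a^2 - N)
    while k != 1:
        if k == -1:                   # compose (a, b, -1) with itself
            a, b, k = a * a + N * b * b, 2 * a * b, 1
            continue
        K = abs(k)
        base = (-a * pow(b, -1, K)) % K   # need (a + b*m) ≡ 0 (mod |k|)
        m1 = r - ((r - base) % K)         # admissible m just below sqrt(N)
        m2 = m1 + K                       # admissible m just above
        m = m2 if m1 <= 0 or abs(m2 * m2 - N) < abs(m1 * m1 - N) else m1
        a, b, k = (a * m + N * b) // K, (a + b * m) // K, (m * m - N) // k
    return a, b
```

For N = 61 this reproduces Bhaskara's celebrated solution, and for N = 92 it recovers Brahmagupta's (1151, 120).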
The resulting algorithm for solving Pell's equation is more efficient than the continued fraction method, though it still takes more than polynomial time. Under the assumption of the generalized Riemann hypothesis, it can be shown to take time exp O(√(log N log log N)), where N = log n is the input size, similarly to the quadratic sieve. Quantum algorithms Hallgren showed that a quantum computer can find a product representation, as described above, for the solution to Pell's equation in polynomial time.
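The continued fraction search for the fundamental solution, and the recurrences that generate the remaining solutions from it, can be sketched as follows. This is a minimal illustration under the description above; the function names are this sketch's own.

```python
from math import isqrt

def pell_fundamental(n):
    """Fundamental solution of x^2 - n*y^2 = 1: expand the continued
    fraction of sqrt(n) and test each convergent hi/ki in turn."""
    a0 = isqrt(n)
    if a0 * a0 == n:
        raise ValueError("n must not be a perfect square")
    m, d, a = 0, 1, a0           # state of the sqrt(n) expansion
    h, h_prev = a0, 1            # convergent numerators hi
    k, k_prev = 1, 0             # convergent denominators ki
    while h * h - n * k * k != 1:
        # next partial quotient of the continued fraction of sqrt(n)
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

def pell_solutions(n, count):
    """First `count` solutions, generated from the fundamental one by
    the recurrences x' = x1*x + n*y1*y and y' = x1*y + y1*x."""
    x1, y1 = pell_fundamental(n)
    x, y, out = x1, y1, []
    for _ in range(count):
        out.append((x, y))
        x, y = x1 * x + n * y1 * y, x1 * y + y1 * x
    return out
```

For n = 2 this yields (3, 2), (17, 12), (99, 70), and so on.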
payphones, stored-value systems avoid the time lag and expense of communication with a central database, which would have been technically complex before the 1990s. There are several ways in which the value can be encoded on the card: The earliest system used a magnetic stripe as information carrier, similar to the technology of ATMs and key cards. The first magnetic stripe phonecard, manufactured by SIDA, was issued in 1976 in Italy. The next technology used optical storage. Optical phonecards get their name from the optical structure embossed inside the cards. This optical structure is heated and destroyed after use of the units. Visible marks are left on the top of the cards, so that the user can see the balance of remaining units. Optical cards were produced by Landis+Gyr and Sodeco of Switzerland and were popular early phonecards in many countries; the first optical phonecards were successfully introduced in 1977 in Belgium. The technology was very secure and not easily hacked, but chip cards phased out optical phonecards around the world, and the last Landis+Gyr factory closed in May 2006, when optical phonecards were still in use in a few countries such as Austria, Israel and Egypt. The third system of stored-value phone cards uses smart cards with an embedded microchip. These were first launched on a large scale in 1986 in Germany by Deutsche Bundespost after three years of testing, and in France by France Télécom. Many other countries followed suit, including Ireland in 1990 and the UK circa 1994–1995, which phased out the old green Landis+Gyr cards in favor of the chip (smart) cards. The initial microchips were easy to hack, typically by scratching off the programming-voltage contact on the card, which rendered the phone unable to reduce the card's value after a call. But by the mid-to-late 1990s, highly secure technology aided the spread of chip phonecards worldwide. 
Remote memory systems Making a remote memory prepaid or calling card call requires the user to dial two numbers. It is necessary to dial an access telephone number to connect to the calling card system. There are several methods. One is via a toll-free number, with larger companies offering this internationally. Access through a local number has become increasingly popular in recent years. Toll-free calls are paid for by the recipient (the calling card company), which passes on the cost through higher call charges; the total cost of a call to the user is often lower using a local number. When travelling through several local areas, a toll-free service may be preferable. Once connected to the access number, the account is identified by keying in a PIN (the most popular method) or by swiping a card with an embedded chip or magnetic stripe. After validation the balance remaining on the card may be announced, and the desired number may be keyed in. The available minutes may be announced, and the call is connected. Many cards make a verbal announcement if credit is running out. Prepaid or calling cards are usually much cheaper than other telephone services, particularly for travelers who do not have easy access to other services. Hotel telephones can be very expensive, particularly for long-distance calls. Cellular services are flexible, but may attract high roaming charges away from the home area. Telephone accounts symbolized by a card The second main technology of phonecards is remote memory, which uses a toll or toll-free access number to reach the database and check the balance on the product. The first public prepaid remote memory phonecard was issued in the United States in December 1980 by Phone Line. As telecom industries around the world became deregulated, remote memory cards were issued in various countries. Remote memory phonecards can be used from any tone-mode phone and do not require special card readers. Since remote memory cards are more accessible and have lower costs, remote memory phone cards have proliferated. However, the utility of these cards is reduced by the large number of digits that need to be entered during usage. To call a long-distance number, the user first dials the local access number, then keys in the secret code, followed by the actual long-distance number. Based on the long-distance number entered, the time remaining on the card is announced, and the call is finally processed through. Remote memory phonecards are in essence text, requiring only an access number, a unique PIN and instructions. Therefore, the instructions can be printed on virtually anything, or can be delivered via e-mail or the Internet. Currently many websites deliver phone card details through e-mail. Phone cards are available in most countries in retail stores, retail chains and commonly post offices or corner stores. In general, remote memory phonecards can be issued by any company and come in countless varieties. They can focus on calling to certain countries or regions and have specific features such as rechargeability, pinless dial, speed dial and more. Phone cards may have connection fees, taxes and maintenance fees, all influencing the rates. 
Accounts without a card (Virtual phonecards) Since the early 2000s calling card service providers have introduced calling accounts not associated with a physical card. Calling accounts can be purchased over the Internet using credit cards and are instantly delivered to the customer via e-mail. This e-mail contains the PIN and instructions for using the service. The service may be prepaid, or may take payment from a credit card or by direct debit. Some prepaid card companies allow accounts to be recharged online manually or automatically via a method called auto-top-up. Some virtual cards offer PINless Dialing, either by dialling a number unique to the customer, or by recognising the telephone number which originated the call by Caller ID and relating it to the appropriate account.
Units in this line generally include support for floppy disk drives, keyboards and other computer peripherals. Some models can also be connected to an emulator and have software testing and debugging features. The CD-i player 700 series, which consists of the 740 model, the most advanced player, featuring an RS-232 port. It was only released in limited quantities. There also exist a number of hard-to-categorize models, such as the FW380i, an integrated mini-stereo and CD-i player; the 21TCDi30, a television with a built-in CD-i device; and the CD-i/PC 2.0, a CD-i module with an ISA interface for IBM-compatible 486 PCs. Gallery Other manufacturers In addition to Philips, several manufacturers produced CD-i players, some of which were still on sale years after Philips itself abandoned the format. Manufacturers included: Magnavox (a Philips subsidiary) made rebranded players for the American market. GoldStar / LG Electronics, whose LG GDI-700 (c. 1997) was a professional player with a Motorola 68341 processor, faster than the Philips model. GoldStar also made portable players, including a small one without an LCD screen. Digital Video Systems Memorex Grundig Kyocera made the portable Pro 1000S model Maspro Denkoh released a GPS car navigation system with a built-in CD-i player, released in Japan in 1992. Saab Electric Sony produced two models branded Intelligent Discman, hybrid home/portable CD-i players released in 1990-1991 for professional use only. NBS International Interactive Media (I2m) released in 1995 a CD-i PCI expansion card for 486 PCs, Pentium PCs, 68k-based Macintosh and PowerPC-based Macintosh computers Vobis Highscreen Manna Space branded CD-i models (based on Magnavox's or GoldStar's version of the Philips CDI 450) were made for a Japanese travel agency with the same name in 1995. Bang & Olufsen, who produced a high-end television with a built-in CD-i device (Beocenter AV5), on the market from 1997-2001. 
Before the actual commercial debut of the CD-i format, some other companies had interest in building players and some made prototypes, but these were never released – this includes Panasonic (who were originally a major backer of the format), Pioneer, JVC, Toshiba, Epson, Ricoh, Fujitsu, Samsung and Yamaha. In addition, Sanyo showed a prototype portable CD-i player in 1992. Hardware specifications TeleCD-i and CD-MATICS Recognizing the growing need among marketers for networked multimedia, Philips partnered in 1992 with Amsterdam-based CDMATICS to develop TeleCD-i (also TeleCD). In this concept, the CD-i player is connected to a network such as the PSTN or the Internet, enabling data communication and rich media presentation. Dutch grocery chain Albert Heijn and mail-order company Neckermann were early adopters and introduced award-winning TeleCD-i applications for their home-shopping and home-delivery services. CDMATICS also developed the special Philips TeleCD-i Assistant and a set of software tools to help the worldwide multimedia industry to develop and implement TeleCD-i. TeleCD-i was the world's first networked multimedia application at the time of its introduction. In 1996, Philips acquired the source code rights from CDMATICS. CD-Online Internet services on the CD-i devices were facilitated by the use of an additional hardware modem and "CD-Online" disc (renamed Web-i in the US), which Philips initially released in Britain in 1995 for $150 US. This service provided the CD-i with full internet access (with a 14.4k modem), including online shopping, email, and support for networked multiplayer gaming on select CD-i games. The service required a CD-i player with DV cartridge, and an "Internet Starter Kit" which initially retailed for £99.99. It was advertised as bringing "full Internet access to the living room on TV screens". 
Andy Stout, a writer for the official CD-i magazine, explained CD-Online: The CD-Online service went live in the UK on October 25, 1995 and in March 1996 in the Netherlands (for 399 guilders), and was also released in Belgium. The system was reportedly scheduled to launch in the US as "Web-i" in August 1996. The domain cd-online.co.uk, which was used for the British CD-Online service, went offline in 2000. The Dutch domain cd-online.nl stopped updating too but remained online until 2007. Only one game was released that supported CD-Online, the first-person shooter RAM Raid. Players from any country in the world could compete against each other as long as they had a copy of the game. Reception and market performance Philips had invested heavily in the CD-i format and system, and it was often compared with the Commodore CDTV as a single combination of computer, CD, and television. The product was touted as a single machine for home entertainment connected to a standard TV and controlled by a regular remote control – although the format was noted to have various non-entertainment business opportunities too, such as travel and tourism or the military. In 1990, Peugeot used CD-i for its point-of-sale application promoting its then-new 605 automobile, and it was also at the time used by fellow car manufacturer Renault for staff training programmes, and in Japan by the Ministry of Trade and Industry for an exhibition there. A Philips executive, Gaston Bastiaens, was quoted in 1990 as saying "CD-I will be 'the medium' for entertainment, education and information in the 90's." Sony introduced its three portable CD-i players in June 1990, pitching them as "picture books with sound". The ambitious CD-i format had initially created much interest after its 1986 announcement, both in the West and in Japan, buoyed by the success of the CD. However, after repeated delays (hardware was first intended to be ready and shipped by Christmas 1987) interest slowly faded. 
Electronic Arts, for instance, was enthusiastic about CD-i and formed a division for the development of video game titles on the format, but it was halted with the intention of resuming when CD-i players reached the market. The company never resumed CD-i software development when the system was released. The delay also gave more attention to the hyped Digital Video Interactive (DVI) in 1987, which demonstrated full-screen, full-motion video (FMV) using a compression chip on an IBM PC/AT computer. Amid the attention around its potential rival DVI, Philips and Sony decided to find a way to add full-screen FMV abilities to the CD-i standard, causing further delay. Meanwhile, the Microsoft-backed CD-ROM standard was improving and solved certain video playback issues that were present on the CD-i – CD-ROM format products were already on the market by 1987. In the end, the CD-ROM standard benefited from the CD-i and DVI mishaps, and by the time CD-i players for consumers were released in 1991, CD-ROM had already become known and established. Ron Gilbert commented in early 1990: "The CD-I specifications look great, but where are the machines? If they'd come out four years ago, they'd have been hot, but now they're behind the times." Another reason that led to fading interest pre-launch was the fact that CD-i players would not launch with FMV but would instead receive it later through a purchasable add-on cartridge (it was originally expected to come built-in) – as well as the obsolete Motorola processor, the OS-9 software, and a launch price considered high. Although Philips had aggressively promoted their CD-i products in the U.S., by August 1993 Computer Gaming World reported that "skepticism persists about its long-term prospects" compared to other platforms like IBM PC compatibles, Apple Macintosh, and Sega Genesis. 
The magazine stated in January 1994 that despite Philips' new emphasis on games "CD-i is still not the answer for hardcore gamers", but the console "may yet surprise us all in the future". It recommended the CD-i with video cartridge for those needing to buy a new console as "The price is right and there is more software to support it", but the 3DO Interactive Multiplayer was probably better for those who could wait a few months. The August 1994 issue of Electronic Entertainment noted that neither the CD-i nor the Atari Jaguar had an "effective, let alone innovative" game library to compete against the then newly released Sega CD. After being outsold in the market by cheaper multimedia PCs, in 1994 Philips attempted to emphasize the CD-i as a game-playing machine, but this did not help the situation. An early 1995 review of the system in GamePro stated that "inconsistent game quality puts the CD-i at a disadvantage against other high-powered game producers." A late 1995 review in Next Generation criticized both Philips's approach to marketing the CD-i and the hardware itself ("The unit excels at practically nothing except FMV, and then only with the addition of a $200 digital video cartridge"). The magazine noted that while Philips had not yet officially discontinued the CD-i, it was dead for all intents and purposes, citing as evidence the fact that though Philips had a large booth at the 1995 Electronic Entertainment Expo, there was no CD-i hardware or software on display. Next Generation scored the console one out of five stars. Another trouble for Philips in 1995 was the formation of HDCD, which promised better video quality than Video CD's (VCD) MPEG-1 compression method – Philips had heavily promoted the CD-i's VCD playback capabilities. Philips Media consolidated its CD-i activities from its Los Angeles office in March 1996. It was reported in October 1996 that Philips was ready to "call it quits" in the American market. 
Sales In October 1994, Philips claimed an installed base of one million units for the CD-i worldwide. In 1996, The Wall Street Journal reported that total US sales amounted to 400,000 units. In the Netherlands, about 60,000 CD-i players were sold by the end of December 1994. Legacy Although extensively marketed by Philips, notably via infomercials, consumer interest in CD-i titles remained low. By 1994, sales of CD-i systems had begun to slow, and in 1998 the product line was dropped. Plans for a second-generation CD-i system were certainly present, and Argonaut Software was even designated to design chip sets for the successor to the CD-i. However, then-president Cor Boonstra saw no interest in the media area for Philips, and so Philips sold everything, including the media subsidiary PolyGram. The Dutch half of Philips Media was sold to Softmachine, which released The Lost Ride as the last product for the CD-i. Philips then also sold the French half of its gaming subsidiary, Philips Media BV, to French publisher Infogrames in 1997, along with the entire CD-i library. A CD-ROM add-on for the Super NES, which was announced for development with Nintendo in 1991, was never made. The last CD-i game was made by Infogrames, who released Solar Crusade in 1999. After its discontinuation, the CD-i was overwhelmingly panned by critics, who blasted its graphics, games, and controls. Microsoft CEO Bill Gates admitted that initially he "was worried" about the CD-i due to Philips' heavy support for the device and its two-pronged attack on both the games console and PC markets, but that in retrospect, "It was a device that kind of basically got caught in the middle. It was a terrible game machine, and it was a terrible PC." The CD-i's various controllers were ranked the fifth-worst video game controller by IGN editor Craig Harris. PC World ranked it as fourth on their list of "The 10 Worst Video Game Systems of All Time". 
Gamepro.com listed it as number four on their list of The 10 Worst-Selling Consoles of All Time. In 2008, CNET listed the system on its list of the worst game consoles ever. In 2007, GameTrailers ranked the Philips CD-i as the fourth-worst console of all time in its Top 10 Worst Console lineup. In later years, the CD-i has become (infamously) best known for its video games, particularly those from the Nintendo-licensed The Legend of Zelda series, considered by many to be of poor quality. Games that were most heavily criticized include Hotel Mario, Link: The Faces of Evil, Zelda: The Wand of Gamelon, and Zelda's Adventure. EGM's Seanbaby rated The Wand of Gamelon as one of the worst video games of all time. However, Burn:Cycle was positively received by critics and has often been held up as the standout title for the CD-i. See also CD-i Ready High Sierra Format 3DO Interactive Multiplayer MiniDisc CD-ROM Video CD Super NES CD-ROM Digital Video Interactive Commodore CDTV Pioneer LaserActive Sega CD FM Towns Tandy Video Information System NEC TurboDuo References External links Official Philips CD-I FAQ CD-i history
of Name That Tune) had Charlie O'Donnell as announcer. The Netherlands also released its version of Lingo on the CD-i in 1994. In 1993, American musician Todd Rundgren created the first music-only fully interactive CD, No World Order, for the CD-i. This application allows the user to completely arrange the whole album in their own personal way with over 15,000 points of customization. Dutch eurodance duo 2 Unlimited released a CD-i compilation album in 1994 called "Beyond Limits", which contains standard CD tracks as well as CD-i-exclusive media on the disc. CD-i has a series of learning games ("edutainment") targeted at children from infancy to adolescence. Those intended for a younger audience included Busytown, The Berenstain Bears and various others, which usually had vivid cartoon-like settings accompanied by music and logic puzzles. 
By mid-1996 the U.S. market for CD-i software had dried up and Philips had given up on releasing titles there, but it continued to publish CD-i games in Europe, where the system still held some popularity from a video gaming perspective. With the home market exhausted, Philips tried with some success to position the technology as a solution for kiosk applications and industrial multimedia. Some homebrew developers have released video games on the CD-i format in later years, such as Frog Feast (2005) and Super Quartet (2018). Player models CD-i compatible models were released (as of April 1995) in the U.S., Canada, Benelux, France, Germany, the UK, Japan, Singapore and Hong Kong. It was reported that the format would be released further in Brazil, India and Australia in the "coming months", with plans to also introduce it in China, South Africa, Indonesia and the Philippines. Philips models In addition to consumer models, professional and development players were sold by Philips Interactive Media Systems and their VARs. The first CD-i system was produced by Philips in collaboration with Kyocera in 1988 – the Philips 180/181/182 modular system. Philips marketed several CD-i player models as shown below. The CD-i player 100 series, which consisted of the three-unit 180/181/182 professional system, first demonstrated at the CD-ROM Conference in March 1988. The CD-i player 200 series, which includes the 205, 210, and 220 models. Models in the 200 series were designed for general consumption, and were available at major home electronics outlets around the world. The Philips CDI 910 is the American version of the CDI 205, the most basic model in the series and the first Philips CD-i model, released in December 1991. Originally priced at about $799, within a year's time the price dropped to $599. The CD-i player 300 series, which includes the 310, 350, 360, and 370 models. The 300 series consists of portable players designed for the professional market and not marketed to home consumers. 
A popular use was multimedia sales presentations such as those used by pharmaceutical companies to provide product information to physicians, as the devices could be easily transported by sales representatives. The CD-i player 400 series, which includes the 450, 470, 490 models. The 400 models are slimmed-down units aimed at console and educational markets. The CDI 450 player, for instance, is a budget model designed to compete with game consoles. In this version, an infrared remote controller is not standard but optional, as this model is more gaming-oriented. This series was introduced at CES Chicago in June 1994 and the 450 player retailed at ƒ 799 in the Netherlands. The CD-i player 500 series, which includes the 550 model, which was essentially the same as the 450 with an installed digital video cartridge. It was introduced at CES Chicago in June 1994. The CD-i player 600 series, which includes the 601, 602, 604, 605, 615, 660, and 670 models. The 600 series is designed for professional applications and software development. Units in this line generally include support for floppy disk drives, keyboards and other computer peripherals. Some models can also be connected to an emulator and have software testing and debugging features. The CD-I player 700 series, which consists of the 740 model, the most advanced player and featuring an RS-232 port. It was only released in limited quantities. There also exist a number of hard-to-categorize models, such as the FW380i, an integrated mini-stereo and CD-i player; the 21TCDi30, a television with a built-in CD-i device; the CD-i/PC 2.0, a CD-i module with an ISA interface for IBM-compatible 486 PCs. Gallery Other manufacturers In addition to Philips, several manufacturers produced CD-i players some of which were still on sale years after Philips itself abandoned the format. Manufacturers included: Magnavox (a Philips subsidiary) made rebranded players for the American market. GoldStar / LG Electronics, the LG GDI-700 (c. 
1997) was a professional player with a Motorola 68341 processor, faster than the Philips model. GoldStar had a portable player, including another small one without an LCD screen. Digital Video Systems Memorex Grundig Kyocera made the portable Pro 1000S model Maspro Denkoh released a GPS car navigation system with a built-in CD-i player, released in Japan in 1992. Saab Electric Sony produced two models branded Intelligent Discman, a hybrid home/portable CD-i player released in 1990-1991 for professional use only. NBS International Interactive Media (I2m) released in 1995 a CD-i PCI expansion card for 486 PCs, Pentium PCs, 68k-based Macintosh and PowerPC-based Macintosh computers Vobis Highscreen Manna Space branded CD-i models (based on Magnavox's or GoldStar's version of Philips CDI 450) were made for a Japanese travel agency with the same name in 1995. Bang & Olufsen, who produced a high-end television with a built-in CD-i device (Beocenter AV5) on the market from 1997-2001. Before the actual commercial debut of the CD-i format, some other companies had interest in building players and some made prototypes, but were never released – this includes Panasonic (who were originally a major backer of the format), Pioneer, JVC, Toshiba, Epson, Ricoh, Fujitsu, Samsung and Yamaha. In addition, Sanyo showed a prototype portable CD-i player in 1992. Hardware specifications TeleCD-i and CD-MATICS Recognizing the growing need among marketers for networked multimedia, Philips partnered in 1992 with Amsterdam-based CDMATICS to develop TeleCD-i (also TeleCD). In this concept, the CD-i player is connected to a network such as PSTN or Internet, enabling data-communication and rich media presentation. Dutch grocery chain Albert Heijn and mail-order company Neckermann were early adopters and introduced award-winning TeleCD-i applications for their home-shopping and home-delivery services. 
CDMATICS also developed the special Philips TeleCD-i Assistant and a set of software tools to help the worldwide multimedia industry to develop and implement TeleCD-i. TeleCD-i is the world's first networked multimedia application at the time of its introduction. In 1996, Philips acquired source code rights from CDMATICS. CD-Online Internet services on the CD-i devices were facilitated by the use of an additional hardware modem and "CD-Online" disc (renamed Web-i in the US), which Philips initially released in Britain in 1995 for $150 US. This service provided the CD-i with full internet access (with a 14.4k modem), including online shopping, email, and support for networked multiplayer gaming on select CD-i games. The service required a CD-i player with DV cartridge, and an "Internet Starter Kit" which initially retailed for £99.99. It was advertised as bringing "full Internet access to the living room on TV screens". Andy Stout, a writer for the official CD-i magazine, explained CD-Online: The CD-Online service went live in the UK on October 25, 1995 and in March 1996 in the Netherlands (for 399 guilders), and also released in Belgium. The system was reportedly scheduled to launch in the US as "Web-i" in August 1996. The domain cd-online.co.uk, which was used for the British CD-Online service, went offline in 2000. The Dutch domain cd-online.nl stopped updating too but remained online until 2007. Only one game was released that supported CD-Online, the first-person shooter game RAM Raid. Players from any country in the world could compete against each other as long as they had a copy of the game. Reception and market performance Philips had invested heavily in the CD-i format and system, and it was often compared with the Commodore CDTV as a single combination of computer, CD, and television. 
The product was touted as a single machine for home entertainment connected to a standard TV and controlled by a regular remote control – although the format was noted to have various non-entertainment business opportunities too, such as travel and tourism or the military. In 1990, Peugeot used CD-i for its point of sale application promoting its then-new 605 automobile, and it was also at the time used by fellow car manufacturer Renault for staff training programmes, and in Japan by the Ministry of Trade and Industry for an exhibition there. A Philips executive, Gaston Bastiaens, quoted in 1990 "CD-I will be 'the medium' for entertainment, education and information in the 90's.". Sony introduced its three portable CD-i players in June 1990, pitching them as "picture books with sound". The ambitious CD-i format had initially created much interest after its 1986 announcement, both in the west and in Japan, buoyed by the success of the CD. However, after repeated delays (hardware were first intended to be ready and shipped by Christmas 1987) interest was slowly lost. Electronic Arts for instance was enthusiastic about CD-i and formed a division for the development of video game titles on the format, but it was eventually halted with the intention of resuming when CD-i players would reach the market. The company eventually never resumed CD-i software development when it was released. The delay also gave more attention to the hyped Digital Video Interactive (DVI) in 1987, which demonstrated full screen, full motion video (FMV) using a compression chip on an IBM PC/AT computer. Amid the attention around its potential rival DVI, Philips and Sony decided to find a way to add full screen FMV abilities to the CD-i standard, causing further delay. Meanwhile, the Microsoft-backed CD-ROM standard was improving and solved certain video playback issues that were present on the CD-i – CD-ROM format products were already on the market by 1987. 
In the end, the CD-ROM standard benefited from 
moth (Biston betularia) is a temperate species of night-flying moth. It is mostly found in the northern hemisphere, in Asia, Europe and North America. Peppered moth evolution is an example of population genetics and natural selection. The caterpillars of the peppered moth not only mimic the form but also the colour of a twig. Recent research indicates that the caterpillars can sense the twig's colour with their skin and match their body colour to the background to protect themselves from predators. Description The wingspan ranges from 45 mm to 62 mm (median 55 mm). It is relatively stout-bodied, with forewings relatively narrow-elongate. The wings are white, "peppered" with black, and with more-or-less distinct cross lines, also black. These transverse wing lines and "peppered" maculation (spotting) can also, in rare instances, be gray or brown; in particularly rare cases, the spotting pattern is a combination of brown and black/gray. The black speckling varies in amount: in some examples it is almost absent, whilst in others it is so dense that the wings appear to be black sprinkled with white. The antennae of males are strongly bipectinate. Prout (1912–16) gives an account of the forms and congeners. Distribution Biston betularia is found in China (Heilongjiang, Jilin, Inner Mongolia, Beijing, Hebei, Shanxi, Shandong, Henan, Shaanxi, Ningxia, Gansu, Qinghai, Xinjiang, Fujian, Sichuan, Yunnan, Tibet), Russia, Mongolia, Japan, North Korea, South Korea, Nepal, Kazakhstan, Kyrgyzstan, Turkmenistan, Georgia, Azerbaijan, Armenia, Europe and North America. Ecology and life cycle In Great Britain and Ireland, the peppered moth is univoltine (i.e., it has one generation per year), whilst in south-eastern North America it is bivoltine (two generations per year). The lepidopteran life cycle consists of four stages: ova (eggs), several larval instars (caterpillars), pupae, which overwinter in the soil, and imagines (adults). 
During the day, the moths typically rest on trees, where they are preyed on by birds. The caterpillar is a twig mimic, varying in colour between green and brown. On a historical note, it was one of the first animals to be identified as being camouflaged with countershading to make it appear flat (shading being the main visual cue that makes things appear solid), in a paper by Edward Bagnall Poulton in 1887. Research indicates that the caterpillars can sense the twig's colour with their skin and match their body colour to the background to protect themselves from predators, an ability to camouflage themselves also found in cephalopods, chameleons and some fish, although this colour change is rather slower in the caterpillars. It goes into the soil late in the season, where it pupates in order to spend the winter. The imagines emerge from the pupae between late May and August, the males slightly before the females (this is common and expected from sexual selection). They emerge late in the day and dry their wings before flying that night. The males fly every night of their lives in search of females, whereas the females only fly on the first night. Thereafter, the females release pheromones to attract males. Since the pheromone is carried by the wind, males tend to travel up the concentration gradient, i.e., toward the source. During flight, they are subject to predation by bats. The males guard the female from other males until she lays the eggs. The female lays about 2,000 pale-green ovoid eggs about 1 mm in length into crevices in bark with her ovipositor. Resting behaviour A mating pair or a lone individual will spend the day hiding from predators, particularly birds. In the case of the former, the male stays with the female to ensure paternity. The best evidence for resting positions is given by data collected by the peppered moth researcher Michael Majerus, and it is given in the accompanying charts. 
These data were originally published in Howlett and Majerus (1987), and an updated version published in Majerus (1998), who concluded that the moths rest in the upper part of the trees. Majerus notes: Creationist critics of the peppered moth have often pointed to a statement made by Clarke et al. (1985): "... In 25 years we have only found two betularia on the tree trunks or walls adjacent to our traps, and none elsewhere". The reason now seems obvious. Few people spend their time looking for moths up in the trees. That is where peppered moths rest by day. 
From their original data, Howlett and Majerus (1987) concluded that peppered moths generally rest in unexposed positions, using three main types of site: firstly, a few inches below a branch-trunk joint on a tree trunk, where the moth is in shadow; secondly, on the underside of branches; and thirdly, on foliate twigs. The above data would appear to support this. Further support for these resting positions comes from experiments watching captive moths take up resting positions, in both males (Mikkola, 1979; 1984) and females (Liebert and Brakefield, 1987). Majerus et al. (2000) have shown that peppered moths are cryptically camouflaged against their backgrounds when they rest in the boughs of trees. It is clear that in human-visible wavelengths, typica are camouflaged against lichens and carbonaria against plain bark. However, birds are capable of seeing ultraviolet light that humans cannot see. Using an ultraviolet-sensitive video camera, Majerus et al. showed that typica reflect ultraviolet light in a speckled fashion and are camouflaged against the crustose lichens common on branches, in both ultraviolet and human-visible wavelengths. However, typica are not as well camouflaged against the foliose lichens common on tree trunks: though they are camouflaged in human-visible wavelengths, foliose lichens do not reflect ultraviolet light, so in ultraviolet the moths stand out. During an experiment in Cambridge over the seven years 2001–2007, Majerus noted the natural resting positions of peppered moths: of the 135 moths examined, over half were on tree branches (mostly on the lower half of the branch), 37% were on tree trunks (mostly on the north side), and only 12.6% were resting on or under twigs. Polymorphism Introduction on forms There are several melanic and non-melanic morphs of the peppered moth. These are controlled genetically. A particular colour morph can be indicated in a standard way by following the species name in the form "morpha morph name". 
The use of "form" in the method of Biston betularia f. formname in detailing these variations is also a widespread practice. These forms are often accidentally elevated to subspecies status when they appear in the literature. Omitting the "f." (forma) or morpha implies that the taxon is a subspecies instead of a form, as in Biston betularia carbonaria instead of Biston betularia f. carbonaria. Rarely, forms have even been elevated to species status, as in Biston carbonaria. Either of these two circumstances might lead to the erroneous belief that speciation was involved in the observed evolution of the peppered moth. This is not the case: individuals of each morph interbreed and produce fertile offspring with individuals of all other morphs; hence there is only one peppered moth species. By contrast, different subspecies of the same species can theoretically interbreed with one another and will produce fully fertile and healthy offspring, but in practice do not, as they live in different regions or reproduce in different seasons. Full-fledged species are either unable to produce fertile and healthy offspring, or do not recognize each other's courtship signals, or both. European breeding experiments have shown that in Biston betularia betularia, the melanism producing morpha carbonaria is controlled by a single locus, with the melanic allele dominant to the non-melanic allele. This situation is, however, somewhat complicated by the presence of three other alleles that produce indistinguishable morphs of morpha medionigra; these show intermediate but incomplete dominance (Majerus, 1998). Form names In continental Europe, there are three morphs: the white morph typica (syn. morpha/f. betularia), the dark melanistic morph carbonaria (syn. doubledayaria), and an intermediate form medionigra. In Britain, the typical white morph is known as typica, the melanic morph is carbonaria, and the intermediate phenotype is named insularia. 
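The single-locus dominance just described can be made concrete with a short sketch. This is an illustrative Mendelian calculation only; the allele symbols (C for the dominant melanic carbonaria allele, c for the non-melanic allele) and the function names are mine, not from the literature.

```python
from itertools import product
from collections import Counter

def phenotype(offspring: tuple) -> str:
    # One copy of the dominant melanic allele (C) is enough to look melanic.
    return "carbonaria (melanic)" if "C" in offspring else "typica (pale)"

def cross(parent1: str, parent2: str) -> Counter:
    """Count offspring phenotypes over all equally likely allele pairings."""
    return Counter(phenotype(pair) for pair in product(parent1, parent2))

# A cross between two heterozygous (Cc) moths gives the classic 3:1 ratio:
print(cross("Cc", "Cc"))
# Counter({'carbonaria (melanic)': 3, 'typica (pale)': 1})
```

Because carbonaria is dominant, a melanic moth can carry a hidden typica allele, which is one reason the pale form persisted in the population even while melanic moths had a survival advantage.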
In North America, the melanic black morph is morpha swettaria. In Biston betularia cognataria, the melanic allele (producing morpha swettaria) is similarly dominant to the non-melanic allele. There are also some intermediate morphs. In Japan, no melanic morphs have been recorded; they are all morpha typica. Evolution The evolution of the peppered moth over the last two hundred years has been studied in detail. At the start of this period, the vast majority of peppered moths had light coloured wing patterns which effectively camouflaged them against the light-coloured trees and lichens upon which they rested. However, due to widespread pollution during the Industrial Revolution in England, many of the lichens died out, and the trees which peppered moths rested on became blackened by soot, causing most of the light-coloured moths, or typica, to die off due to predation. At the same time, the dark-coloured, or melanic, moths, carbonaria, flourished because they could hide on the darkened trees. Since then, with improved environmental standards, light-coloured peppered moths have again become common, and the dramatic change in the peppered moth's population has remained a subject of much interest and study. This has led to the coining of the term "industrial melanism" to refer to the genetic darkening of species in response to pollutants. As a result of the relatively simple and easy-to-understand circumstances of the adaptation, the peppered moth has become a common example used in explaining or demonstrating natural selection to laypeople and classroom students through simulations. The first carbonaria morph was recorded by |
were a line of IBM RS/6000 workstations in September 1993. Many Macintosh application developers used these machines for development of the initial PowerPC ports of their products, as Macintosh-based PowerPC development tools were not ready. The PowerPC 603 (which focused on lowering power usage) and 604 (which focused on high performance) projects were also underway at the same time. In July 1992, the decision was made to scale back the ambition of the initial system software release; instead of attempting to create a completely new kernel, Apple focused on producing a version of System 7 in which portions of the existing Macintosh Toolbox ROM were rewritten to use native PowerPC code instead of emulating a 680x0. This provided a significant performance boost for certain heavily used parts of the operating system, particularly QuickDraw. The first public demonstration of the new Power Macintosh (specifically, a prototype of what would become the Power Macintosh 6100) was at an Apple Pacific sales meeting in Hawaii in October 1992. The demo was a success, and in the following months the product plan expanded to include three models: the entry-level 6100, a mid-range 7100 housed in the Macintosh IIvx's desktop case, and a high-end 8100 based on the Quadra 800's mini-tower case. A fourth project, the Macintosh Processor Upgrade Card, was started in July 1993 with the goal of providing a straightforward upgrade path to owners of Centris- and Quadra-based Macintosh computers. This was especially significant for the Quadra 700, 900 and 950, which were not going to receive full logic board replacements. Computers upgraded in this fashion received new names such as "Power Macintosh Q650" and "Power Macintosh 900". Release and reception (1994-1995) The original plan was to release the first Power Macintosh machine on January 24, 1994, exactly ten years after the release of the first Macintosh. 
Ian Diery, who was EVP and general manager of the Personal Computer Division at the time, moved the release date back to March 14 in order to give manufacturing enough time to build enough machines to fill the sales channels, and to ensure that the Macintosh Processor Upgrade Card would be available at the same time. This was a departure from prior practice at Apple; they had typically released upgrade packages months after the introduction of new Macintoshes. The Power Macintosh was formally introduced at the Lincoln Center for the Performing Arts in Manhattan on March 14. Pre-orders for the new Power Macintosh models were brisk, with an announced 150,000 machines already having been sold by that date. MacWorld's review of the 6100/60 noted that "Not only has Apple finally regained the performance lead it lost about eight years ago when PCs appeared using Intel's 80386 CPU, but it has pushed far ahead." Performance of 680x0 software is slower due to the emulation layer, but MacWorld's benchmarks showed noticeably faster CPU, disk, video and floating point performance than the Quadra 610 it replaced. By January 1995, Apple had sold 1 million Power Macintosh systems. Speed-bumped versions of the Power Macintosh line were introduced at the beginning of 1995, followed in April by the first PowerPC 603 models: an all-in-one model called the Power Macintosh 5200 LC and a replacement for the Quadra 630 called the Power Macintosh 6200. Performa variants of these machines were sold as well, continuing the practice of re-branding other Macintosh models for sale in department stores and big-box electronics retailers. While the 5200 LC was well-received by critics for its design, performance, and cost, both it and the 6200 suffered from stability issues (and in the case of the 5200, display issues as well) that could only be solved by bringing the machine to an Apple dealer for replacement parts. 
By mid-1995, the burgeoning Power Macintosh line had all but completely supplanted every prior Macintosh line, with only the high-end Quadra 950 and two low-cost education models (the all-in-one Macintosh LC 580 and desktop LC 630) remaining in production. The competitive marketplace for "accelerator cards" that had existed for earlier Macintosh systems largely disappeared due to the comparatively low price of Apple's Macintosh Processor Upgrade Card (US$600). DayStar Digital sold upgrade cards for the IIci and various Quadra models, and full motherboard replacements were available from Apple as well. Macintosh clones from companies like DayStar Digital and Power Computing were also coming to the market at this time, undercutting Apple's prices. Transition to standardized hardware (1995-1999) When the Power Macintosh was introduced, it included the same internal and external expansion connections as other Macintosh models, all of which (save for audio input and output) were either wholly proprietary to, or largely exclusive to Apple computers. Over the next five years, Apple replaced all these ports with industry-standard connectors. The first generation of Power Macintoshes had shipped with NuBus, but by the end of 1993 it was becoming clear that Intel's PCI bus was going to be the widely adopted future of internal expansion. Apple's position as a relatively small player in the larger personal computer market meant that few device manufacturers invested in creating both NuBus- and PCI-compatible versions of their cards. The first PCI-based system was the range-topping Power Macintosh 9500, introduced in May 1995. This was followed shortly afterwards by the introduction of the "Power Surge" line of second-generation Power Macintosh systems – the Power Macintosh 7200, 7500 and 8500. The 8500 and 9500 were built around the new PowerPC 604, offering speeds starting at 120 MHz. 
InfoWorld's review of the 8500 showed a speed improvement in their "business applications suite" benchmark from 10 minutes with the 8100/100 to 7:37 for the 8500/120. They also noted that the 8500 runs an average of 24 to 44 percent faster than a similarly-clocked Intel Pentium chip, rising to double the speed on graphics and publishing tasks. The transition to PCI continued into 1996, with the introduction of the all-in-one 5400, desktop 6300/160 (usually sold as a Performa 6360), and mini-tower 6400 models. The success of the Macintosh clone market also prompted Apple to produce its own inexpensive machine using parts and production techniques that were common in both the clone market and the Wintel desktop market at the time. The Power Macintosh 4400 (sold as a 7220 in Asia and Australia) employed bent sheet metal instead of plastic for its case internals, and included a standard ATX power supply. Alongside the transition to PCI, Apple began a gradual transition away from SCSI hard disks to IDE as a cost-saving measure, both for themselves and for users who wanted to upgrade their hard drives. The low-end 5200 and 6200 were the first to adopt IDE internal drives, though Apple's proprietary 25-pin external SCSI connector remained. The beige Power Macintosh G3 models were the last to include SCSI drives as standard and the last Macintosh to include the external SCSI connector. When the Power Macintosh G3 (Blue and White) was introduced in early 1999, the port was replaced by two FireWire 400 ports. 
The Blue and White G3 was also the last Macintosh to include Apple Desktop Bus ports, a proprietary technology created by Steve Wozniak to connect keyboards, mice and software protection dongles such as those from Avid Technology. Two USB ports were also included, making this the only Power Macintosh to include both ADB and USB. Another port that was retired during this time was the Apple Attachment Unit Interface. This was a proprietary version of the industry-standard Attachment Unit Interface connector for 10BASE5 Ethernet that Apple had created to avoid confusion with the 15-pin connector Apple used for connecting external displays. The AAUI port required a costly external transceiver to connect to a network. By the early 1990s, the networking industry was coalescing around the 10BASE-T connector, leading Apple to include that port alongside AAUI from mid-1995, starting with the Power Macintosh 9500; the Power Macintosh G3 dropped the AAUI port entirely. The Power Mac G4 (AGP Graphics) was released in the second half of 1999; it was the first Power Macintosh to include only industry-standard internal and external expansion. For some years afterwards, a number of third parties created dongles that provided backwards compatibility for users of newer Power Mac systems with old hardware, including companies such as Griffin Technology, MacAlly Peripherals, Rose Electronics and many others. In some cases, these companies produced adapters that matched the aesthetic design of the Power Mac. Industrial design and the Megahertz Myth (1999-2002) Shortly after Steve Jobs' return to Apple in 1997, Jony Ive was appointed senior vice president of industrial design. 
Building on the critical and commercial success of the iMac, Ive and his team created an entirely new case design for the Power Macintosh G3, combining many of the aesthetic principles of the iMac (curves, translucent plastics, use of color) with the ease-of-access characteristics of the company's popular "Outrigger" Macintosh models from previous years. The result was the Power Macintosh G3 (Blue and White), a machine that received considerable plaudits from reviewers, including PC Magazine's Technical Excellence Award for 1999. "The Power Mac provides the fastest access to the insides of a computer we've ever seen," they wrote. "Just lift a handle and a hinged door reveals everything inside." This case design, code-named "El Capitan", was retained through the entire lifetime of the Power Mac G4. The introduction of the Blue and White G3 mini-tower also marked the end of the desktop and all-in-one Power Macintosh case designs, the latter being replaced by the iMac. A second model called the Power Mac G4 Cube was introduced in 2000, which fitted the specifications of a mid-range Power Mac G4 into a cube less than 9" in each axis. This model was on sale for about a year before being discontinued and was not considered a sales success (150,000 units were sold, about one-third of Apple's projections), but the distinctive design of both the computer and its accompanying Harman Kardon speakers prompted the Museum of Modern Art in New York City to add them to its collection. The PowerPC chips in the G3 and G4 became a central part of Apple's branding and marketing for the Power Macintosh. For example, the Blue and White G3 features the letters "G3" on the side that are fully one-third the height of the entire case, a significant departure from the small labels typically used on prior Macintosh computers. 
And when the Power Mac G4 was introduced, print ads included pictures of the G4 chip and mentioned its AltiVec instruction set by its own marketing name, "Velocity Engine". A related element of Apple's marketing strategy, especially after mid-2001, was to highlight what it described as the "Megahertz myth", challenging the belief that a processor's clock speed is directly correlated with performance. This had become important with the introduction of Intel's Pentium 4, which featured significantly higher clock speeds than competing chips from Sun, IBM, and AMD, but without a corresponding performance benefit. The company's public presentations (Stevenotes in particular) often featured lengthy segments pitting a high-powered Compaq or Dell computer against the Power Macintosh in a series of benchmarks and scripted tasks, usually in Adobe Photoshop. These presentations often showed the Power Macintosh besting Intel's Pentium chips by margins significantly exceeding 50%, but independent benchmarks did not bear this out. InfoWorld reviewer Jennifer Plonka reported that the 400 MHz G3 was 11% slower than a comparably-specced Pentium II-450 in an Office applications suite test, though Photoshop 5.0 was faster by 26%. And in 2003, Maximum PC ran a variety of gaming, Photoshop and LightWave 3D benchmarks, and reported that the dual 1.25 GHz G4 system was about half the speed of a dual-processor Intel Xeon Prestonia 2.8 GHz system. A related criticism leveled at Power Mac systems from this time, particularly the G4 Mirrored Drive Doors, was increased fan noise compared to older systems. The Power Mac G5 and the end of Power (2003-2006) By the time the Power Mac G5 was unveiled at Apple's Worldwide Developers Conference in July 2003, Apple's desktop range had fallen significantly behind competing computers in performance. The G5 closed much of this gap by moving to the PowerPC 970 processor, with clock speeds up to 2.0 GHz and a full 64-bit architecture. 
It also introduced a significantly revised enclosure design, replacing the use of plastics with anodized aluminum alloy. Reviews were generally positive. InfoWorld described the G5 as "Apple's best work yet", and said it "delivers on the present need for rapid computing, deep multitasking, and responsive user interfaces — as well as the future need for mainstream computers that rapidly process and analyze massive data sets." PC Magazine again gave the Power Mac G5 its Award for Technical Excellence for 2003. However, the G5's heavy weight (10 pounds more than the previous year's Quicksilver Power Mac G4), limited internal expansion options, ground-loop issues, and noise from the single-processor models' power supply units resulted in significant criticism of the product. Apple also continued to make unsubstantiated performance claims about the new Power Mac. This resulted in the Advertising Standards Authority for the United Kingdom banning Apple from using the phrase "the world's fastest, most powerful personal computer" to describe the Power Mac G5, after independent tests carried out by the Broadcast Advertising Clearance Centre determined the claim to be false. Another claim, made by Steve Jobs at the 2003 Worldwide Developers Conference, was that the company would be selling a 3 GHz G5 by mid-2004; this never happened. Three generations of Power Mac G5 were released before it was discontinued during the Mac transition to Intel processors. The announcement of the transition came in mid-2005, but the third generation of G5 systems was still introduced towards the end of 2005. Most notable in this generation was the introduction of a quad-core 2.5 GHz system. Not only was this the first Apple computer with four processing cores, it was the first to incorporate PCI Express instead of PCI-X for internal expansion. 
It also required an IEC 60320 C19 power connector, more common on rackmounted server hardware, instead of the industry-standard C13 connector used with personal computers. The official end to the Power Macintosh line came at the 2006 Worldwide Developers Conference, where Phil Schiller introduced its replacement, the Mac Pro. The G5's enclosure design was retained for the Mac Pro and continued to be used for seven more years, making it among the longest-lived designs in Apple's history. Models The Power Macintosh models can be broadly classified into two categories, depending on whether they were released before or after Apple introduced its "four quadrant" product strategy in 1998. Before the introduction of the Power Macintosh G3 (Blue and White) in 1999, Apple had shipped Power Macintosh-labelled machines in nine different form factors, some of which were carry-overs from pre-PowerPC product lines, such as the Quadra/Centris 610 and the IIvx. This was reduced to one model in the new product strategy, with the exception of the Power Mac G4 Cube in 2000 and 2001. 1994-1997 Apple named Power Macintosh models from this period after the first pre-PowerPC model of Macintosh to use a particular form factor, followed by a slash and the speed of the CPU. For example, the Power Macintosh 6300/120 uses the Quadra 630's form factor and has a 120 MHz CPU. Machines with "AV" in their name denote variants that include extended audio-video capabilities. Machines with "PC Compatible" in their name include a separate card with an x86-compatible CPU; these models are therefore capable of running MS-DOS and Microsoft Windows applications, typically Windows 3.1. Machines with "MP" in their name include two CPUs. These early models had two distinct generations. 
The first generation uses the PowerPC 601 and 603 processors and the old NuBus expansion slots, while the second generation uses the faster 603e, 604 and 604e chips as well as industry-standard PCI expansion slots. The second generation also makes use of Open Firmware, allowing these machines to more easily boot alternate operating systems (including OS X via XPostFacto), though various hacks were still necessary. Power Macintosh 4400 The Power Macintosh 4400 uses a desktop case suitable for horizontal placement with a monitor on top. Power Macintosh 4400/160, 200, 200 (PC Compatible) (Marketed as the Power Macintosh 7220 in some regions) Power Macintosh 5200 The Power Macintosh 5200 is an all-in-one form factor with specifications and internal designs similar to the Quadra 630. Collectively these machines are sometimes referred to as the "Power Macintosh/Performa 5000 series". Power Macintosh 5200/75 LC Power Macintosh 5260/100, 120 Power Macintosh 5300/100 LC Power Macintosh 5400/120, 180, 200 Power Macintosh 5500/225, 250 Centris 610 The Centris 610 form factor is a low-profile "pizza-box" design intended to be placed on a desktop with a monitor on top. Power Macintosh 6100/60, 60AV, 66, 66AV, 66 (DOS Compatible) Quadra 630 The Quadra 630 form factor is a horizontally-oriented design suitable for placing a monitor on top. Power Macintosh 6200/75 Power Macintosh 6300/120, 160 Performa 6400 The Performa 6400 form factor is a mini-tower design, suitable for being placed beside a monitor. Power Macintosh 6400/180, 200 Power Macintosh 6500/225, 250, 275, 300 Power Macintosh 7100 The IIvx form factor is a horizontally-oriented desktop design suitable for placing a monitor on top. Power Macintosh 7100/66, 66AV, 80, 80AV Power Macintosh 7500 The Power Macintosh 7500 form factor is a horizontally-oriented desktop design suitable for placing a monitor on top. 
Power Macintosh 7200/75, 90, 120 (PC), 200 (PC) Power Macintosh 7300/166, 180 (PC), 200 Power Macintosh 7500/100 Power Macintosh 7600/120, 132, 200 Quadra 800 The Quadra 800 form factor is a mini-tower design. Power Macintosh 8100/80, 80AV, 100, 100AV, 110, 110AV Power Macintosh 8115/110 Power Macintosh 8200/100, 120 Power Macintosh 8500/120, 132, 150, 180 Power Macintosh 8515/120 Power Macintosh 9600 The Power Macintosh 9600 form factor is a mini-tower design. Power Macintosh 8600/200, 250, 300 Power Macintosh 9500/120, 132, 150, 180MP, 200 Power Macintosh 9515/132 Power Macintosh 9600/200, 200MP, 233, 300, 350 1997-2006 Starting with the Power Macintosh G3, Apple changed its product naming to include the generation of PowerPC CPU, with the name of the form factor or a key feature afterwards in brackets. The all-in-one models would eventually be spun off into the iMac line, whilst the compact form factor models would be spun off into the Mac Mini. Power Macintosh G3 Power Macintosh G3 Desktop Power Macintosh G3 Mini Tower Power Macintosh G3 All-In-One Power Macintosh G3 Blue and White Power Mac G4 Power Mac G4 PCI Graphics Power Mac G4 AGP Graphics Power Mac G4 Gigabit Ethernet Power Mac G4 Digital Audio Power Mac G4 Quicksilver Power Mac G4 Quicksilver 2002 Power Mac G4 Mirrored Drive Doors Power Mac G4 Mirrored Drive Doors FW800 Power Mac G4 Mirrored Drive Doors 2003 Power Mac G4 Cube Power Mac G5 The Power Mac G5's name was changed to incorporate the time period in which the model was released. Power Mac G5 (original) Power Mac G5 June 2004 Power Mac G5 Late 2004 Power Mac G5 Early 2005 Power Mac G5 Late 2005 Naming The Power Mac brand name was used for Apple's high-end tower style computers, targeted primarily at businesses and creative professionals, in differentiation to their more compact "iMac" line (intended for home use) and the "eMac" line (for the 
islands, and a symmetry around the midpoint of dominant Cs and As on one side and Gs and Ts on the other. A motif with the consensus sequence TCTCGCGAGA, also called the CGCG element, was recently shown to drive bidirectional transcription by RNA polymerase II in CpG islands. CCAAT boxes are common, as they are in many promoters that lack TATA boxes. In addition, the motifs NRF-1, GABPA, YY1, and ACTACAnnTCCC are represented in bidirectional promoters at significantly higher rates than in unidirectional promoters. The absence of TATA boxes in most bidirectional promoters suggests that TATA boxes play a role in determining the directionality of promoters, but the existence of bidirectional promoters that do possess TATA boxes, and of unidirectional promoters without them, indicates that TATA boxes cannot be the only factor. Although the term "bidirectional promoter" refers specifically to promoter regions of mRNA-encoding genes, luciferase assays have shown that over half of human genes do not have a strong directional bias. Research suggests that non-coding RNAs are frequently associated with the promoter regions of mRNA-encoding genes. It has been hypothesized that the recruitment and initiation of RNA polymerase II usually begins bidirectionally, but divergent transcription is halted at a checkpoint later during elongation. Possible mechanisms behind this regulation include sequences in the promoter region, chromatin modification, and the spatial orientation of the DNA. Subgenomic A subgenomic promoter is a promoter added to a virus for a specific heterologous gene, resulting in the formation of mRNA for that gene alone. Many positive-sense RNA viruses produce these subgenomic mRNAs (sgRNA) as one of the common infection techniques used by these viruses and generally transcribe late viral genes. Subgenomic promoters range from 24 nucleotides (Sindbis virus) to over 100 nucleotides (Beet necrotic yellow vein virus) and are usually found upstream of the transcription start site. 
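The midpoint symmetry noted above can be checked directly for the CGCG element: TCTCGCGAGA is its own reverse complement, so it reads identically on both strands, which fits a motif that drives transcription in both directions. A minimal sketch (function name illustrative):

```python
# Check whether the CGCG element (consensus TCTCGCGAGA) is a perfect
# palindrome, i.e. identical to its own reverse complement -- a property
# consistent with a motif active in both transcriptional directions.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

cgcg_element = "TCTCGCGAGA"
print(reverse_complement(cgcg_element))                   # TCTCGCGAGA
print(cgcg_element == reverse_complement(cgcg_element))   # True
```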
Detection A wide variety of algorithms have been developed to facilitate detection of promoters in genomic sequence, and promoter prediction is a common element of many gene prediction methods. A promoter region is located before the -35 and -10 consensus sequences. The closer the promoter region is to the consensus sequences, the more often transcription of that gene will take place. There is not a set pattern for promoter regions as there is for consensus sequences. Evolutionary change Changes in promoter sequences are critical in evolution, as indicated by the relatively stable number of genes in many lineages. For instance, most vertebrates have roughly the same number of protein-coding genes (about 20,000), which are often highly conserved in sequence; hence much of evolutionary change must come from changes in gene expression. De novo origin of promoters Given the short sequences of most promoter elements, promoters can rapidly evolve from random sequences. For instance, in E. coli, ~60% of random sequences can evolve expression levels comparable to the wild-type lac promoter with only one mutation, and ~10% of random sequences can serve as active promoters even without evolution. Binding The initiation of transcription is a multistep sequential process that involves several mechanisms: promoter location, initial reversible binding of RNA polymerase, conformational changes in RNA polymerase, conformational changes in DNA, binding of nucleoside triphosphate (NTP) to the functional RNA polymerase-promoter complex, and nonproductive and productive initiation of RNA synthesis. The promoter binding process is crucial to understanding the process of gene expression. Location Although RNA polymerase holoenzyme shows high affinity to non-specific sites of the DNA, this characteristic does not by itself explain the process of promoter location. 
This process of promoter location has been attributed to the structure of the holoenzyme-DNA and sigma 4-DNA complexes. Diseases associated with aberrant function Most diseases are heterogeneous in cause, meaning that one "disease" is often many different diseases at the molecular level, though the symptoms exhibited and response to treatment may be identical. How diseases of different molecular origin respond to treatments is partially addressed in the discipline of pharmacogenomics. Not listed here are the many kinds of cancers involving aberrant transcriptional regulation owing to the creation of chimeric genes through pathological chromosomal translocation. Importantly, intervention in the number or structure of promoter-bound proteins is one key to treating a disease without affecting expression of unrelated genes sharing elements with the target gene. Some genes whose change is not desirable are capable of influencing the potential of a cell to become cancerous. CpG islands in promoters In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island. CpG islands are generally 200 to 2000 base pairs long, have a C:G base pair content >50%, and contain frequent CpG sites, where a cytosine nucleotide is immediately followed by a guanine nucleotide in the linear 5' → 3' sequence of bases. Distal promoters also frequently contain CpG islands, such as the promoter of the DNA repair gene ERCC1, where the CpG island-containing promoter is located about 5,400 nucleotides upstream of the coding region of the ERCC1 gene. CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs. Methylation of | or can have a function in and of itself, such as tRNA or rRNA. Promoters are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand). 
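The CpG island criteria above (length, C:G content, frequent CpG dinucleotides) translate into the classic computational test; the observed/expected CpG ratio cutoff of 0.6 used below is the conventional Gardiner-Garden and Frommer threshold, an assumption not stated in the text:

```python
# A minimal sketch of the classic CpG-island test for a candidate window:
# GC content above 50% and an observed/expected CpG ratio above a cutoff
# (0.6 here, the commonly used Gardiner-Garden & Frommer threshold).
def is_cpg_island(seq: str, min_len: int = 200, gc_cutoff: float = 0.5,
                  obs_exp_cutoff: float = 0.6) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n < min_len:
        return False
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")              # observed CpG dinucleotides
    gc_content = (c + g) / n
    expected = (c * g) / n or 1e-9     # expected CpGs if C and G were independent
    return gc_content > gc_cutoff and cpg / expected > obs_exp_cutoff

print(is_cpg_island("CG" * 150))  # True: 300 bp, 100% GC, obs/exp ratio of 2
print(is_cpg_island("AT" * 150))  # False: no C or G at all
```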
Promoters can be about 100–1000 base pairs long, the sequence of which is highly dependent on the gene and product of transcription, type or class of RNA polymerase recruited to the site, and species of organism. Overview For transcription to take place, the enzyme that synthesizes RNA, known as RNA polymerase, must attach to the DNA near a gene. Promoters contain specific DNA sequences such as response elements that provide a secure initial binding site for RNA polymerase and for proteins called transcription factors that recruit RNA polymerase. These transcription factors have specific activator or repressor sequences of corresponding nucleotides that attach to specific promoters and regulate gene expression. In bacteria The promoter is recognized by RNA polymerase and an associated sigma factor, which in turn are often brought to the promoter DNA by an activator protein's binding to its own DNA binding site nearby. In eukaryotes The process is more complicated, and at least seven different factors are necessary for the binding of an RNA polymerase II to the promoter. Promoters represent critical elements that can work in concert with other regulatory regions (enhancers, silencers, boundary elements/insulators) to direct the level of transcription of a given gene. A promoter is induced in response to changes in abundance or conformation of regulatory proteins in a cell, which enable activating transcription factors to recruit RNA polymerase. Identification of relative location As promoters are typically immediately adjacent to the gene in question, positions in the promoter are designated relative to the transcriptional start site, where transcription of DNA begins for a particular gene (i.e., positions upstream are negative numbers counting back from -1, for example -100 is a position 100 base pairs upstream). 
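The numbering convention just described skips zero: the transcription start site is +1 and the base immediately upstream is -1. A small helper (illustrative, not from any standard library) makes the conversion explicit:

```python
def tss_relative(pos: int, tss: int) -> int:
    """Convert an absolute coordinate to the TSS-relative convention used
    for promoters: the TSS itself is +1, the base immediately upstream is
    -1, and there is no position 0."""
    return pos - tss + 1 if pos >= tss else pos - tss

tss = 5000
print(tss_relative(4900, tss))  # -100: 100 base pairs upstream of the TSS
print(tss_relative(4999, tss))  # -1: the base immediately upstream
print(tss_relative(5000, tss))  # 1: the transcription start site itself
```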
Relative location in the cell nucleus In the cell nucleus, it seems that promoters are distributed preferentially at the edge of the chromosomal territories, likely for the co-expression of genes on different chromosomes. Furthermore, in humans, promoters show certain structural features characteristic for each chromosome. Elements Bacterial In bacteria, the promoter contains two short sequence elements approximately 10 (Pribnow Box) and 35 nucleotides upstream from the transcription start site. The sequence at -10 (the -10 element) has the consensus sequence TATAAT. The sequence at -35 (the -35 element) has the consensus sequence TTGACA. The above consensus sequences, while conserved on average, are not found intact in most promoters. On average, only 3 to 4 of the 6 base pairs in each consensus sequence are found in any given promoter. Few natural promoters have been identified to date that possess intact consensus sequences at both the -10 and -35; artificial promoters with complete conservation of the -10 and -35 elements have been found to transcribe at lower frequencies than those with a few mismatches with the consensus. The optimal spacing between the -35 and -10 sequences is 17 bp. Some promoters contain one or more upstream promoter element (UP element) subsites (consensus sequence 5'-AAAAAARNR-3' when centered in the -42 region; consensus sequence 5'-AWWWWWTTTTT-3' when centered in the -52 region; W = A or T; R = A or G; N = any base). The above promoter sequences are recognized only by RNA polymerase holoenzyme containing sigma-70. RNA polymerase holoenzymes containing other sigma factors recognize different core promoter sequences. 
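A naive sigma-70 promoter scan follows directly from these elements: compare hexamers against TTGACA and TATAAT at the optimal 17 bp spacing and rank windows by total mismatches, since, as noted above, natural promoters usually match the consensus only partially. A sketch assuming only the fixed optimal spacing (real scanners also allow variable spacers):

```python
# Naive sigma-70 promoter scan over the -35/-10 consensus elements with the
# optimal 17 bp spacer; windows are ranked by mismatch count because most
# natural promoters match only 3-4 of the 6 bases in each element.
MINUS_35, MINUS_10, SPACER = "TTGACA", "TATAAT", 17

def mismatches(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def scan_sigma70(seq: str):
    """Yield (start_of_-35_element, total_mismatches) for every window
    with the optimal 17 bp spacing between the -35 and -10 hexamers."""
    window = 6 + SPACER + 6
    for i in range(len(seq) - window + 1):
        m35 = mismatches(seq[i:i + 6], MINUS_35)
        m10 = mismatches(seq[i + 6 + SPACER:i + window], MINUS_10)
        yield i, m35 + m10

seq = "GG" + "TTGACA" + "A" * 17 + "TATAAT" + "GG"
best = min(scan_sigma70(seq), key=lambda t: t[1])
print(best)  # (2, 0): a perfect -35/-10 pair starting at index 2
```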
<-- upstream                                                                 downstream -->
5'-XXXXXXXPPPPPPXXXXXXPPPPPPXXXXGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGXXXX-3'
          -35          -10      Gene to be transcribed

Probability of occurrence of each nucleotide:
for the -10 sequence:  T    A    T    A    A    T
                      77%  76%  60%  61%  56%  82%
for the -35 sequence:  T    T    G    A    C    A
                      69%  79%  61%  56%  54%  54%

Eukaryotic Eukaryotic promoters are diverse and can be difficult to characterize; however, recent studies show that they fall into more than ten classes. Gene promoters are typically located upstream of the gene and can have regulatory elements several kilobases away from the transcriptional start site (enhancers). In eukaryotes, the transcriptional complex can cause the DNA to bend back on itself, which allows for placement of regulatory sequences far from the actual site of transcription. Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA box (consensus sequence TATAAA), which is recognized by the general transcription factor TATA-binding protein (TBP); and a B recognition element (BRE), which is recognized by the general transcription factor TFIIB. The TATA element and BRE typically are located close to the transcriptional start site (typically within 30 to 40 base pairs). Eukaryotic promoter regulatory sequences typically bind proteins called transcription factors that are involved in the formation of the transcriptional complex. An example is the E-box (sequence CACGTG), which binds transcription factors in the basic helix-loop-helix (bHLH) family (e.g. BMAL1-Clock, cMyc). Some promoters that are targeted by multiple transcription factors might achieve a hyperactive state, leading to increased transcriptional activity. 
Core promoter – the minimal portion of the promoter required to properly initiate transcription
- Includes the transcription start site (TSS) and elements directly upstream
- A binding site for RNA polymerase
  - RNA polymerase I: transcribes genes encoding 18S, 5.8S and 28S ribosomal RNAs
  - RNA polymerase II: transcribes genes encoding messenger RNA and certain small nuclear RNAs and microRNA
  - RNA polymerase III: transcribes genes encoding transfer RNA, 5S ribosomal RNA and other small RNAs
- General transcription factor binding sites, e.g. TATA box, B recognition element
- Many other elements/motifs may be present; there is no set of "universal elements" found in every core promoter
Proximal promoter – the proximal sequence upstream of the gene that tends to contain primary regulatory elements
- Approximately 250 base pairs upstream of the start site
- Specific transcription factor binding sites
Distal promoter – the distal sequence upstream of the gene that may contain additional regulatory elements, often with a weaker influence than the proximal promoter
- Anything further upstream (but not an enhancer or other regulatory region whose influence is positional/orientation independent)
- Specific transcription factor binding sites
Mammalian promoters Up-regulated expression of genes in mammals is initiated when signals are transmitted to the promoters associated with the genes. Promoter DNA sequences may include different elements such as CpG islands (present in about 70% of promoters), a TATA box (present in about 24% of promoters), an initiator (Inr) (present in about 49% of promoters), upstream and downstream TFIIB recognition elements (BREu and BREd) (present in about 22% of promoters), and a downstream core promoter element (DPE) (present in about 12% of promoters). The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. However, experiments by Weingarten-Gabbay et al. 
showed that the presence or absence of the other elements has relatively small effects on |
A PSD file has a maximum height and width of 30,000 pixels, and a length limit of two gigabytes. Photoshop from the beginning could save files in other formats, including TIF, JPEG, and GIF. These files are smaller than PSD files because they lack the editable features of a PSD file. These formats are required to use the file in publications or on the web. The discontinued program PageMaker required the TIF format. Facebook requires JPG. Photoshop files sometimes have the file extension .PSB, which stands for "Photoshop Big" (also known as "large document format"). A PSB file extends the PSD file format, increasing the maximum height and width to 300,000 pixels and the length limit to around 4 exabytes. The dimension limit was apparently chosen arbitrarily by Adobe, not based on computer arithmetic constraints (it is not close to a power of two, as is 30,000) but for ease of software testing. The PSD and PSB formats are documented. Because of Photoshop's popularity, PSD files are widely used and supported to some extent by most competing software, including GIMP and Affinity Photo. The .PSD file format can be exported to and from Adobe's other apps, such as Adobe Illustrator, Adobe Premiere Pro, and After Effects. Plugins Photoshop functionality can be extended by add-on programs called Photoshop plugins (or plug-ins). Adobe creates some plugins, such as Adobe Camera Raw, but most plugins are developed by third-party companies according to Adobe's specifications. Some are free and some are commercial software. Most plugins work only with Photoshop or Photoshop-compatible hosts, but a few can also run as standalone applications. There are various types of plugins, such as filter, export, import, selection, color correction, and automation. The most popular plugins are the filter plugins (also known as 8bf plugins), available under the Filter menu in Photoshop. Filter plugins can either modify the current image or create content. 
Below are some popular types of plugins, and some well-known companies associated with them: Color correction plugins (Alien Skin Software, Nik Software, OnOne Software, Topaz Labs Software, The Plugin Site, etc.) Special effects plugins (Alien Skin Software, Auto FX Software, AV Bros., Flaming Pear Software, etc.) 3D effects plugins (Andromeda Software, Strata, etc.) Adobe Camera Raw (also known as ACR and Camera Raw) is a special plugin, supplied free by Adobe, used primarily to read and process raw image files so that the resulting images can be processed by Photoshop. It can also be used from within Adobe Bridge. Photoshop tools Upon loading Photoshop, a sidebar with a variety of image-editing tools appears to the left of the screen. These tools typically fall under the categories of drawing; painting; measuring and navigation; selection; typing; and retouching. Some tools contain a small triangle in the bottom right of the toolbox icon. These can be expanded to reveal similar tools. While newer versions of Photoshop are updated to include new tools and features, several recurring tools that exist in most versions are discussed below. In some newer versions, hovering over a tool gives a small video glimpse of the tool. Pen tool Photoshop includes a few versions of the pen tool. The pen tool creates precise paths that can be manipulated using anchor points. The free form pen tool allows the user to draw paths freehand, and with the magnetic pen tool, the drawn path attaches closely to outlines of objects in an image, which is useful for isolating them from a background. Clone stamp tool The Clone Stamp tool duplicates one part of an image to another part of the same image by way of a brush. The duplication is either in full or in part depending on the mode. The user can also clone part of one layer to another layer. The Clone Stamp tool is useful for duplicating objects or removing a defect in an image. 
Shape tools Photoshop provides an array of shape tools including rectangles, rounded rectangles, ellipses, polygons and lines. These shapes can be manipulated by the pen tool, direct selection tool etc. to make vector graphics. In addition, Photoshop provides its own shapes like animals, signs and plants. Measuring and navigation The eyedropper tool selects a color from an area of the image that is clicked, and samples it for future use. The hand tool navigates an image by moving it in any direction, and the zoom tool enlarges the part of an image that is clicked on, allowing for a closer view. Selection tools Selection tools are used to select all or any part of a picture to perform cut, copy, edit, or retouching operations. Cropping The crop tool can be used to select a particular area of an image and discard the portions outside the chosen section. This tool assists in creating a focus point on an image and discarding unnecessary or excess space. Cropping allows enhancement of a photo's composition while decreasing the file size. The crop tool is in the tools palette, which is located on the right side of the document. By placing the cursor over the image, the user can drag the cursor to the desired area. Once the Enter key is pressed, the area outside the rectangle will be cropped. The area outside the rectangle is the discarded data, which allows for the file size to be decreased. The crop tool can alternatively be used to extend the canvas size by clicking and dragging outside the existing image borders. Slicing The slice and slice select tools, like the crop tool, are used in isolating parts of images. The slice tool can be used to divide an image into different sections, and these separate parts can be used as pieces of a web page design once HTML and CSS are applied. The slice select tool allows sliced sections of an image to be adjusted and shifted. Moving The move tool can be used to drag the entirety of a single layer or more if they are selected. 
Alternatively, once an area of an image is highlighted, the move tool can be used to manually relocate the selected piece to anywhere on the canvas. Marquee The marquee is a tool that can make selections that are a single row, single column, rectangular and elliptical. An area that has been selected can be edited without affecting the rest of the image. This tool can also crop an image; it allows for better control. In contrast to the crop tool, the marquee tool allows for more adjustments to the selected area before cropping. The only marquee tool that does not allow cropping is the elliptical. Although the single row and column marquee tools allow for cropping, they are not ideal, because they only crop a line. The rectangular marquee tool is the preferred option. Once the tool has been selected, dragging the tool across the desired area will select it. The selected area will be outlined by dotted lines, referred to as "marching ants". To set a specific size or ratio, the tool options bar provides these settings. Before selecting an area, the desired size or ratio must be set by adjusting the width and height. Any changes such as color, filters, location, etc. should be made before cropping. To crop the selection, the user must go to the image tab and select crop. Lasso The lasso tool is similar to the marquee tool; however, the user can make a custom selection by drawing it freehand. There are three options for the lasso tool – regular, polygonal, and magnetic. The regular lasso tool lets the user draw the selection freehand. Photoshop will complete the selection once the mouse button is released. The user may also complete the selection by connecting the end point to the starting point. The "marching ants" will indicate if a selection has been made. The polygonal lasso tool will draw only straight lines, which makes it an ideal choice for images with many straight lines. 
Unlike the regular lasso tool, the user must continually click around the image to outline the shape. To complete the selection, the user must connect the end point to the starting point just like the regular lasso tool. The magnetic lasso tool is considered the smart tool. It can do the same as the other two, but it can also detect the edges of an image once the user selects a starting point. It detects edges by examining the color of pixels as the cursor moves over the desired area. Closing the selection is the same as with the other two, which should also display the "marching ants" once the selection has been closed. The quick selection tool selects areas based on edges, similarly to the magnetic lasso tool. The difference between this tool and the lasso tool is that there is no starting and ending point. For this reason, the selected area can be added onto as much as possible without starting over. By dragging the cursor over the desired area, the quick selection tool detects the edges of the image. The "marching ants" allow the user to know what is currently being selected. Once the user is done, the selected area can be edited without affecting the rest of the image. One of the features that makes this tool especially user friendly is that the SHIFT key is not needed to add more to the selection; by default, extra mouse clicks will be added to the selection rather than creating a new selection. Magic wand The magic wand tool selects areas based on pixels of similar values. One click will select all neighboring pixels of similar value within a tolerance level set by the user. If the eyedropper tool is selected in the options bar, then the magic wand can determine the value needed to evaluate the pixels; this is based on the sample size setting in the eyedropper tool. This tool is inferior to the quick selection tool, which works much the same but with much better results and more intuitive controls. 
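The magic wand's tolerance-based selection can be sketched as a flood fill over pixel values; the 4-connected grid of grayscale values below is an illustrative simplification, not Photoshop's actual implementation:

```python
# Flood-fill sketch of a magic-wand selection: starting from a clicked
# pixel, expand over 4-connected neighbours whose grayscale values lie
# within a tolerance of the clicked value.
from collections import deque

def magic_wand(img, row, col, tolerance):
    """Return the set of (row, col) pixels selected from a rectangular
    2D grid of grayscale values."""
    target = img[row][col]
    selected, queue = {(row, col)}, deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(img) and 0 <= nc < len(img[0])
                    and (nr, nc) not in selected
                    and abs(img[nr][nc] - target) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

img = [[10, 12, 200],
       [11, 13, 210],
       [90, 95, 205]]
print(sorted(magic_wand(img, 0, 0, tolerance=5)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]: the dark top-left region only
```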
The user must decide what settings to use or if the image is right for this tool. Eraser The Eraser tool erases content based on the active layer. If the user is on the text layer, then any text across which the tool is dragged will be erased. The eraser will convert the pixels to transparent, unless the background layer is selected. The size and style of the eraser can be selected in the options bar. This tool is unique in that it can take the form of the paintbrush and pencil tools. In addition to the straight eraser tool, there are two more available options – background eraser and magic eraser. The background eraser deletes any part of the image that is on the edge of an object. This tool is often used to extract objects from the background. The magic eraser tool deletes based on similar colored pixels. It is very similar to the magic wand tool. This tool is ideal for deleting areas with the same color or tone that contrasts with the rest of the image. Video editing In Adobe CS5 Extended | renamed the program ImagePro, but the name was already taken. Later that year, Thomas renamed his program Photoshop and worked out a short-term deal with scanner manufacturer Barneyscan to distribute copies of the program with a slide scanner; a "total of about 200 copies of Photoshop were shipped" this way. During this time, John traveled to Silicon Valley and gave a demonstration of the program to engineers at Apple and Russell Brown, art director at Adobe. Both showings were successful, and Adobe decided to purchase the license to distribute in September 1988. While John worked on plug-ins in California, Thomas remained in Ann Arbor writing code. Photoshop 1.0 was released on February 19, 1990, for Macintosh exclusively. The Barneyscan version included advanced color editing features that were stripped from the first Adobe shipped version. 
The handling of color slowly improved with each release from Adobe and Photoshop quickly became the industry standard in digital color editing. At the time Photoshop 1.0 was released, digital retouching on dedicated high-end systems (such as the Scitex) cost around $300 an hour for basic photo retouching. The list price of Photoshop 1.0 for Macintosh in 1990 was $895. Photoshop was initially only available on Macintosh. In 1993, Adobe chief architect Seetharaman Narayanan ported Photoshop to Microsoft Windows. The Windows port led to Photoshop reaching a wider mass market audience as Microsoft's global reach expanded within the next few years. On March 31, 1995, Adobe purchased the rights for Photoshop from Thomas and John Knoll for $34.5 million so Adobe would no longer need to pay a royalty for each copy sold. File format Photoshop files have default file extension as .PSD, which stands for "Photoshop Document". A PSD file stores an image with support for all features of Photoshop; these include layers with masks, transparency, text, alpha channels and spot colors, clipping paths, and duotone settings. This is in contrast to many other file formats (e.g., .JPG or .GIF) that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels, and a length limit of two gigabytes. PhotoShop from the beginning could save files in other formats, including TIF, JPEG, and GIF. These files are smaller than PSD files because they lack the editable features of a PSD file. These formats are required to use the file in publications or on the web. The discontinued program PageMaker required TIF format. FaceBook requires JPG. Photoshop files sometimes have the file extension .PSB, which stands for "Photoshop Big" (also known as "large document format"). A PSB file extends the PSD file format, increasing the maximum height and width to 300,000 pixels and the length limit to around 4 Exabytes. 
The dimension limit was apparently chosen arbitrarily by Adobe, not based on computer arithmetic constraints (it is not close to a power of two, as is 30,000) but for ease of software testing. PSD and PSB formats are documented. Because of Photoshop's popularity, PSD files are widely used and supported to some extent by most competing software, including GIMP and Affinity Photo. The .PSD file format can be exported to and from Adobe's other apps, such as Adobe Illustrator, Adobe Premiere Pro, and After Effects. Plugins Photoshop functionality can be extended by add-on programs called Photoshop plugins (or plug-ins). Adobe creates some plugins, such as Adobe Camera Raw, but third-party companies develop most plugins, according to Adobe's specifications. Some are free and some are commercial software. Most plugins work with only Photoshop or Photoshop-compatible hosts, but a few can also be run as standalone applications. There are various types of plugins, such as filter, export, import, selection, color correction, and automation. The most popular plugins are the filter plugins (also known as a 8bf plugins), available under the Filter menu in Photoshop. Filter plugins can either modify the current image or create content. Below are some popular types of plugins, and some well-known companies associated with them: Color correction plugins (Alien Skin Software, Nik Software, OnOne Software, Topaz Labs Software, The Plugin Site, etc.) Special effects plugins (Alien Skin Software, Auto FX Software, AV Bros., Flaming Pear Software, etc.) 3D effects plugins (Andromeda Software, Strata, etc.) Adobe Camera Raw (also known as ACR and Camera Raw) is a special plugin, supplied free by Adobe, used primarily to read and process raw image files so that the resulting images can be processed by Photoshop. It can also be used from within Adobe Bridge. 
Photoshop tools Upon loading Photoshop, a sidebar with a variety of tools with multiple image-editing functions appears to the left of the screen. These tools typically fall under the categories of drawing; painting; measuring and navigation; selection; typing; and retouching. Some tools contain a small triangle in the bottom right of the toolbox icon. These can be expanded to reveal similar tools. While newer versions of Photoshop are updated to include new tools and features, several recurring tools that exist in most versions are discussed below. In some newer versions hovering along the tools gives a small Video glimpse of the tool. Pen tool Photoshop includes a few versions of the pen tool. The pen tool creates precise paths that can be manipulated using anchor points. The free form pen tool allows the user to draw paths freehand, and with the magnetic pen tool, the drawn path attaches closely to outlines of objects in an image, which is useful for isolating them from a background. Clone stamp tool The Clone Stamp tool duplicates one part of an image to another part of the same image by way of a brush. The duplication is either in full or in part depending on the mode. The user can also clone part of one layer to another layer. The Clone Stamp tool is useful for duplicating objects or removing a defect in an image. Shape tools Photoshop provides an array of shape tools including rectangles, rounded rectangles, ellipses, polygons and lines. These shapes can be manipulated by the pen tool, direct selection tool etc. to make vector graphics. In addition, Photoshop provides its own shapes like animals, signs and plants. Measuring and navigation The eyedropper tool selects a color from an area of the image that is clicked, and samples it for future use. The hand tool navigates an image by moving it in any direction, and the zoom tool enlarges the part of an image that is clicked on, allowing for a closer view. 
Selection tools Selection tools are used to select all or any part of a picture to perform cut, copy, edit, or retouching operations. Cropping The crop tool can be used to select a particular area of an image and discard the portions outside the chosen section. This tool assists in creating a focus point on an image and unnecessary or excess space. Cropping allows enhancement of a photo's composition while decreasing the file size. The crop tool is in the tools palette, which is located on the right side of the document. By placing the cursor over the image, the user can drag the cursor to the desired area. Once the Enter key is pressed, the area outside the rectangle will be cropped. The area outside the rectangle is the discarded data, which allows for the file size to be decreased. The crop tool can alternatively be used to extend the canvas size by clicking and dragging outside the existing image borders. Slicing The slice and slice select tools, like the crop tool, are used in isolating parts of images. The slice tool can be used to divide an image into different sections, and these separate parts can be used as pieces of a web page design once HTML and CSS are applied. The slice select tool allows sliced sections of an image to be adjusted and shifted. Moving The move tool can be used to drag the entirety of a single layer or more if they are selected. Alternatively, once an area of an image is highlighted, the move tool can be used to manually relocate the selected piece to anywhere on the canvas. Marquee The marquee is a tool that can make selections that are a single row, single column, rectangular and elliptical. An area that has been selected can be edited without affecting the rest of the image. This tool can also crop an image; it allows for better control. In contrast to the crop tool, the marquee tool allows for more adjustments to the selected area before cropping. The only marquee tool that does not allow cropping is the elliptical. 
Although the single row and column marquee tools allow for cropping, they are not ideal, because they only crop a line. The rectangular marquee tool is the preferred option. Once the tool has been selected, dragging it across the desired area will select it. The selected area will be outlined by dotted lines, referred to as "marching ants". To set a specific size or ratio, the width and height can be adjusted in the tool options bar before the selection is made. Any changes such as color, filters, location, etc. should be made before cropping. To crop the selection, the user must go to the Image menu and select Crop. Lasso The lasso tool is similar to the marquee tool; however, the user can make a custom selection by drawing it freehand. There are three options for the lasso tool – regular, polygonal, and magnetic. The regular lasso tool lets the user draw the selection freehand. Photoshop will complete the selection once the mouse button is released. The user may also complete the selection by connecting the end point to the starting point. The "marching ants" will indicate that a selection has been made. The polygonal lasso tool will draw only straight lines, which makes it an ideal choice for images with many straight edges. Unlike the regular lasso tool, the user must continually click around the image to outline the shape. To complete the selection, the user must connect the end point to the starting point, just like the regular lasso tool. The magnetic lasso tool is considered the smart tool of the three. It can do the same as the other two, but it can also detect the edges of an object once the user selects a starting point. It detects edges by examining the color of the pixels as the cursor moves over the desired area. Closing the selection works the same as with the other two, and should also display the "marching ants" once the selection has been closed. 
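Once a lasso outline is closed, deciding which pixels fall inside it is a point-in-polygon problem. A common approach is the ray-casting test, sketched below; this illustrates the underlying geometry, not Photoshop's implementation:

```python
def inside(poly, x, y):
    """Ray-casting test: is point (x, y) inside the closed polygon?

    Casts a horizontal ray from the point and counts how many polygon
    edges it crosses; an odd count means the point is inside.
    """
    n, hit = len(poly), False
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]  # wrap: end point connects to start point
        if (y1 > y) != (y2 > y):    # edge straddles the ray's height
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
    return hit

# A square "lasso" outline from (0, 0) to (4, 4).
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Every pixel for which `inside` returns True belongs to the selection; the "marching ants" are drawn along the boundary between inside and outside pixels.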
The quick selection tool selects areas based on edges, similarly to the magnetic lasso tool. The difference between this tool and the lasso tool is that there is no starting and ending point. For this reason, the selection can be extended repeatedly without starting over. By dragging the cursor over the desired area, the quick selection tool detects the edges of the image. The "marching ants" let the user know what is currently being selected. Once the user is done, the selected area can be edited without affecting the rest of the image. One of the features that makes this tool especially user friendly is that the SHIFT key is not needed to add more to the selection; by default, extra mouse clicks are added to the selection rather than creating a new selection. Magic wand The magic wand tool selects areas based on pixels of similar values. One click will select all neighboring pixels of similar value within a tolerance level set by the user. If the eyedropper tool is selected in the options bar, then the magic wand can determine the value needed to evaluate the pixels; this is based on the sample size setting in the eyedropper tool. This tool is inferior to the quick selection tool, which works much the same way but with much better results and more intuitive controls. The user must decide what settings to use or whether the image is right for this tool. Eraser The Eraser tool erases content from the active layer. If the user is on the text layer, then any text across which the tool is dragged will be erased. The eraser converts the pixels to transparent, unless the background layer is selected. The size and style of the eraser can be selected in the options bar. This tool is unique in that it can take the form of the paintbrush and pencil tools. In addition to the straight eraser tool, there are two more available options – the background eraser and the magic eraser. 
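The magic wand's behaviour, one click selecting all neighbouring pixels whose values fall within a tolerance of the clicked value, is essentially a flood fill. A minimal sketch over a grid of grey values (illustrative, not Adobe's code):

```python
from collections import deque

def magic_wand(image, seed, tolerance):
    """Return the set of pixel coords 4-connected to seed whose value is
    within tolerance of the seed pixel's value (breadth-first flood fill)."""
    h, w = len(image), len(image[0])
    target = image[seed[0]][seed[1]]
    selected, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(image[ny][nx] - target) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

img = [[10, 12, 90],
       [11, 95, 92],
       [10, 11, 93]]
# Click at (0, 0): selects the connected dark region, not the bright pixels.
sel = magic_wand(img, seed=(0, 0), tolerance=5)
```

Raising the tolerance grows the selection; with `tolerance=0` only exactly matching connected pixels are selected, mirroring the tolerance slider described above.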
The background eraser deletes any part of the image that is on the edge of an object. This tool is often used to extract objects from the background. The magic eraser tool deletes based on similarly colored pixels. It is very similar to the magic wand tool. This tool is ideal for deleting areas with the same color or tone that contrast with the rest of the image. Video editing In Adobe CS5 Extended edition, video editing is comprehensive and efficient, with broad compatibility with video file formats such as MOV, AVI and MPEG-4, and an easy workflow. Using simple key combinations, video layers can easily be modified, with other features such as adding text and creating animations from single images. 3D extrusion With the Extended version of Photoshop CS5, 2D elements of an artwork can easily become three-dimensional with the click of a button. Options include extruding text, a library of materials for three-dimensional work, and wrapping two-dimensional images around 3D geometry. Mobile integration Third-party plugins have also been added to recent versions of Photoshop, and devices such as the iPad have been integrated with the software through different types of applications. Applications like the Adobe Eazel painting app allow the user to easily create paintings with their fingertips, using an array of different paints, from dry to wet, to create rich color blending. In October 2018, it was announced that the full Photoshop engine would be released for iPad the following year. The program would feature cloud syncing with other devices and a simpler interface than the desktop version. Camera raw With the Camera Raw plug-in, raw images can be processed without the use of Adobe Photoshop Lightroom, along with other image file formats such as JPEG, TIFF, or PNG. The plug-in allows users to remove noise without the side-effect of over-sharpening, add grain, and even perform post-crop vignetting. 
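The magic eraser described at the start of this section differs from the magic wand sketch mainly in what happens to the matched pixels: values similar to the sampled colour become transparent. A rough sketch, with `None` standing in for transparency (the real tool also offers a contiguous mode, which this ignores):

```python
def magic_erase(image, sample, tolerance):
    """Return a copy of the image in which every pixel within tolerance of
    the sampled value is made transparent (None), regardless of position."""
    target = image[sample[0]][sample[1]]
    return [[None if abs(v - target) <= tolerance else v for v in row]
            for row in image]

img = [[200, 201, 50],
       [199, 60, 202]]
# Sample the bright background at (0, 0); similar pixels become transparent.
erased = magic_erase(img, sample=(0, 0), tolerance=5)
```

Because no connectivity is required, similar-coloured pixels anywhere in the layer are removed, which is why the tool suits backgrounds that contrast with the subject.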
3D printing tools From version 14.1, users can create and edit designs for 3D printing. Artists can add color, adjust the shape or rotate the angles of imported models, or design original 3D models from scratch. Color replacement tool The Color Replacement Tool allows the user to change the color of selected parts of an image while maintaining the highlights and shadows of the original. By selecting Brushes and right-clicking, the Color Replacement Tool is the third option down. What is important to note with this tool is the foreground color. The foreground color is what will be applied when painting along the chosen part of the image with the Color Replacement tool. Cultural impact Photoshop and derivatives such as Photoshopped (or just Shopped) have become verbs that are sometimes used to refer to images edited by Photoshop, or any image manipulation program. The same happens not only in English: as the Portuguese Wikipedia entry for image manipulation attests, the trademark is followed in that language by the Portuguese verb termination -ar, yielding the word "photoshopar" (to photoshop). Such derivatives are discouraged by Adobe because, in order to maintain validity and protect the trademark from becoming generic, trademarks must be used as proper nouns. Version history Older versions Photoshop's naming scheme was initially based on version numbers, from version 0.07 (codename "Bond"; double-oh-seven), through version 0.87 (codename "Seurat", the first commercial version, sold as "Barneyscan XP"), and version 1.0 (February 1990), all the way to version 7.0.1. Adobe published 7 major and many minor versions before the October 2003 introduction of version 8.0, which brought with it the Creative Suite branding. 
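One simple way to change a colour while keeping the original highlights and shadows, as the Color Replacement Tool does, is to apply the foreground colour scaled by each pixel's original brightness. The sketch below is a crude stand-in for the real (hue- and saturation-based) algorithm, with illustrative names throughout:

```python
def replace_color(pixel, foreground):
    """Recolour one RGB pixel with the foreground colour, scaled so the
    pixel keeps its original brightness (preserving lights and darks)."""
    lum = sum(pixel) / 3.0                 # original brightness
    fg_lum = sum(foreground) / 3.0 or 1.0  # avoid dividing by zero for black
    scale = lum / fg_lum
    return tuple(min(255, int(round(c * scale))) for c in foreground)

# A dark red pixel recoloured with pure green stays dark; a bright one stays bright.
dark = replace_color((60, 0, 0), (0, 255, 0))
bright = replace_color((240, 0, 0), (0, 255, 0))
```

Shadows map to dim greens and highlights to bright greens, so the shading of the original survives the recolour, which is the behaviour the paragraph describes.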
Notable milestone features include: Filters, Colour Separation, Virtual Memory (1.0), Paths, CMYK color (2.0), 16-bits-per-channel support, availability on Microsoft Windows (2.5), Layers, tabbed Palettes (3.0), Adjustments, Actions, Freeform Transform, PNG support (4.0), Editable Type, Magnetic Lasso and Pen, Freeform Pen, Multiple Undo, Layer Effects (5.0), Save For Web (5.5), Vector Shapes, revised User Interface (6.0), Vector Text, Healing Brush, Spell Check (7.0), Camera RAW (7.0.1). In February 2013, Adobe donated the source code of the 1990 1.0.1 version of Photoshop to the Computer History Museum. CS (version 8) The first Photoshop CS was commercially released in October 2003 as the eighth major version of Photoshop. Photoshop CS increased user control with a reworked file browser augmenting search versatility, sorting and sharing capabilities, and the Histogram Palette, which monitors changes in the image as they are made to the document. Match Color was also introduced in CS, which reads color data to achieve a uniform expression throughout a series of pictures. CS2 (version 9) Photoshop CS2, released in May 2005, expanded on its predecessor with a new set of tools and features. It included an upgraded Spot Healing Brush, which is mainly used for handling common photographic problems such as blemishes, red-eye, noise, blurring and lens distortion. One of the most significant inclusions in CS2 was the implementation of Smart Objects, which allows users to scale and transform images and vector illustrations without losing image quality, as well as create linked duplicates of embedded graphics so that a single edit updates across multiple iterations. Adobe responded to feedback from the professional media industry by implementing non-destructive editing as well as the producing and modifying of 32-Bit High Dynamic Range (HDR) images, which are optimal for 3D rendering and advanced compositing. 
FireWire Previews could also be viewed on a monitor via a direct export feature. Photoshop CS2 brought the Vanishing Point and Image Warping tools. Vanishing Point makes tedious graphic and photo retouching endeavors much simpler by letting users clone, paint and transform image objects while maintaining visual perspective. Image Warping makes it easy to digitally distort an image into a shape by choosing on-demand presets or by dragging control points. The File Browser was upgraded to Adobe Bridge, which functioned as a hub for productivity, imagery and creativity, providing multi-view file browsing and smooth cross-product integration across Adobe Creative Suite 2 software. Adobe Bridge also provided access to Adobe Stock Photos, a new stock photography service that offered users one-stop shopping across five elite stock image providers to deliver high-quality, royalty-free images for layout and design. Camera Raw version 3.0 was a new addition in CS2, and it allowed settings for multiple raw files to be modified simultaneously. In addition, processing multiple raw files to other formats including JPEG, TIFF, DNG or PSD, could be done in the background without executing |
PaintShop Pro X6 Ultimate includes Athentech Imaging's Perfectly Clear and Reallusion's FaceFilter3 Standard. PaintShop Pro X7 Ultimate includes those same two items. The bundled extras cannot be installed unless that version of the PaintShop program is already installed. However, once a bundled extra such as a plugin has been installed, the installed files can be copied to other versions, e.g., a plugin installed under X5 can be copied to X6, and even if X5 is then uninstalled, the plugin will continue to work under X6. Corel releases a new X version roughly annually, so this ability to copy means PSP users do not have to choose between updating or continued use of Ultimate add-ons from previous versions. Other related versions and products Paint Shop Pro Personal is a version of JASC Paint Shop Pro 9 for the Japanese market, published by Sourcenext Corporation. Paint Shop Photo Album is a simplified version of Paint Shop Pro designed to enhance, organize, and share digital photos. The Corel version was released as version 5. Corel Paint Shop Pro Album Personal is a version of Corel Paint Shop Pro Album 5 Deluxe for the Japanese market, published by Sourcenext Corporation. Corel Photo Album is the successor of Jasc Paint Shop Photo Album. First release was version 6. Corel PaintShop Photo Express is the successor of Corel Photo Album. First release was PaintShop Photo Express 2010. Paint Shop Pro Photo Studio is a site launched as part of Corel Paint Shop Pro Photo X2 […] to X3), PaintShop Pro was marketed as "Corel Paint Shop Pro Photo". Having dropped the "Photo" part of the name in version X4, Paintshop Pro X5 was derived from Ulead Photo Explorer after Corel's acquisition of Ulead. On November 28, 2007, Corel announced that the office in Eden Prairie, Minnesota, where Paint Shop Pro was created, would be shut down, with development moving to offices in California and China. 
Version history JASC Paint Shop releases: 1990?–1993 In the table below, italicized dates are approximate, based on the earliest file timestamp on JASC or Corel's FTP server. Non-italicized dates are sourced from official press releases or notifications posted on JASC's web site. JASC Paint Shop Pro releases: 1990–2004 Corel Paint Shop Pro releases: 2005 Corel Paint Shop Pro Photo releases: 2006–2008 Corel PaintShop Photo Pro releases: 2010–2011 Corel PaintShop Pro releases: 2011–present Picture tubes Picture tubes are graphic images with no background. They are often used as a starting point for complex images; that is, they are combined with other image elements to produce a final work. Tubes can also be regarded as graphic brushes based on a pre-created image; this was their original use. Instead of leaving a trace of color on the canvas, they would leave a trail of images. Popular tube subjects include alphabets, humans (also known as dollz), animal and toy figures, flowers, love messages and seasonal symbols. The tube system originated with PSP Pro version 5. Native tube files may be in .tub, .psp, .pspimage, and .psptube formats. XnView, IrfanView, and TubeEx are separate graphics programs that can convert tube files (.tub) to .png. Ultimate edition PaintShop Pro Photo X2 Ultimate was released towards the end of life of PaintShop Pro Photo X2, in September 2008. It included 150 additional picture frames and Picture Tubes, the programs Background Remover, Corel Painter Photo Essentials 4, and Photorecovery, as well as RAW support for 250 cameras and a 2GB flash drive. Subsequent Ultimate editions were released contemporaneously with the basic version. PaintShop Pro X4 Ultimate included Nik Color Efex Pro 3.0, a voucher for 21 images from Fotolia at high quality, and additional Picture Tubes. X5 Ultimate included Reallusion FaceFilter Studio 2.0, NIK Color Efex Pro 3.0, and "over 100 unique brushes, textures and royalty-free backgrounds". 
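The picture-tube idea described above, a brush that deposits images instead of colour, can be sketched as stamping a small tube image at each point along the stroke path. All names here are illustrative, and `None` stands in for the tube's transparent background:

```python
def stamp_stroke(canvas, tube, path):
    """Paint by stamping the 'tube' image at each point of the stroke path,
    leaving a trail of images rather than a trace of colour."""
    h, w = len(canvas), len(canvas[0])
    th, tw = len(tube), len(tube[0])
    for (y, x) in path:
        for dy in range(th):
            for dx in range(tw):
                ty, tx = y + dy, x + dx
                # Skip transparent tube pixels and anything off-canvas.
                if 0 <= ty < h and 0 <= tx < w and tube[dy][dx] is not None:
                    canvas[ty][tx] = tube[dy][dx]
    return canvas

canvas = [[0] * 6 for _ in range(3)]
tube = [[7]]  # a tiny 1x1 "image" with no background
stamp_stroke(canvas, tube, [(0, 0), (1, 2), (2, 4)])
```

A real tube brush cycles through several images and spaces the stamps along the stroke; the repeated stamping is the core mechanism.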
… performance (planning?), self-talk, focus on past success, comparison of outcomes via persuasive argument, pros/cons and comparative imaging of future outcomes, identification of self as role model, self-affirmation, reframing, cognitive dissonance, reattribution, and (increasing the salience of) antecedents. A typical instantiation of these techniques in therapy is exposure / response prevention for OCD. Conditioning theories Conditioning plays a huge part in the concept of persuasion. It is more often about leading someone into taking certain actions of their own, rather than giving direct commands. In advertisements, for example, this is done by attempting to connect a positive emotion to a brand/product logo. This is often done by creating commercials that make people laugh, using a sexual undertone, inserting uplifting images and/or music etc., and then ending the commercial with a brand/product logo. Great examples of this are professional athletes. They are paid to connect themselves to things that can be directly related to their roles; sport shoes, tennis rackets, golf balls, or completely irrelevant things like soft drinks, popcorn poppers and panty hose. The important thing for the advertiser is to establish a connection to the consumer. This conditioning is thought to affect how people view certain products, knowing that most purchases are made on the basis of emotion. Just as you sometimes recall a memory from a certain smell or sound, the objective of some ads is solely to bring back certain emotions when you see their logo in your local store. The hope is that repeating the message several times makes consumers more likely to purchase the product because they already connect it with a good emotion and positive experience. Stefano DellaVigna and Matthew Gentzkow did a comprehensive study on the effects of persuasion in different domains. 
They discovered that persuasion has little or no effect on advertisement; however, there was a substantial effect of persuasion on voting if there was face-to-face contact. Cognitive dissonance theory Leon Festinger originally proposed the theory of cognitive dissonance in 1957. He theorized that human beings constantly strive for mental consistency. Our cognition (thoughts, beliefs, or attitudes) can be in agreement, unrelated, or in disagreement with each other. Our cognition can also be in agreement or disagreement with our behaviors. When we detect conflicting cognition, or dissonance, it gives us a sense of incompleteness and discomfort. For example, a person who is addicted to smoking cigarettes but also suspects it could be detrimental to his health suffers from cognitive dissonance. Festinger suggests that we are motivated to reduce this dissonance until our cognition is in harmony with itself. We strive for mental consistency. There are four main ways we go about reducing or eliminating our dissonance: changing our minds about one of the facets of cognition reducing the importance of a cognition increasing the overlap between the two, and re-evaluating the cost/reward ratio. Revisiting the example of the smoker, he can either quit smoking, reduce the importance of his health, convince himself he is not at risk, or that the reward of smoking is worth the cost of his health. Cognitive dissonance is powerful when it relates to competition and self-concept. The most famous example of how cognitive dissonance can be used for persuasion comes from Festinger and Carlsmith's 1959 experiment in which participants were asked to complete a very dull task for an hour. Some were paid $20, while others were paid $1, and afterwards they were instructed to tell the next waiting participants that the experiment was fun and exciting. 
Those who were paid $1 were much more likely to convince the next participants that the experiment really was enjoyable than those who received $20. This is because $20 is enough reason to participate in a dull task for an hour, so there is no dissonance. Those who received $1 experienced great dissonance, so they had to truly convince themselves that the task actually was enjoyable to avoid feeling taken advantage of, and therefore reduce their dissonance. Elaboration likelihood model Persuasion has traditionally been associated with two routes. Central route: Whereby an individual evaluates information presented to them based on the pros and cons of it and how well it supports their values Peripheral route: Change is mediated by how attractive the source of communication is and by bypassing the deliberation process. The Elaboration likelihood model (ELM) forms a new facet of the route theory. It holds that the probability of effective persuasion depends on how successful the communication is at bringing to mind a relevant mental representation, which is the elaboration likelihood. Thus if the target of the communication is personally relevant, this increases the elaboration likelihood of the intended outcome and would be more persuasive if it were through the central route. Communication which does not require careful thought would be better suited to the peripheral route. Functional theories Functional theorists attempt to understand the divergent attitudes individuals have towards people, objects or issues in different situations. There are four main functional attitudes: Adjustment function: A main motivation for individuals is to increase positive external rewards and minimize the costs. Attitudes serve to direct behavior towards the rewards and away from punishment. Ego Defensive function: The process by which an individual protects their ego from being threatened by their own negative impulses or threatening thoughts. 
Value-expressive: When an individual derives pleasure from presenting an image of themselves which is in line with their self-concept and the beliefs that they want to be associated with. Knowledge function: The need to attain a sense of understanding and control over one's life. An individual's attitudes therefore serve to help set standards and rules which govern their sense of being. When communication targets an underlying function, its degree of persuasiveness influences whether individuals change their attitude after determining that another attitude would more effectively fulfill that function. Inoculation theory A vaccine introduces a weak form of a virus that can easily be defeated to prepare the immune system should it need to fight off a stronger form of the same virus. In much the same way, the theory of inoculation suggests that a certain party can introduce a weak form of an argument that is easily thwarted in order to make the audience inclined to disregard a stronger, full-fledged form of that argument from an opposing party. This often occurs in negative advertisements and comparative advertisements—both for products and political causes. An example would be a manufacturer of a product displaying an ad that refutes one particular claim made about a rival's product, so that when the audience sees an ad for said rival product, they refute the product claims automatically. Narrative transportation theory Narrative transportation theory proposes that when people lose themselves in a story, their attitudes and intentions change to reflect that story. The mental state of narrative transportation can explain the persuasive effect of stories on people, who may experience narrative transportation when certain contextual and personal preconditions are met, as Green and Brock postulate for the transportation-imagery model. 
Narrative transportation occurs whenever the story receiver experiences a feeling of entering a world evoked by the narrative because of empathy for the story characters and imagination of the story plot. Social judgment theory Social judgment theory suggests that when people are presented with an idea or any kind of persuasive proposal, their natural reaction is to immediately seek a way to sort the information subconsciously and react to it. We evaluate the information and compare it with the attitude we already have, which is called the initial attitude or anchor point. When trying to sort incoming persuasive information, an audience evaluates whether it lands in their latitude of acceptance, latitude of non-commitment or indifference, or the latitude of rejection. The size of these latitudes varies from topic to topic. Our "ego-involvement" generally plays one of the largest roles in determining the size of these latitudes. When a topic is closely connected to how we define and perceive ourselves, or deals with anything we care passionately about, our latitudes of acceptance and non-commitment are likely to be much smaller and our attitude of rejection much larger. A person's anchor point is considered to be the center of his latitude of acceptance, the position that is most acceptable to him. An audience is likely to distort incoming information to fit into their unique latitudes. If something falls within the latitude of acceptance, the subject tends to assimilate the information and consider it closer to his anchor point than it really is. Inversely, if something falls within the latitude of rejection, the subject tends to contrast the information and convince himself the information is farther away from his anchor point than it really is. When trying to persuade an individual target or an entire audience, it is vital to first learn the average latitudes of acceptance, non-commitment, and rejection of your audience. 
It is ideal to use persuasive information that lands near the boundary of the latitude of acceptance if the goal is to change the audience's anchor point. Repeatedly suggesting ideas on the fringe of the acceptance latitude makes people gradually adjust their anchor points, while suggesting ideas in the rejection latitude or even the non-commitment latitude does not change the audience's anchor point. Methods Persuasion methods are also sometimes referred to as persuasion tactics or persuasion strategies. Use of force Force can also be used in persuasion; it has no scientific theories behind it, except for its use to make demands. The use of force typically follows the failure of less direct means of persuasion. Application of this strategy can be interpreted as a threat, since the persuader does not give options with his or her request. Weapons of influence Robert Cialdini, in Influence, his book on persuasion, defined six "influence cues or weapons of influence". Reciprocity The principle of reciprocity states that when a person provides us with something, we attempt to repay him or her in kind. Reciprocation produces a sense of obligation, which can be a powerful tool in persuasion. The reciprocity rule is effective because it can be overpowering and instill in us a sense of obligation. Generally, we have a dislike for individuals who neglect to return a favor or provide payment when offered a free service or gift. As a result, reciprocation is a widely held principle. This societal standard makes reciprocity an extremely powerful persuasive technique, as it can result in unequal exchanges and can even apply to an uninvited first favor. Reciprocity applies to the marketing field because of its use as a powerful persuasive technique. The marketing tactic of "free samples" demonstrates the reciprocity rule because of the sense of obligation that the rule produces. 
This sense of obligation comes from the desire to repay the marketer for the gift of a "free sample." Commitment and consistency Consistency is an important aspect of persuasion because it: is highly valued by society, results in a beneficial approach to daily life, and provides a valuable shortcut through the complicated nature of modern existence. Consistency allows us to more effectively make decisions and process information. The concept of consistency states that someone who commits to something, orally or in writing, is more likely to honor that commitment. This is especially true for written commitments, as they appear psychologically more concrete and can create hard proof. Someone who commits to a stance tends to behave according to that commitment.
Eshkol became the first prime minister to die in office. He was temporarily replaced by Yigal Allon, whose stint lasted less than a month, as the party persuaded Golda Meir to return to political life and become prime minister in March 1969. Meir was Israel's first woman prime minister, and the third in the world (after Sirimavo Bandaranaike and Indira Gandhi). Meir resigned in 1974 after the Agranat Commission published its findings on the Yom Kippur War, even though it had absolved her of blame. Yitzhak Rabin took over, though he also resigned towards the end of the eighth Knesset's term following a series of scandals. Those included the suicide of Housing Minister Avraham Ofer after police began investigating allegations that he had used party funds illegally, and the affair involving Asher Yadlin (the governor-designate of the Bank of Israel), who was sentenced to five years in prison for having accepted bribes. Rabin's wife, Leah, was also found to have had an overseas bank account, which was illegal in Israel at the time. Menachem Begin became the first right-wing prime minister when his Likud won the 1977 elections, and retained the post in the 1981 elections. He resigned in 1983 for health reasons, passing the reins of power to Yitzhak Shamir. After the 1984 elections had proved inconclusive with neither the Alignment nor Likud able to form a government, a national unity government was formed with a rotating prime ministership – Shimon Peres took the first two years, and was replaced by Shamir midway through the Knesset term. Although the 1988 elections produced another national unity government, Shamir was able to take the role alone. Peres made an abortive bid to form a left-wing government in 1990, but failed, leaving Shamir in power until 1992. Rabin became prime minister for the second time when he led Labour to victory in the 1992 elections. After his assassination on 4 November 1995, Peres took over as prime minister. 
Direct election During the thirteenth Knesset (1992–1996) it was decided to hold a separate ballot for prime minister modeled after American presidential elections. This system was instituted in part because the Israeli electoral system makes it all but impossible for one party to win a majority. While only two parties—Mapai/Labour and Likud—had ever led governments, the large number of parties or factions in a typical Knesset usually prevents one party from winning the 61 seats needed for a majority. In 1996, when the first such election took place, the outcome was a surprise win for Benjamin Netanyahu after election polls predicted that Peres was the winner. However, in the Knesset election held at the same time, Labour won more votes than any other party (27%). Thus Netanyahu, despite his theoretical position of power, needed the support of the religious parties to form a viable government. Ultimately Netanyahu failed to hold the government together, and early elections for both prime minister and the Knesset were called in 1999. Although five candidates intended to run, the three representing minor parties (Benny Begin of Herut – The National Movement, Azmi Bishara of Balad and Yitzhak Mordechai of the Centre Party) dropped out before election day, and Ehud Barak beat Netanyahu in the election. However, the new system again appeared to have failed; although Barak's One Israel alliance (an alliance of Labour, Gesher and Meimad) won more votes than any other party in the Knesset election, they garnered only 26 seats, the lowest ever by a winning party or alliance. Barak needed to form a coalition with six smaller parties to form a government. In early 2001, Barak resigned following the outbreak of the al-Aqsa Intifada. However, the government was not brought down, and only elections for prime minister were necessary. In the election itself, Ariel Sharon of Likud comfortably beat Barak, taking 62.4% of the vote. 
However, because Likud only had 21 seats in the Knesset, Sharon had to form a national unity government. Following Sharon's victory, it was decided to do away with separate elections for prime minister and return to the previous system. 2003 onwards The 2003 elections were carried out in the same manner as prior to 1996. Likud won 38 seats, the highest by a party for over a decade, and as party leader Sharon was duly appointed Prime Minister. However, towards the end of his term and largely as a result of the deep divisions within Likud over Israel's unilateral disengagement plan, Sharon broke away from his party to form Kadima, managing to maintain his position as prime minister and also becoming the first prime minister not to be a member of either Labour or Likud (or their predecessors). However, he suffered a stroke in January 2006, in the midst of election season, leading Ehud Olmert to become acting prime minister in the weeks leading to the elections. He was voted by the cabinet to be interim prime minister just after the 2006 elections, when Sharon had reached 100 days of incapacitation. He thus became Israel's third interim prime minister, only days before forming his own new government as the official Prime Minister of Israel. In 2008, amid accusations of corruption and challenges from his own party, Olmert resigned. However his successor Tzipi Livni was unable to form a coalition government. In the election in the following year, while Kadima won the most seats, it was the Likud leader Benjamin Netanyahu who was given the task of forming a government. He was able to do so, thus beginning his second term as Prime Minister of Israel. In the 2013 election, the Likud Yisrael Beiteinu alliance emerged as the largest faction. After forming a coalition, Netanyahu secured his third prime ministership. In 2015, Netanyahu managed to stay in power. Multiple disagreements with his coalition members led to the 2019–2021 Israeli political crisis. 
In 2021, Naftali Bennett became prime minister. Order of succession If the prime minister dies in office, the cabinet chooses an interim prime minister to run the government until a new government is placed in power. Yigal Allon served as interim prime minister following Levi Eshkol's death, as did Shimon Peres following the assassination of Yitzhak Rabin. According to Israeli law, if a prime minister is temporarily incapacitated rather than dies (as was the case following Ariel Sharon's stroke in early 2006), power is transferred to the acting prime minister, until the prime minister recovers (Ehud Olmert took over from Sharon), for up to 100 days. If the prime minister is declared permanently incapacitated, or that period expires, the president of Israel oversees the process of assembling a new governing coalition, and in the meantime the acting prime minister or other incumbent minister is appointed by the cabinet to serve as interim prime minister. In the case of Sharon, elections were already due to occur within 100 days of the beginning of his coma; thus, the post-election coalition-building process pre-empted the emergency provisions for the selection of a new prime minister. Nevertheless, Olmert was appointed interim prime minister on 16 April 2006, after the elections, just days before he formed a government on 4 May 2006, becoming the official prime minister. Acting, vice and deputy prime minister Aside from the position of Acting Prime Minister, there are also vice prime ministers and deputy prime ministers. 
Interim government Interim prime minister The interim prime minister (Rosh HaMemshala Ba-foal, lit. "prime minister de facto") is appointed by the government if the incumbent has died or become permanently incapacitated, or if his tenure was ended due to a criminal conviction. Israeli law distinguishes between an acting prime minister (ממלא מקום ראש הממשלה), who temporarily stands in for an incumbent prime minister who remains in office, and an interim prime minister, who holds the office itself. Only if the incumbent prime minister becomes temporarily incapacitated does the acting prime minister act in the incumbent's office, standing in for him for up to 100 consecutive days while the incumbent remains in office. Legally, the "100 consecutive days" limit stipulates only that the incumbent is then deemed permanently incapacitated and that the limited time for an acting prime minister to act in the incumbent's office is over. The 1968 law (which preceded the 1992 and 2001 basic laws of government) imposed no time limit on a period of temporary incapacitation of the incumbent prime minister, instead leaving the matter pending the incumbent's return to resume his duties, and provided for the appointment of an interim prime minister only in the event of the incumbent's death, failing to address permanent incapacitation or criminal conviction of the incumbent prime minister. 
Separately, the 2001 law stipulates that whenever the incumbent prime minister becomes permanently incapacitated (whether declared as such or because the "100 consecutive days" limit has expired), or has died or ceased to be prime minister due to a criminal conviction, the government, "deemed to have resigned" and thus an interim government, continues to govern until a new government is placed in power; in the absence of a prime minister in office, the ministers must then vote to have one of their number (whether the acting prime minister or another) fully assume office as interim prime minister, provided he or she meets the requirements. While the acting prime minister need only be a Knesset member to qualify, the interim prime minister must also be a member of the prime minister's party. Until the 2001 Basic Law: The Government, both the acting and interim prime ministers were required only to be Knesset members in addition to being members of the government. Both before and after the 2001 law, however, an interim prime minister was appointed only by the government voting for one of its members (whether the acting prime minister or another) to serve as interim prime minister until a new government is placed in power. In 2006, Ehud Olmert, after standing in for Prime Minister Sharon as acting prime minister for 100 consecutive days, did not automatically assume office as interim prime minister: the government voted to appoint him, and as a member of the prime minister's party he was eligible for the role. An interim prime minister does not have to form a majority coalition in the Knesset to win an approval vote (as a prime minister must), and can assume office immediately, serving until a new government is placed in power. 
Shimon Peres was the foreign minister when Prime Minister Yitzhak Rabin was assassinated, and was voted unanimously to assume office as interim prime minister until a new government was placed in power (a government he later formed himself). Yigal Allon was likewise voted interim prime minister after Prime Minister Levi Eshkol died suddenly, and served until Golda Meir formed her government. The authorities of both the interim and the acting prime minister are identical to those of a prime minister, except that neither has the authority to dissolve the Knesset. There are other cases, unrelated to the incumbent prime minister's ability to continue serving, in which the government becomes an interim government while the incumbent prime minister remains in office. In these cases the incumbent is commonly referred to as an "interim" prime minister, in reference to the changed legal status of the government under him; legally, however, he remains the prime minister, and only the government under him is legally an interim government (see interim government below). Main provision Basic Law: The Government (2001): 30. (b) If the prime minister has died, or is permanently incapacitated from carrying out his duties, or if his tenure was ended because of an offense, the Government shall designate another of the Ministers who is a member of the Knesset and of the prime minister's faction to be interim prime minister pending the constitution of the new Government. List of interim prime ministers Interim government An interim government (Memshelet Ma'avar, lit. 
"transitional government") is the same government, changed in its legal status, after the death, resignation, permanent incapacitation, or criminal conviction of the prime minister; after the prime minister's request to dissolve the Knesset (the Israeli parliament) has been published through the president's decree; after the government is defeated by a motion of no confidence (in all these cases the law regards the government as "deemed to have resigned"); or after an election and before the forming of a new government (legally, the "newly elected Knesset" period). In all of the cases above it continues to govern as an interim government until a new government is placed in power, according to the principle of "government continuity", in order to prevent a governmental void. If the incumbent prime minister can no longer serve (death, permanent incapacitation or criminal conviction) when the government is "deemed to have resigned" and becomes an interim government, the ministers appoint another person from their own government to the role of interim prime minister (whether the acting prime minister or another) until a new government is placed in power. This is a legal reference both to the change of a prime minister in office and, in the same government, a change
can also lessen or suppress criminal sentences. This was of crucial importance when France still operated the death penalty: criminals sentenced to death would generally request that the president commute their sentence to life imprisonment. All decisions of the president must be countersigned by the prime minister, except dissolving the French National Assembly, choice of prime minister, and other dispositions referred to in Article 19. Detailed constitutional powers The constitutional attributions of the president are defined in Title II of the Constitution of France. Article 5: The president of the republic shall see that the Constitution is observed. He shall ensure, by his arbitration, the proper functioning of the public authorities and the continuity of the State. He shall be the guarantor of national independence, territorial integrity and observance of treaties. Article 8: The president of the republic shall appoint the prime minister. He shall terminate the appointment of the prime minister when the latter tenders the resignation of the Government. On the proposal of the prime minister, he shall appoint the other members of the Government and terminate their appointments. Article 9: The president of the republic shall preside over the Council of Ministers. Article 10: The president of the republic shall promulgate acts of parliament within fifteen days following the final adoption of an act and its transmission to the Government. He may, before the expiry of this time limit, ask Parliament to reconsider the act or sections of the act. Reconsideration shall not be refused. While the president has to sign all acts adopted by parliament into law, he cannot refuse to do so and thereby exercise a kind of right of veto; his only power in that matter is to ask for a single reconsideration of the law by parliament, and this power is subject to countersigning by the prime minister. 
Article 11: The president may submit laws to the people in a referendum, with the advice and consent of the cabinet. Article 12: The president of the republic may, after consulting the prime minister and the presidents of the assemblies, declare the National Assembly dissolved. A general election shall take place not less than twenty days and not more than forty days after the dissolution. The National Assembly shall convene as of right on the second Thursday following its election. Should it so convene outside the period prescribed for the ordinary session, a session shall be called by right for a fifteen-day period. No further dissolution shall take place within a year following this election. Article 13: The president of the republic shall sign the ordinances and decrees deliberated upon in the Council of Ministers. He shall make appointments to the civil and military posts of the State. [...] Article 14: The president of the republic shall accredit ambassadors and envoys extraordinary to foreign powers; foreign ambassadors and envoys extraordinary shall be accredited to him. Article 15: The president of the republic shall be commander-in-chief of the armed forces. He shall preside over the higher national defence councils and committees. Article 16: Where the institutions of the republic, the independence of the nation, the integrity of its territory or the fulfilment of its international commitments are under serious and immediate threat, and where the proper functioning of the constitutional public authorities is interrupted, the president of the republic shall take the measures required by these circumstances, after formally consulting the prime minister, the presidents of the assemblies and the Constitutional Council. He shall inform the nation of these measures in a message. The measures must stem from the desire to provide the constitutional public authorities, in the shortest possible time, with the means to carry out their duties. 
The Constitutional Council shall be consulted with regard to such measures. Parliament shall convene as of right. The National Assembly shall not be dissolved during the exercise of the emergency powers. Article 16, allowing the president a limited form of rule by decree for a limited period of time in exceptional circumstances, has been used only once, by Charles de Gaulle during the Algerian War, from 23 April to 29 September 1961. Article 17: The president of the republic has the right to grant pardon. Article 18: The president of the republic shall communicate with the two assemblies of Parliament by means of messages, which he shall cause to be read and which shall not be the occasion for any debate. He can also give an address in front of the Congress of France in Versailles. Outside sessions, Parliament shall be convened especially for this purpose. Article 19: Acts of the president of the republic, other than those provided for under articles 8 (first paragraph), 11, 12, 16, 18, 54, 56 and 61, shall be countersigned by the prime minister and, where required, by the appropriate ministers. Presidential amnesties Before the 2008 constitutional reform, which forbade them, there was a tradition of so-called "presidential amnesties", which are something of a misnomer: after the election of a president, and of a National Assembly of the same party, parliament would traditionally vote a law granting amnesty for some petty crimes (it was also a way of reducing jail overpopulation). This practice had been increasingly criticized, particularly because it was believed to inspire people to commit traffic offences in the months preceding the election. Such an amnesty law would also authorize the president to designate individuals who have committed certain categories of crimes to be offered amnesty, if certain conditions are met. Such individual measures have been criticized for the political patronage that they allow. 
The difference between an amnesty and a presidential pardon is that the former clears all subsequent effects of the sentencing, as though the crime had not been committed, while pardon simply relieves the sentenced individual from part or all of the remainder of the sentence. Criminal responsibility and impeachment Articles 67 and 68 organize the regime of criminal responsibility of the president. They were reformed by a 2007 constitutional act in order to clarify a situation that previously resulted in legal controversies. The president of the Republic enjoys immunity during their term: they cannot be requested to testify before any jurisdiction, they cannot be prosecuted, etc. However, the statute of limitation is suspended during their term, and enquiries and prosecutions can be restarted, at the latest one month after they leave office. 
The president is not deemed personally responsible for their actions in their official capacity, except where their actions are indicted before the International Criminal Court (France is a member of the ICC, and under the Court's rules the president is a French citizen like any other) or where impeachment is moved against them. Impeachment can be pronounced by the Republican High Court, a special court convened from both houses of Parliament on the proposal of either House, should the president have failed to discharge their duties in a way that evidently precludes the continuation of their term. Succession and incapacity Upon the death in office, removal, or resignation of the president, the Senate's president takes over as acting president. Alain Poher is the only person to have served in this temporary position, and has done so twice: the first time in 1969 after Charles de Gaulle's resignation and a second time in 1974 after Georges Pompidou's death while in office. In this situation, the president of the Senate becomes Acting President of the Republic; they do not become the new president of the Republic as elected and therefore do not have to resign from their position as President of the Senate. In spite of his title as Acting President of the Republic, Poher is listed in the presidents' gallery on the official presidential website. This is in contrast to acting presidents from the Third Republic. The first round of a new presidential election must be organized no sooner than twenty days and no later than thirty-five days following the vacancy of the presidency. Fifteen days can separate the first and second rounds of a presidential election; this means that the president of the Senate can only act as President of the Republic for a maximum period of fifty days. During this interim period, acting presidents are not allowed to dissolve the National Assembly, nor are they allowed to call for a referendum or initiate any constitutional changes. 
If there is no president of the Senate, the powers of the president of the republic are exercised by the Government, meaning the Cabinet. This has been interpreted by some constitutional academics as meaning first the prime minister and, if he is himself not able to act, the members of the cabinet in the order of the list of the decree that nominated them. This is in fact unlikely to happen, because if the president of the Senate is not able to act, the Senate will normally name a new president of the Senate, who will act as President of the Republic. During the Third French Republic the president of the Council of Ministers acted as president whenever the office was vacant. According to article 7 of the Constitution, if the presidency becomes vacant for any reason, or if the president becomes incapacitated, upon the request of the Government, the Constitutional Council may rule, by a majority vote, that the presidency is to be temporarily assumed by the president of the Senate. If the Council rules that the incapacity is permanent, the same procedure as for the resignation is applied, as described above. If the president cannot attend meetings, including meetings of the Council of Ministers, he can ask the prime minister to attend in his stead (Constitution, article 21). This clause has been applied by presidents travelling abroad, ill, or undergoing surgery. During the Second French Republic, there was a vice president. The only person to ever hold the position was Henri Georges Boulay de la Meurthe. Death in office Four French presidents have died in office: Sadi Carnot, who was assassinated by Sante Geronimo Caserio on 25 June 1894, aged 56. Félix Faure, who died on 16 February 1899, aged 58. Paul Doumer, who was assassinated by Paul Gorguloff on 7 May 1932, aged 75, the oldest to die in office. Georges Pompidou, who died on 2 April 1974, aged 62. 
Pay and official residences The president of the Republic is paid a salary according to a pay grade defined in comparison to the pay grades of the most senior members of the French Civil Service ("out of scale", hors échelle, those whose pay grades are known as letters and not as numeric indices). In addition he is paid a residence stipend of 3%, and a function stipend of 25% on top of the salary and residence indemnity. This gross salary and these indemnities are the same as those of the prime minister, and are 50% higher than the highest paid to other members of the government, which is itself defined as twice the average of the highest (pay grade G) and the lowest (pay grade A1) salaries in the "out of scale" pay grades. Using the 2008 "out of scale" pay grades, it amounts to a monthly pay of 20,963 euros, which is consistent with the 19,000 euros quoted to the press in early 2008. Using the pay grades starting from 1 July 2009, this amounts to a gross monthly pay of €21,131. The salary and the residence stipend are taxable for income tax. The official residence and office of the president is the Élysée Palace in Paris. Other presidential residences include: the Hôtel de Marigny, standing next to the Élysée Palace, which houses foreign official guests; the Château de Rambouillet, normally open to visitors when not used for (rare) official meetings; the Domaine national de Marly, normally open to visitors when not used for (rare) official meetings; and the Fort de Brégançon, in Southeastern France, the official presidential vacation residence. In 2013, it became a national monument and has been open to the public at certain times since 2014. The French president's private quarters there are still available for his use. La Lanterne became an official presidential vacation residence in 2007. 
Pension and benefits According to French law, former presidents of the Republic have a guaranteed lifetime pension defined according to the pay grade of the Councillors of State, a courtesy diplomatic passport, and, according to the French Constitution (Article 56), membership of the Constitutional Council. They also get personnel, an apartment and/or office, and other amenities, though the legal basis for these is disputed. The current system for providing personnel and other amenities to former French presidents was devised in 1981 by Michel Charasse, then advisor to President François Mitterrand, in order to care for former president Valéry Giscard d'Estaing and the widow of former President Georges Pompidou. In 2008, according to an answer by the services of the prime minister to a question from René Dosière, a member of the National Assembly, the facilities comprised: a security detail, a car with a chauffeur, first class train tickets and an office or housing space, as well as two people to service the space. In addition, funds are available for seven permanent assistants. President Hollande announced a reform of the system in 2016. Former presidents of France no longer receive a car with chauffeur, and the personnel in their living space were cut as well. Additionally, the number of assistants available for their use has been reduced, but a state flat or house remains available for former officeholders. Train tickets are also available if the trip is justified by the office of the former officeholder as part of official business. The security personnel around former presidents of France remained unchanged. Lists relating to the presidents of France List of French non-presidential heads of state by tenure List of presidents of France List of presidents of France by tenure
samples of explosive brought to the U.S. by the Tizard Mission had already been packaged by the SOE ready for dropping via parachute container to the French Resistance and were therefore labeled in French, as Explosif Plastique. It is still referred to by this name in France and also by some Americans. Types Composition C The British used a plastic explosive during World War II as a demolition charge. The specific explosive, Composition C, was 88.3% RDX and 11.7% non-oily, non-explosive plasticizer. The material was plastic between 0 and 40 degrees C, but was brittle at colder temperatures and gummy at higher temperatures. Composition C was superseded by Composition C2, which used a mixture of 80% RDX and 20% plasticizer. Composition C2 had a wider temperature range at which it remained plastic, from −30 to 52 degrees C. Composition C2 was replaced by Composition C3, which was a mixture of 77% RDX and 23% explosive plasticizer. C3 was effective but proved to be too brittle in cold weather and was replaced with C4. There are three classes of C4, with varying amounts of RDX and polyisobutylene. 
List of plastic explosives Australia: PE4, PE4-MC Austria: KNAUERIT SPEZIAL Czech Republic: Semtex-1H (orange-colored), Semtex 1A (red-colored), Semtex 10 (also called Pl Np 10; black-colored), Pl Hx 30 (gray-colored) Finland: PENO France: Hexomax, Composition C-4 PLASTRITE (FORMEX P1, Pla Np 87) Germany: Sprengkörper DM12, P8301, Seismoplast 1 (Sprengmasse, formbar) Netherlands: Knaverit S1 (light orange-colored) Greece: C3, C4 India: PEK-1 Israel: Semtex Italy: T-4 Plastico Norway: NM91 (HMX), C4, DPX10 (PE8) Pakistan: PE-3A Poland: PMW, NITROLIT Russia: PVV-5A Plastic Explosive Slovakia: CHEMEX (Composition C-4 equivalent), TVAREX 4A, Pl Hx 30 South Africa: PE9 (Composition C-4 equivalent) Sweden: Sprängdeg m/46, NSP711 (PETN-based), NSH711 (cyclonite-based) Switzerland: PLASTEX produced by SSE Turkey: Composition C-4 United Kingdom MOD explosives: PE2 (sheet explosive, superseded by SX2), PE3A (superseded by PE4), PE4 (pure to off-white slab, block, or stick, superseded by PE7 and PE8 in MOD usage), SX2 (sheet explosive, superseded by SX4), PE7 (pure to off-white slab or block, Hexomax variant), PE8 (pure to off-white slab or block, current in-service slab charge), SX4 (sheet explosive),
the art" either what he meant by plasticity or why it may be advantageous, as he only explains why his plastic explosive is superior to others of that type. One of the simplest plastic explosives was Nobel's Explosive No. 808, also known as Nobel 808 (often just called Explosive 808 in the British Armed Forces during the Second World War), developed by the British company Nobel Chemicals Ltd well before World War II. It had the appearance of green plasticine with a distinctive smell of almonds. During World War II it was extensively used by the British Special Operations Executive (SOE) at Aston House for sabotage missions. It is also the explosive used in HESH anti-tank shells and was an essential factor in the devising of the Gammon grenade.
Captured SOE-supplied Nobel 808 was the explosive used in the failed 20 July plot assassination attempt on Adolf Hitler in 1944. During and after World War II a number of new RDX-based explosives were developed, including Compositions C, C2, and eventually C3. Together with RDX, these incorporate various plasticizers to decrease sensitivity and make the composition plastic. The origin of the obsolete term "plastique" dates back to the Nobel 808 explosive introduced to the U.S. by the British in 1940.
a metalanguage by which it is explained, and therefore deconstruction itself is in danger of becoming a metalanguage, thus exposing all languages and discourse to scrutiny. Barthes' other works contributed deconstructive theories about texts. Derrida's lecture at Johns Hopkins The occasional designation of post-structuralism as a movement can be tied to the fact that mounting criticism of Structuralism became evident at approximately the same time that Structuralism became a topic of interest in universities in the United States. This interest led to a colloquium at Johns Hopkins University in 1966 titled "The Languages of Criticism and the Sciences of Man", to which such French philosophers as Jacques Derrida, Roland Barthes, and Jacques Lacan were invited to speak. Derrida's lecture at that conference, "Structure, Sign, and Play in the Human Sciences", was one of the earliest to propose some theoretical limitations to Structuralism, and to attempt to theorize on terms that were clearly no longer structuralist. The element of "play" in the title of Derrida's essay is often erroneously interpreted in a linguistic sense, based on a general tendency towards puns and humour, while social constructionism as developed in the later work of Michel Foucault is said to create play in the sense of strategic agency by laying bare the levers of historical change. Many see the importance of Foucault's work to be in its synthesis of this social/historical account of the operation of power. Criticism Some observers from outside of the post-structuralist camp have questioned the rigour and legitimacy of the field. American philosopher John Searle suggested in 1990: "The spread of 'poststructuralist' literary theory is perhaps the best-known example of a silly but non-catastrophic phenomenon." Similarly, physicist Alan Sokal in 1997 criticized "the postmodernist/poststructuralist gibberish that is now hegemonic in some sectors of the American academy." 
Literature scholar Norman Holland in 1992 saw post-structuralism as flawed due to its reliance on Saussure's linguistic model, which was seriously challenged by the 1950s and was soon abandoned by linguists: "Saussure's views are not held, so far as I know, by modern linguists, only by literary critics and the occasional philosopher. [Strict adherence to Saussure] has elicited wrong film and literary theory on a grand scale. One can find dozens of books of literary theory bogged down in signifiers and signifieds, but only a handful that refers to Chomsky."
David Foster Wallace wrote: See also Authors The following are often said to be post-structuralists, or to have had a post-structuralist period: Kathy Acker Jean Baudrillard Roland Barthes Wendy Brown Judith Butler Rey Chow Hélène Cixous Gilles Deleuze Jacques Derrida Umberto Eco John Fiske Michel Foucault René Girard Félix Guattari Luce Irigaray Julia Kristeva Teresa de Lauretis Sarah Kofman Philippe Lacoue-Labarthe Jean-François Lyotard Chantal Mouffe Jean-Luc Nancy Avital Ronell Bernard Stiegler
used broadly, with their meanings defined in terms of the phases of various peace process mechanisms blurring and overlapping in practice. Institutions The construction of international institutions, especially during the twentieth century, has to a large degree been motivated by the desire to provide a broad global context of peacebuilding. This includes the League of Nations and the United Nations, and regional institutions such as the European Union. Institutions involved in encouraging or overseeing some of the steps in specific peace processes include the United Nations Department of Peace Operations. Mechanisms Many specific peace mechanisms can comprise the elements of peace processes. The Peace Accords Matrix of the Kroc Institute for International Peace Studies at the University of Notre Dame, United States, lists some of these as amnesties, ceasefires, arms embargoes, truth and reconciliation commissions and reforms of the constitution, or of military, police, judicial or educational institutions or of the media. Other mechanisms include prisoner exchanges, confidence-building measures, humanitarian corridors, peace treaties and transitional justice. Criticism Edward Luttwak argues that conventional wars should not be interrupted before they have burned themselves out and the preconditions for a long-lasting peace are established. A stable peace settlement is possible only with the exhaustion of the belligerents or the decisive victory of one side. "Hopes for military success must fade for accommodation to become more attractive than further combat," but premature ceasefires prevent belligerents from reaching exhaustion and let them rearm their forces. That in turn prolongs war and leads to the
Military methods of stopping a local armed conflict by globally organised military forces are typically classed as peace enforcement.
Reorganisation The prevention of the repeat of a solved conflict (as well as the preventing of an armed conflict from occurring at all) is usually classed as peacebuilding. UNDPO defines peacebuilding to include "measures [that] address core issues that effect the functioning of society and the State". The use of neutral military forces to sustain ceasefires during this phase, typically by United Nations peacekeeping forces, can be referred to as peacekeeping.
he was responsible for defending actions taken by the governor. Randolph left for London, over the objections of Governor Dinwiddie, and was replaced for a short time as attorney general by George Wythe. Randolph resumed his post on his return at the behest of Wythe as well as officials in London, who also recommended the governor drop the new fee. In 1765, Randolph found himself at odds with a freshman burgess, Patrick Henry, over the matter of a response to the Stamp Act. The House appointed Randolph to draft objections to the act, but his more conservative plan was trumped when Henry obtained passage of five of his seven Virginia Stamp Act Resolutions. This was accomplished at a meeting of the House in which most of the members were absent and over which Randolph was presiding in the absence of the speaker. Randolph resigned as king's attorney (attorney general) in 1766, as fellow Burgesses elected him as their speaker upon the death of his relative, the powerful Speaker John Robinson. Sitting as the General Court, they also appointed Randolph one of the executors (along with Wythe and Edmund Pendleton) of the former speaker's estate, which was a major financial scandal. As friction between Britain and the colonies progressed, Randolph grew to favor independence. In 1769 the House of Burgesses was dissolved by Governor Norborne Berkeley, 4th Baron Botetourt, in response to its actions against the Townshend Acts. In 1773, Randolph chaired the Virginia committee of correspondence. The next governor, John Murray, 4th Earl of Dunmore, also dissolved the House of Burgesses in 1774 when it showed solidarity with Boston, Massachusetts, following the Boston Port Act. Randolph chaired meetings of the first of five Virginia Conventions of former House members, principally at a Williamsburg tavern, which worked toward responses to the unwelcome tax measures imposed by the British government. 
On March 21, 1775, he was president of the Second Virginia Convention in Richmond that debated independence (the setting of Patrick Henry's famous "Give me liberty, or give me death!" speech). In April, Randolph negotiated with Lord Dunmore for gunpowder removed from the Williamsburg arsenal during the Gunpowder Incident, which was a confrontation between the governor's forces and Virginia militia, led by Henry. The House of Burgesses was called back by Lord Dunmore one last time in June 1775 to address British Prime Minister Lord North's Conciliatory Resolution. Randolph, who was a delegate to the Continental Congress, returned to Williamsburg to take his place as Speaker. Randolph indicated that the resolution had not been sent to the Congress (it had instead been sent to each colony individually in an attempt to divide them and bypass the Continental Congress). The House of Burgesses rejected the proposal, which was also later rejected by the Continental Congress. Randolph was thus the last speaker of the House of Burgesses (their role was replaced by the Virginia Conventions and later the House of Delegates in 1776).
Randolph also served as the president of the Third Virginia Convention in July 1775, which as a legislative body elected a committee of safety to act as the colony's executive, since Lord Dunmore had abandoned the capital and taken refuge on a British warship. Pendleton succeeded Randolph as president of the later conventions. Continental Congress Virginia selected Randolph as one of its delegates to the Continental Congress in Philadelphia in 1774 and 1775. Fellow delegates elected him president (speaker) of both the First Continental Congress (which requested that King George III repeal the Coercive Acts and passed the Continental Association) and the Second Continental Congress (which extended the Olive Branch Petition as a final attempt at reconciliation). However, Randolph fell ill during each term. Henry Middleton of South Carolina served as president from Randolph's resignation on October 22, 1774, two days after Randolph presided over the passage and signing of the Continental Association, until his return on May 10, 1775. He was again elected president of Congress, but Randolph left for Virginia four
that a map into the product space is continuous if and only if each of its compositions with the canonical projections is continuous. In many cases it is easier to check that the component functions are continuous. Checking whether a map out of a product space is continuous is usually more difficult; one tries to use the fact that the projections are continuous in some way. In addition to being continuous, the canonical projections are open maps. This means that any open subset of the product space remains open when projected down to the factors. The converse is not true: if a subspace of the product space has open projections down to all the factors, it need not itself be open in the product (consider for instance ℝ² ∖ (0, 1)², whose projections onto both axes are all of ℝ). The canonical projections are not generally closed maps (consider for example the closed set {(x, y) : xy = 1}, whose projections onto both axes are ℝ ∖ {0}, which is not closed). Suppose a product of arbitrary subsets is formed, one subset taken from each factor. If all of these subsets are non-empty, then the product is a closed subset of the product space if and only if every subset is a closed subset of its factor. More generally, the closure of the product of arbitrary subsets in the product space is equal to the product of the closures of those subsets. Any product of Hausdorff spaces is again a Hausdorff space. Tychonoff's theorem, which is equivalent to the axiom of choice, states that any product of compact spaces is a compact space. A specialization of Tychonoff's theorem that requires only the ultrafilter lemma (and not the full strength of the axiom of choice) states that any product of compact Hausdorff spaces is a compact space. If a point of the product is fixed, then the set of points that differ from it in only finitely many coordinates is a dense subset of the product space. Relation to other topological notions Separation Every product of T0 spaces is T0. Every product of T1 spaces is T1. Every product of Hausdorff spaces is Hausdorff. Every product of regular spaces is regular. Every product of Tychonoff spaces is Tychonoff. A product of normal spaces need not be normal. Compactness Every product of compact spaces is compact (Tychonoff's theorem). A product of locally compact spaces need not be locally compact.
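For finite spaces these claims can be checked by brute force. The sketch below (an illustrative aside, not from the article; the helper names are invented) builds the product topology of the Sierpiński space with itself and verifies that both canonical projections are open maps:

```python
from itertools import product, combinations

def generated_topology(points, basis):
    # All unions of basis sets; feasible only for tiny finite spaces.
    opens = {frozenset(), frozenset(points)}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            opens.add(frozenset().union(*combo))
    return opens

def product_topology(X, TX, Y, TY):
    # Basis of the product topology: "boxes" U x V with U open in X, V open in Y.
    pts = set(product(X, Y))
    basis = [frozenset(product(U, V)) for U in TX for V in TY]
    return generated_topology(pts, basis)

# Sierpinski space: the point 1 is "open", the point 0 is not.
X = {0, 1}
TX = {frozenset(), frozenset({1}), frozenset({0, 1})}

T = product_topology(X, TX, X, TX)
assert len(T) == 6  # empty, full, {(1,1)}, two open strips, their union

# Canonical projections are open maps: every open set projects to an open set.
for W in T:
    for axis in (0, 1):
        assert frozenset(p[axis] for p in W) in TX
```

The brute-force union step is exponential in the basis size, which is fine here; the point is only to make the open-map property concrete on a small example.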
However, an arbitrary product of locally compact spaces, where all but finitely many of the factors are compact, is locally compact (this condition is both sufficient and necessary). Connectedness Every product of connected (resp. path-connected) spaces is connected (resp. path-connected). Every product of hereditarily disconnected spaces is hereditarily disconnected. Metric spaces Countable products of metric spaces are metrizable spaces. Axiom of choice One of many ways to express the axiom of choice is to say that it is equivalent to the statement that the Cartesian product of a collection of non-empty sets is non-empty. The proof that this is equivalent to the statement of the axiom in terms of choice functions is immediate: one needs only to pick an element from each set to find a representative in the product. Conversely, a representative of the product is a set which contains exactly one element from each component. The axiom of choice occurs again in the study of (topological) product spaces; for example, Tychonoff's theorem on compact sets is a more complex and subtle example of a statement that requires the axiom of choice.
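The finite case of the equivalence between choice functions and product representatives can be sketched in a few lines of Python (illustrative only; the function name is invented, and no axiom is needed for finitely many concretely given sets — the axiom of choice matters for infinite families with no definable selection rule):

```python
def choice_tuple(sets):
    # A representative of the Cartesian product:
    # exactly one element picked from each non-empty factor.
    return tuple(next(iter(s)) for s in sets)

family = [{1, 2}, {"a"}, {3.0, 4.0}]
rep = choice_tuple(family)
assert len(rep) == len(family)
assert all(x in s for x, s in zip(rep, family))
```

The two assertions restate the text: a representative exists because each set is non-empty, and it contains exactly one element from each component.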
- Shin Saiyajin Zetsumetsu Keikaku Uchū Hen - [BAPD-11] 1995 (16 titles) 01/24 - Norimono Banzai!! - Kuruma Daishūgō!! - [BAPD-12] 01/24 - Norimono Banzai!! - Densha Daishūgō!! - [BAPD-13] 03/22 - Ie Naki Ko - Suzu no Sentaku - [VPRJ-09722] 03/22 - Gamera - The Time Adventure - [BAPD-15] 06/22 - Elements Voice Series vol.1 Mika Kanai - Wind&Breeze - [BAPD-18] 06/22 - Elements Voice Series vol.2 Rica Fukami - Private Step - [BAPD-19] 06/22 - Elements Voice Series vol.3 Aya Hisakawa - Forest Sways - [BAPD-20] 07/28 - Bishōjo Senshi Sailor Moon SuperS - Sailor Moon to Hiragana Lesson! - [BAPD-21] 07/28 - Ultraman - Hiragana Dai Sakusen - [BAPD-22] 07/28 - Ultraman - Alphabet TV e Yōkoso - [BAPD-23] 08/24 - Bishōjo Senshi Sailor Moon SS - Sailor Moon to Hajimete no Eigo - [BAPD-24] 08/24 - Bishōjo Senshi Sailor Moon SS - Yōkoso! Sailor Yōchien - [BAPD-25] 08/24 - Ultraman - Oide yo! Ultra Yōchien - [BAPD-26] 10/20 - Chōgōkin Selections - [BKPD-01] 11/16 - Elements Voice Series vol.4 Yuri Shiratori - Rainbow Harmony - [BKPD-02] 12/15 - Soreike! Anpanman - Picnic de Obenkyō - [BAPD-27] 1996 (6 titles) 03/22 - Ultraman - Sūji de Asobō Ultra Land - [BAPD-28] 03/22 - Ultraman - Ultraman Chinō UP Dai Sakusen - [BAPD-29] 03/27 - Elements Voice Series vol.5 Mariko Kouda - Welcome to the Marikotown! - [BKPD-03] 04/24 - Nintama Rantarō - Gungun Nobiru Chinō Hen - [BKPD-04] 05/15 - Nintama Rantarō - Hajimete Oboeru Chishiki Hen - [BKPD-05] 06/26 - Gekisou Sentai Carranger - Tatakae! Hiragana Racer - [BKPD-06] Not for sale (6 titles) Yumi to Tokoton Playdia - [BS-003] Go! Go! Ackman Planet -
support this console (except for VAP who published Ie Naki Ko - Suzu no Sentaku instead of Bandai). Playdia title complete list 1994 (11 titles) 09/23 - Dragon Ball Z - Shin Saiyajin Zetsumetsu Keikaku Chikyū Hen - [BAPD-01] 09/23 - Bishōjo Senshi Sailor Moon S - Quiz Taiketsu! Sailor Power Kesshū!!
- [BAPD-02] 09/23 - SD Gundam Daizukan - [BAPD-03] 09/28 - Ultraman Powered - Kaijū Gekimetsu Sakusen - [BAPD-04] 09/28 - Hello Kitty - Yume no Kuni Daibōken - [BAPD-05] 11/25 - Aqua Adventure - Blue Lilty - [BAPD-06] 11/25 - Newton museum - Kyōryū Nendaiki Zenpen - [BAPD-07] 11/25 - Newton museum - Kyōryū Nendaiki Kōhen - [BAPD-08] 12/08 - Shuppatsu! Dōbutsu Tankentai - [BAPD-09] 12/16 - Ultra Seven - Chikyū Bōei Sakusen - [BAPD-10] 12/16 - Dragon Ball Z - Shin Saiyajin Zetsumetsu Keikaku Uchū Hen - [BAPD-11]
Lack of morphophonemic variation Lack of tones, such as those found in Niger-Congo, Austroasiatic and Sino-Tibetan language families and in various families of the indigenous languages of the Americas Lack of grammatical tense; use of separate words to indicate tense, usually preceding the verb Lack of conjugation, declension or agreement Lack of grammatical gender or number, commonly supplanted by reduplication to represent plurals, superlatives, and other parts of speech that represent the concept being increased, along with clear indication of the gender of animate objects. Lack of clear parts of speech or word categorization; common use and derivation of new vocabulary through conversion, e.g. nominalization, verbification, adjectivization etc. Development The initial development of a pidgin usually requires: prolonged, regular contact between the different language communities a need to communicate between them an absence of (or absence of widespread proficiency in) a widespread, accessible interlanguage Keith Whinnom (in ) suggests that pidgins need three languages to form, with one (the superstrate) being clearly dominant over the others. Linguists sometimes posit that pidgins can become creole languages when a generation of children learn a pidgin as their first language, a process that regularizes speaker-dependent variation in grammar. Creoles can then replace the existing mix of languages to become the native language of a community (such as the Chavacano language in the Philippines, Krio in Sierra Leone, and Tok Pisin in Papua New Guinea). However, not all pidgins become creole languages; a pidgin may die out before this phase would occur (e.g. the Mediterranean Lingua Franca). Other scholars, such as Salikoko Mufwene, argue that pidgins and creoles arise independently under different circumstances, and that a pidgin need not always precede a creole nor a creole evolve from a pidgin.
Pidgins, according to Mufwene, emerged in trade colonies among "users who preserved their native vernaculars for their day-to-day interactions". Creoles, meanwhile, developed in settlement colonies in which speakers of a European language, often indentured servants whose language would be far from the standard in the first place, interacted extensively with non-European slaves, absorbing certain words and features from the slaves' non-European native languages, resulting in a heavily basilectalized version of the original language. These servants and slaves would come to use the creole as an everyday vernacular, rather than merely in situations in which contact with a speaker of the superstrate was necessary. Examples The following pidgins have Wikipedia articles or sections in articles. Many of these languages are commonly referred to by their speakers as "Pidgin". List of English-based pidgins Algonquian–Basque pidgin Arafundi-Enga Pidgin Bamboo English Barikanchi Pidgin Basque–Icelandic pidgin Bimbashi Arabic Bislama (creolized) Borgarmålet Bozal Spanish Broken Oghibbeway Broken Slavey and Loucheux Jargon Broome Pearling Lugger Pidgin Camtho Cameroonian Pidgin English (creolized) Cocoliche Chinook Jargon Duvle-Wano Pidgin Eskimo Trade Jargon Ewondo Populaire Fanagalo (Pidgin Zulu) Français Tirailleur Haflong
and often drawn from several languages. It is most commonly employed in situations such as trade, or where both groups speak languages different from the language of the country in which they reside (but where there is no common language between the groups). Linguists do not typically consider pidgins as full or complete languages. Fundamentally, a pidgin is a simplified means of linguistic communication, as it is constructed impromptu, or by convention, between individuals or groups of people. A pidgin is not the native language of any speech community, but is instead learned as a second language.
A pidgin may be built from words, sounds, or body language from a multitude of languages as well as onomatopoeia. As the lexicon of any pidgin will be limited to core vocabulary, words with only a specific meaning in the lexifier language may acquire a completely new (or additional) meaning in the pidgin. Pidgins have historically been considered a form of patois, unsophisticated simplified versions of their lexifiers, and as such usually have low prestige with respect to other languages. However, not all simplified or "unsophisticated" forms of a language are pidgins. Each pidgin has its own norms of usage which must be learned for proficiency in the pidgin. A pidgin differs from a creole, which is the first language of a speech community of native speakers that at one point arose from a pidgin. Unlike pidgins, creoles have fully developed vocabulary and patterned grammar. Most linguists believe that a creole develops through a process of nativization of a pidgin when the children of pidgin-speakers learn and use it as their native language.

Etymology

Pidgin derives from a Chinese pronunciation of the English word business, and all attestations from the first half of the nineteenth century given in the third edition of the Oxford English Dictionary mean "business; an action, occupation, or affair" (the earliest being from 1807). The term pidgin English ("business English"), first attested in 1855, shows the term in transition to referring to language, and by the 1860s the term pidgin alone could refer to Pidgin English.
Polish may refer to:
Polishing, the process of creating a smooth and shiny surface by rubbing or chemical action
French polishing, polishing wood to a high gloss finish
Nail polish
Shoe polish
Polish (screenwriting), improving a script in smaller ways than in
Poles, people from Poland or of Polish descent
Polish chicken
Polish brothers (Mark Polish and Michael Polish, born 1970), American twin screenwriters
and C-602 anti-ship missiles.

PLAN Marine Corps

The PLAN Marine Corps was originally established in the 1950s and then re-established in 1979 under PLAN organisation. It consists of around 20,000 marines, and is based in the South China Sea with the South Sea Fleet. The Marine Corps is considered an elite force; its rapid deployment troops are trained primarily in amphibious warfare and sometimes as paratroopers to establish a beachhead or act as a spearhead during assault operations against enemy targets. The marines are equipped with the standard Type 95 assault rifles as well as other small arms and personnel equipment, and a blue/littoral camouflage uniform as standard. The marines are also equipped with amphibious armoured fighting vehicles (including amphibious light tanks such as the Type 63, assault vehicles such as the ZTD-05 and IFVs such as the ZBD-05), helicopters, naval artillery, anti-aircraft weapon systems and short range surface-to-air missiles. With the PLAN's accelerating efforts to expand its capabilities beyond territorial waters, the Marine Corps is likely to play a greater role as an offshore expeditionary force, similar to the USMC and Royal Marines.

PLA Naval Air Force

The People's Liberation Army Naval Air Force (PLANAF) is the naval aviation branch of the PLAN and has a strength of around 25,000 personnel and 690 aircraft. It operates hardware similar to that of the People's Liberation Army Air Force, including fighter aircraft, bombers, attack aircraft, tankers, reconnaissance/early warning aircraft, electronic warfare aircraft, maritime patrol aircraft, transport aircraft and helicopters of various roles. The PLA Naval Air Force has traditionally operated from coastal air bases, and received older aircraft than the PLAAF with less ambitious steps towards mass modernization. Advancements in new technologies, weaponry and aircraft acquisition were made after 2000.
With the introduction of China's first aircraft carrier, Liaoning, in 2012, the Naval Air Force is conducting carrier-based operations for the first time with the goal of building carrier battle group-focused blue water capabilities. The PLANAF naval air bases include:
North Sea Fleet: Dalian, Qingdao, Jinxi, Jiyuan, Laiyang, Jiaoxian, Xingtai, Laishan, Anyang, Changzhi, Liangxiang and Shan Hai Guan
East Sea Fleet: Danyang, Daishan, Shanghai (Dachang), Ningbo, Luqiao, Feidong and Shitangqiao
South Sea Fleet: Foluo, Haikou, Lingshui, Sanya, Guiping, Jialaishi and Lingling

Relationship with other maritime organizations of China

The PLAN is complemented by paramilitary maritime services such as the China Coast Guard. The Chinese Coast Guard was previously not under an independent command: it was considered part of the People's Armed Police, under the local (provincial) border defense commands, prior to its reorganization and consolidation as a unified service. It was formed from the integration of several formerly separate services such as China Marine Surveillance (CMS), the General Administration of Customs, the Armed Police, China Fishery Law Enforcement and local maritime militia. The CMS performed mostly coastal and ocean search and rescue or patrols, and received quite a few large patrol ships that significantly enhanced its operations, while Customs, the militia, the Armed Police and Fishery Law Enforcement operated hundreds of small patrol craft. For maritime patrol services, these craft are usually quite well armed with machine guns and 37mm antiaircraft guns. In addition, these services operated their own small aviation fleets to assist their maritime patrol capabilities, with Customs and CMS operating a handful of Harbin Z-9 helicopters, and a maritime patrol aircraft based on the Harbin Y-12 STOL transport.
Every coastal province has 1 to 3 Coast Guard squadrons:
3 Squadrons: Fujian, Guangdong
2 Squadrons: Liaoning, Shandong, Zhejiang, Hainan, Guangxi
1 Squadron: Hebei, Tianjin, Jiangsu, Shanghai

Ranks

The ranks in the People's Liberation Army Navy are similar to those of the People's Liberation Army Ground Force. The current system of officer ranks and insignia dates from 1988 and is a revision of the ranks and insignia used from 1955 to 1965. The rank of Hai Jun Yi Ji Shang Jiang (First Class Admiral) was never held and was abolished in 1994. With the official introduction of the Type 07 uniforms, all officer insignia are on either shoulders or sleeves depending on the type of uniform used. The current system of enlisted ranks and insignia dates from 1998.

Commanders

Xiao Jinguang (January 1950 – January 1980)
Ye Fei (January 1980 – August 1982)
Liu Huaqing (August 1982 – January 1988)
Zhang Lianzhong (January 1988 – November 1996)
Shi Yunsheng (November 1996 – June 2003)
Zhang Dingfa (June 2003 – August 2006)
Wu Shengli (August 2006 – January 2017)
Shen Jinlong (January 2017 – September 2021)
Dong Jun (September 2021 – present)

Today

Strategy, plans, priorities

The People's Liberation Army Navy has become more prominent in recent years owing to a change in Chinese strategic priorities. The new strategic threats include possible conflict with the United States and/or a resurgent Japan in areas such as the Taiwan Strait or the South China Sea. As part of its overall program of naval modernization, the PLAN has a long-term plan of developing a blue water navy. Robert D. Kaplan has said that it was the collapse of the Soviet Union that allowed China to transfer resources from its army to its navy and other force projection assets. China is constructing a major underground nuclear submarine base near Sanya, Hainan. In December 2007 the first Type 094 submarine was moved to Sanya.
The Daily Telegraph on 1 May 2008 reported that tunnels were being built into hillsides which could be capable of hiding up to 20 nuclear submarines from spy satellites. According to the Western news media, the base is reportedly intended to help China project sea power well into the Pacific Ocean area, including challenging United States naval power. During a 2008 interview with the BBC, Major General Qian Lihua, a senior Chinese defense official, stated that the PLAN aspired to possess a small number of aircraft carriers to allow it to expand China's air defense perimeter. According to Qian, the important issue was not whether China had an aircraft carrier, but what it did with it. On 13 January 2009, Adm. Robert F. Willard, head of the U.S. Pacific Command, called the PLAN's modernization "aggressive," and said that it raised concerns in the region. On 15 July 2009, Senator Jim Webb of the Senate Foreign Relations Committee declared that only the "United States has both the stature and the national power to confront the obvious imbalance of power that China brings" to situations such as the claims to the Spratly and Paracel islands. Ronald O'Rourke of the Congressional Research Service wrote in 2009 that the PLAN "continues to exhibit limitations or weaknesses in several areas, including capabilities for sustained operations by larger formations in distant waters, joint operations with other parts of China’s military, C4ISR systems, anti-air warfare (AAW), antisubmarine warfare (ASW), MCM, and a dependence on foreign suppliers for certain key ship components." In 1998 China purchased the discarded Ukrainian ship Varyag and began retrofitting it for naval deployment. On 25 September 2012, the People's Liberation Army Navy took delivery of China's first aircraft carrier, the Liaoning. The 60,000-ton ship can accommodate 33 fixed-wing aircraft. It is widely speculated that these aircraft will be the J-15 fighter (the Chinese version of Russia's Su-33).
In September 2015, satellite images showed that China may have started constructing its first indigenous Type 002 aircraft carrier. At the time, the layout suggested a displacement of 50,000 tons and a hull about 240 m long with a beam of about 35 m; the incomplete bow suggested a length of at least 270 m for the completed hull. In April 2017 the carrier was launched. Japan has raised concerns about the PLAN's growing capability and the lack of transparency as its naval strength continues to expand. China has reportedly entered into service the world's first anti-ship ballistic missile, the DF-21D. The potential threat from the DF-21D against U.S. aircraft carriers has reportedly caused major changes in U.S. strategy. In June 2017 China launched a new type of large destroyer, the Type 055 destroyer. At 180 meters long and over 12,000 tons fully loaded, the new destroyer is the second largest destroyer class in the world after the American Zumwalt-class destroyer.

Territorial disputes

Spratly Islands dispute

The Spratly Islands dispute is a territorial dispute over the ownership of the Spratly Islands, a group of islands located in the South China Sea. States staking claims to various islands are Brunei, Malaysia, the Philippines, Taiwan, Vietnam, and the People's Republic of China. All except Brunei occupy some of the islands in dispute. The People's Republic of China conducted naval patrols in the Spratly Islands and established a permanent base. On 14 March 1988, Chinese and Vietnamese naval forces clashed over Johnson South Reef in the Spratly Islands, which involved three PLAN frigates. In February 2011, the Chinese frigate Dongguan fired three shots at Philippine fishing boats in the vicinity of Jackson Atoll. The shots were fired after the frigate instructed the fishing boats to leave, and one of those boats experienced trouble removing its anchor.
In May 2011, Chinese patrol boats attacked and cut the cables of Vietnamese oil exploration ships near the Spratly Islands. The incident sparked several anti-China protests in Vietnam. In June 2011, the Chinese navy conducted three days of exercises, including live fire drills, in the disputed waters. This was widely seen as a warning to Vietnam, which had also conducted live fire drills near the Spratly Islands. Chinese patrol boats fired repeated rounds at a target on an apparently uninhabited island, as twin fighter jets streaked in tandem overhead. 14 vessels participated in the maneuvers, staging antisubmarine and beach landing drills aimed at "defending atolls and protecting sea lanes." In May 2013, the Chinese navy's three operational fleets deployed together for the first time since 2010.
These combined naval maneuvers in the South China Sea coincided with the ongoing Spratly Islands dispute between China and the Philippines as well as the deployment of the U.S. Navy's Carrier Strike Group Eleven to the U.S. Seventh Fleet.

Senkaku Islands (Diaoyu) dispute

The Senkaku Islands dispute concerns a territorial dispute over a group of uninhabited islands known as the Diaoyu Islands in China, the Senkaku Islands in Japan, and the Tiaoyutai Islands in Taiwan. Aside from a 1945 to 1972 period of administration by the United States, the archipelago has been controlled by Japan since 1895. The People's Republic of China disputed the proposed U.S. handover of authority to Japan in 1971 and has asserted its claims to the islands since that time. Taiwan has also claimed these islands. The disputed territory is close to key shipping lanes and rich fishing grounds, and there may be major oil reserves in the area. On some occasions, ships and planes from various Mainland Chinese and Taiwanese government and military agencies have entered the disputed area. In addition to the cases where they escorted fishing and activist vessels, there have been other incursions. In an eight-month period in 2012, over forty maritime incursions and 160 aerial incursions occurred. For example, in July 2012, three Chinese patrol vessels entered the disputed waters around the islands. Military escalation continued in 2013. In February, Japanese Defense Minister Itsunori Onodera claimed that a Chinese frigate had locked weapons-targeting radar onto a Japanese destroyer and helicopter on two occasions in January. A Chinese Jiangwei II class frigate and a Japanese destroyer were three kilometers apart, and the crew of the latter vessel went to battle stations. The Chinese state media responded that their frigates had been engaged in routine training at the time. In late February 2013, U.S.
intelligence detected China moving road-mobile ballistic missiles closer to the coast near the disputed islands, including medium-range DF-16 anti-ship ballistic missiles. In May, a flotilla of Chinese warships from its North Sea Fleet deployed from Qingdao for training exercises in the western North Pacific Ocean. It is not known whether this deployment was related to the ongoing islands dispute between China and Japan.

Other incidents

On 22 July 2011, following its Vietnam port-call, the Indian amphibious assault vessel INS Airavat was reportedly contacted 45 nautical miles from the Vietnamese coast in the disputed South China Sea by a party identifying itself as the Chinese Navy and stating that the Indian warship was entering Chinese waters. According to a spokesperson for the Indian Navy, since no Chinese ships or aircraft were visible, the INS Airavat proceeded on her onward journey as scheduled. The Indian Navy further clarified that "[t]here was no confrontation involving the INS Airavat. India supports freedom of navigation in international waters, including in the South China Sea, and the right of passage in accordance with accepted principles of international law. These principles should be respected by all." On 11 July 2012, the Chinese frigate Dongguan ran aground on Hasa Hasa Shoal, located 60 nmi west of Rizal, within the Philippines' 200 nmi EEZ. By 15 July, the frigate had been refloated and was returning to port with no injuries and only minor damage. During this incident, the 2012 ASEAN summit took place in Phnom Penh, Cambodia, amid rising regional tensions.

2008 anti-piracy operations

On 18 December 2008, Chinese authorities deployed People's Liberation Army Navy vessels to escort Chinese shipping in the Gulf of Aden. This deployment came after a series of attacks and attempted hijackings on Chinese vessels by Somali pirates.
Reports suggest two destroyers (the Type 052C 171 Haikou and Type 052B 169 Wuhan) and a supply ship were used. This move was welcomed by the international community, as the warships complement a multinational fleet already operating along the coast of Africa. Since this operation, the PLAN has sought the leadership of the 'Shared Awareness and Deconfliction' (SHADE) body, which would require an increase in the number of ships contributing to the anti-piracy fleet. This is the first time Chinese warships have deployed outside the Asia-Pacific region for a military operation since Zheng He's expeditions in the 15th century. Since then, more than 30 People's Liberation Army Navy ships have deployed to the Gulf of Aden in 18 Escort Task Groups.

Libyan civil war

In the lead-up to the Libyan Civil War, the Xuzhou (530) was deployed from anti-piracy operations in the Gulf of Aden to help evacuate Chinese nationals from Libya.

Yemen conflict

During the Yemen conflict in 2015, the Chinese Navy diverted its frigates carrying out anti-piracy operations off Somalia to evacuate at least 600 Chinese and 225 foreign citizens working in Yemen. The majority of non-Chinese evacuees were 176 Pakistani citizens, although there were smaller numbers from other countries, such as Ethiopia, Singapore, the UK, Italy and Germany. Despite the evacuations, the Chinese embassy in Yemen continued to operate.

Equipment

As of 2018, the Chinese navy operates over 496 combat ships and 232 various auxiliary vessels and counts 255,000 seamen in its ranks. The Chinese Navy also employs more than 710 naval aircraft, including fighters, bombers and electronic warfare aircraft. China has a large amount of artillery, torpedoes, and missiles among its combat assets.
Ships and submarines

All ships and submarines currently in commission with the People's Liberation Army Navy were built in China, with the exception of the Sovremenny-class destroyers, Kilo-class submarines and the aircraft carrier Liaoning, which were imported from, or originated in, Russia or Ukraine. As of 2008, English-language official Chinese state media no longer uses the term "People's Liberation Army Navy"; instead, the term "Chinese Navy", along with the unofficial prefix "CNS" for "Chinese Navy Ship", is now employed. China employs a wide range of naval combatants, including aircraft carriers, amphibious warfare ships and destroyers. The Chinese Navy is rapidly modernizing, with nearly half of its combat ships built after 2010; China's state-owned shipyards have built 83 ships in just eight years, an unprecedented pace. China has its own independent maritime missile defense and naval combat system similar to the US Aegis.

Aircraft

China operates carrier-based fighter aircraft to secure land, air and sea targets. The Chinese Navy also operates a wide range of helicopters for battlefield logistics, reconnaissance, patrol and medical evacuation.

Naval weaponry

The unique QBS-06 is an underwater assault rifle chambered for the 5.8×42 mm DBS-06 cartridge, and is used by naval frogmen. It is based on the Soviet APS. In early February 2018, pictures of what was claimed to be a Chinese railgun were published online. In the pictures, the gun is mounted on the bow of the Type 072III-class landing ship Haiyangshan. Media reports suggested that the system was, or soon would be, ready for testing. In March 2018, it was reported that China had confirmed that it had begun testing its electromagnetic rail gun at sea.
Future of the People's Liberation Army Navy

The PLAN's ambitions include operating out to the first and second island chains, as far as the South Pacific near Australia and the Aleutian Islands, with operations extending to the Strait of Malacca near the Indian Ocean. The future PLAN fleet will be composed of a balance of combatant assets aimed at maximising the PLAN's fighting effectiveness. On the high end, there would be modern stealth guided missile destroyers equipped with long-range air defense missiles and anti-submarine capabilities (Type 055), modern destroyers equipped with long-range air defense missiles (Type 052B, Type 052C, Type 052D and Type 051C), and destroyers armed with supersonic anti-ship missiles (Sovremenny class). There would be advanced nuclear-powered attack and ballistic missile submarines (Type 093, Type 095, Type 094, Type 096), advanced conventional attack submarines (Kilo and Yuan classes), aircraft carriers (Type 001, Type 002 and Type 003), helicopter carriers (Type 075) and large amphibious warfare vessels (Type 071) capable of mobilizing troops at long distances. On the medium and low end, there would be more economical multi-role capable frigates and destroyers (Luhu, Jiangwei II and Jiangkai classes), corvettes (Jiangdao class), fast littoral missile attack craft (Houjian, Houxin and Houbei classes), various landing ships and light craft, and conventionally powered coastal patrol submarines (Song class). The obsolete combat ships (based on 1960s designs) will be phased out in the coming decades as more modern designs enter full production. It may take a decade for the bulk of these older ships to be retired. Until then, they will serve principally on the low end, as multi-role patrol/escort platforms. Their
Although Macnee evolved in the role as the series progressed, the key elements of Steed's persona and appearance were there from very early on: the slightly mysterious demeanour and, increasingly, the light, suave, flirting tone with ladies (and always with his female assistants). Finally, from the episodes with Blackman onwards, the trademark bowler hat and umbrella completed the image. Although the ensemble of suit, umbrella and bowler was traditionally associated with London "city gents", it had developed in the post-war years as mufti for ex-servicemen attending Armistice Day ceremonies. Steed's sartorial style may also have been drawn from Macnee's father. Macnee, alongside designer Pierre Cardin, adapted the look into a style all his own, and he went on to design several outfits himself for Steed based on the same basic theme. Steed was also the central character of The New Avengers (1976–77), in which he was teamed with agents named Purdey (Joanna Lumley) and Mike Gambit (Gareth Hunt). Macnee insisted on, and was proud of, almost never carrying a gun in the original series; when asked why, he explained, "I'd just come out of a World War in which I'd seen most of my friends blown to bits." Lumley later said she did all the gun-slinging in The New Avengers for the same reason. When asked in June 1982 which Avengers female lead was his favourite, Macnee declined to give a specific answer. "Well, I'd rather not say. To do so would invite trouble," he told TV Week magazine. Macnee did, however, provide his evaluation of the female leads. Of Honor Blackman he said, "She was wonderful, presenting the concept of a strong-willed, independent and liberated woman just as that sort of woman was beginning to emerge in society." Diana Rigg was "One of the world's great actresses. A superb comedienne. I'm convinced that one day she'll be Dame Diana" (his prediction came true in 1994). Linda Thorson was "one of the sexiest women alive", while Joanna Lumley was "superb in the role of Purdey. An actress who is only now realising her immense potential."
Macnee co-wrote two original novels based upon The Avengers during the 1960s, titled Dead Duck and Deadline. He hosted the documentary The Avengers: The Journey Back (1998), directed by Clyde Lucas. For the critically lambasted film version of The Avengers (1998), he lent his voice in a cameo as Invisible Jones. The character John Steed was taken over by Ralph Fiennes. Later roles Macnee's other significant roles included playing Sir Godfrey Tibbett opposite Roger Moore in the James Bond film A View to a Kill (1985), as Major Crossley in The Sea Wolves (again with Moore), guest roles in Encounter, Alias Smith and Jones (for Glen A. Larson), Magnum, P.I., Hart to Hart, Murder, She Wrote and The Love Boat. Although his best known role was heroic, many of his television appearances were as villains; among them were his roles of both the demonic Count Iblis and his provision of the character voice of the Cylons' Imperious Leader in Battlestar Galactica, also for Glen A. Larson, for which he also supplied the show's introductory voiceover. He also presented the American paranormal series Mysteries, Magic and Miracles. Macnee appeared on Broadway as the star of Anthony Shaffer's mystery Sleuth in 1972–73. He subsequently headlined the national tour of that play. Macnee reunited with Diana Rigg in her short-lived sitcom Diana (1973) in a single episode. Other television appearances include a guest appearance on Columbo in the episode "Troubled Waters" (1975); and playing Major Vickers in For the Term of his Natural Life (1983). He had recurring roles in the crime series Gavilan with Robert Urich and in the short-lived satire on big business, Empire (1984), as Dr. Calvin Cromwell. Macnee was known for narrating various James Bond Documentaries on Special Edition DVD. He also narrated the documentary Ian Fleming: 007's Creator (2000). 
Macnee worked in small film parts before graduating to credited roles in such films as Scrooge (US: A Christmas Carol, 1951), as young Jacob Marley, the Gene Kelly vehicle Les Girls (1957), as an Old Bailey barrister, and the war film The Battle of the River Plate (1956). Between these occasional movie roles, Macnee spent the better part of the 1950s working in dozens of small roles in American and Canadian television and theatre, including an appearance in an episode of One Step Beyond ("Night of April 14th") and The Twilight Zone ("Judgment Night") in 1959. Disappointed in his limited career development, by the late 1950s Macnee was smoking 80 cigarettes and drinking a bottle of whisky daily. Not long before his career-making role in The Avengers, Macnee took a break from acting and served as one of the London-based producers for the classic documentary series The Valiant Years, based on the Second World War memoirs of Winston Churchill. The Avengers While working in London on the Churchill series, Macnee was offered the role in The Avengers (1961–69) (originally intended to be known as Jonathan Steed), for which he became best known. The series was conceived as a vehicle for Ian Hendry, who played the lead role of Dr. David Keel in a sequel to an earlier series, Police Surgeon (1960), while John Steed was his assistant. Macnee, though, became the lead after Hendry's departure at the end of the first season. Macnee played opposite a succession of glamorous female partners: Honor Blackman, Diana Rigg and Linda Thorson. Of the 161 completed episodes, Macnee appeared in all but two, both from the first season. Although Macnee evolved in the role as the series progressed, the key elements of Steed's persona and appearance were there from very early on: the slightly mysterious demeanour and, increasingly, the light, suave, flirting tone with ladies (and always with his female assistants). 
Finally, from the episodes with Blackman onwards, the trademark bowler hat and umbrella completed the image. Although it traditionally was associated with London "city gents", the ensemble of suit, umbrella and bowler had developed in the post-war years as mufti for ex-servicemen attending Armistice Day ceremonies. Steed's sartorial style may also have been drawn from Macnee's father. Macnee, alongside designer Pierre Cardin, adapted the look into a style all his own, and he went on to design several outfits himself for Steed based on the same basic theme. Steed was also the central character of The New Avengers (1976–77), in which he was teamed with agents named Purdey (Joanna Lumley) and Mike Gambit (Gareth Hunt). Macnee insisted on, and was proud of, almost never carrying a gun in the original series; when asked why, he explained, "I'd just come out of a World War in which I'd seen most of my friends blown to bits." Lumley later said she did all the gun-slinging in The New Avengers for the same reason. When asked in June 1982 which Avengers female lead was his favourite, Macnee declined to give a specific answer. "Well, I'd rather not say. To do so would invite trouble," he told TV Week magazine. Macnee did provide his evaluation of the female leads. Of Honor Blackman he said, "She was wonderful, presenting the concept of a strong-willed, independent and liberated woman just as that sort of woman was beginning to emerge in society." Diana Rigg was "One of the world's great actresses. A superb comedienne. I'm convinced that one day she'll be Dame Diana" (his prediction came true in 1994). Linda Thorson was "one of the sexiest women alive" while Joanna Lumley was "superb in the role of Purdey. An actress who is only now realising her immense potential." Macnee co-wrote two original novels based upon The Avengers during the 1960s, titled Dead Duck and Deadline. He hosted the documentary The Avengers: The Journey Back (1998), directed by Clyde Lucas. 
For the critically lambasted film version of The Avengers (1998), he lent his voice in a cameo as Invisible Jones. The character John Steed was taken over by Ralph Fiennes. Later roles Macnee's other significant roles included playing Sir Godfrey Tibbett opposite Roger Moore in the James Bond film A View to a Kill (1985), Major Crossley in The Sea Wolves (again with Moore), and guest roles in Encounter, Alias Smith and Jones (for Glen A. Larson), Magnum, P.I., Hart to Hart, Murder, She Wrote and The Love Boat. Although his best known role was heroic, many of his television appearances were as villains; among them were the demonic Count Iblis and the voice of the Cylons' Imperious Leader in Battlestar Galactica, also for Glen A. Larson, for which he also supplied the show's introductory voiceover. He also presented the American paranormal series Mysteries, Magic and Miracles. Macnee appeared on Broadway as the star of Anthony Shaffer's mystery Sleuth in 1972–73. He subsequently headlined the national tour of that play. Macnee reunited with Diana Rigg for a single episode of her short-lived sitcom Diana (1973). Other television appearances include a guest appearance on Columbo in the episode "Troubled Waters" (1975) and the role of Major Vickers in For the Term of His Natural Life (1983). He had recurring roles in the crime series Gavilan with Robert Urich and in the short-lived satire on big business, Empire (1984), as Dr. Calvin Cromwell. Macnee was known for narrating various James Bond documentaries on special edition DVDs. He also narrated the documentary Ian Fleming: 007's Creator (2000). Macnee featured prominently in two editions of the long-running British television series This Is Your Life: in 1978, when he and host Eamonn Andrews, both dressed as Steed, surprised Ian Hendry, and in 1984 when he was the edition's unsuspecting subject. He also appeared in several cult films: in The Howling (1981), as Dr. 
George Waggner (named whimsically after the director of The Wolf Man, 1941), and as Sir Denis Eton-Hogg in the rockumentary comedy This Is Spinal Tap (1984). He played Dr. Stark in The Creature Wasn't Nice (1981), also called Spaceship and Naked Space. Macnee played the role of actor David Mathews in the television movie Rehearsal for Murder (1982), which starred Robert Preston and Lynn Redgrave. The movie was from a script written by Columbo co-creators Richard Levinson and William Link. He took over Leo G. Carroll's role as the head of U.N.C.L.E., playing Sir John Raleigh in Return of the Man from U.N.C.L.E.: The Fifteen-Years-Later Affair (1983), produced by Michael Sloan. He was featured in the science fiction television movie Super Force (1990) as E. B. Hungerford (the series which followed featured only Macnee's voice as a Max Headroom-style computer simulation of his character), in the parody film Lobster Man from Mars (1989) as Professor Plocostomos, and in the television film The Return of Sam McCloud (1989) as Tom Jamison. He made an appearance in Frasier (2001), and appeared in several episodes of the American sci-fi series Nightman as Dr. Walton, a psychiatrist who would advise Johnny/Nightman. Macnee appeared in two episodes of the series Kung Fu: The Legend Continues (1993–94).
In 1347, Edward III granted Marie de St Pol, widow of the Earl of Pembroke, the licence for the foundation of a new educational establishment in the young university at Cambridge. The Hall of Valence Mary ("Custos & Scolares Aule Valence Marie in Cantebrigg'"), as it was originally known, was thus founded to house a body of students and fellows. The statutes were notable in that they both gave preference to students born in France who had already studied elsewhere in England, and that they required students to report fellow students if they indulged in excessive drinking or visited disreputable houses. The college was later renamed Pembroke Hall, and finally became Pembroke College in 1856. Marie was closely involved with College affairs in the 30 years until her death in 1377. She seems to have been something of a disciplinarian: the original Foundation documents had strict penalties for drunkenness and lechery, required that all students' debts were settled within two weeks of the end of term, and gave strict limits on numbers at graduation parties. In 2015, the college received a bequest of £34 million from the estate of American inventor and Pembroke alumnus Ray Dolby, thought to be the largest single donation to a college in the history of Cambridge University. Buildings Old Court The first buildings comprised a single court (now called Old Court) containing all the component parts of a college – chapel, hall, kitchen and buttery, master's lodgings, students' rooms – and the statutes provided for a manciple, a cook, a barber and a laundress. Both the founding of the college and the building of the city's first college Chapel (1355) required the grant of a papal bull. The original court was the university's smallest, but was enlarged to its current size in the nineteenth century by demolishing the south range. The college's gatehouse is the oldest in Cambridge. 
Chapel The original Chapel now forms the Old Library and has a striking seventeenth-century plaster ceiling, designed by Henry Doogood, showing birds flying overhead. Around the Civil War, one of Pembroke's fellows and Chaplain to the future Charles I, Matthew Wren, was imprisoned by Oliver Cromwell. On his release after eighteen years, he fulfilled a promise by hiring his nephew Christopher Wren to build a great Chapel in his former college. The resulting Chapel was consecrated on St Matthew's Day, 1665, and the eastern end was extended by George Gilbert Scott in 1880, when it was consecrated on the Feast of the Annunciation. Expansion An increase in membership over the last 150 years saw a corresponding increase in building activity. The Hall was rebuilt in 1875–1876 to designs by Alfred Waterhouse after he had declared the medieval Hall unsafe. As well as the Hall, Waterhouse designed a new range of rooms, Red Buildings (1871–1872), in French Renaissance style, designed a new Master's Lodge on the site of Paschal Yard (1873, later to become N staircase), pulled down the old Lodge and the south range of Old Court to open a vista to the chapel, and finally designed a new Library (1877–1878) in the continental Gothic style. The construction of the new library was undertaken by Rattee and Kett. Waterhouse was dismissed as architect in 1878 and succeeded by George Gilbert Scott, who, after extending the chapel, provided additional accommodation with the construction of New Court in 1881, with letters on a series of shields along the string course above the first floor spelling out the text from Psalm 127:1, ("Except the Lord build the house, their labour is but vain that build it"). Building work continued into the 20th century with W. D. Caröe as architect. He added Pitt Building (M staircase) between Ivy Court and Waterhouse's Lodge, and extended New Court with the construction of O staircase on the other side of the Lodge. 
He linked his two buildings with an arched stone screen, Caröe Bridge, along Pembroke Street in a late Baroque style, the principal function of which was to act as a bridge by which undergraduates might cross the Master's forecourt at first-floor level from Pitt Building to New Court without leaving the college or trespassing in what was then the Fellows' Garden. In 1926, as the Fellows had become increasingly disenchanted with Waterhouse's Hall, Maurice Webb was brought in to remove the open roof, put in a flat ceiling and add two storeys of sets above. The wall between the Hall and the Fellows' Parlour was taken down, and the latter made into a High Table dais. A new Senior Parlour was then created on the ground floor of Hitcham Building. The remodelling work was completed in 1949 when Murray Easton replaced the Gothic tracery of the windows with a simpler design in the style of the medieval Hall. In 1933 Maurice Webb built a new Master's Lodge in the south-east corner of the College gardens, on land acquired from Peterhouse in 1861. Following the war, further accommodation was created with the construction in 1957 of Orchard Building, so called because it stands on part of the Foundress's orchard. Finally, in a move to accommodate the majority of junior members on the College site rather than in hostels in the town, in the 1990s Eric Parry designed a new range of buildings on the site of the Master's Lodge, with a new Lodge at the west end. "Foundress Court" was opened in 1997 in celebration of the college's 650th Anniversary. In 2001 the Library was extended to the east and modified internally. In 2017, Pembroke College launched a new campaign of extension called the "Time and the place" (or the Mill Lane project), on the other side of Trumpington Street. The project is to enlarge the size of the college by a third, with new social spaces, rooms and offices. Gardens Pembroke's enclosed grounds include garden areas. 
Highlights include "The Orchard" (a patch of semi-wild ground in the centre of the college), an impressive row of plane trees and a bowling green, re-turfed in 1996, which is reputed to be among the oldest in continual use in Europe. Coat of arms The arms of Pembroke College were officially recorded in 1684. The formal blazon combines the arms of De Valence (bars), dimidiated with the arms of St. Pol (vair). It is described as: Barry of ten argent and azure, an orle of five martlets gules dimidiated with paly vair and gules, on a chief Or a label of five points throughout azure. Traditions Pembroke holds Formal Hall every evening. Students of the college must wear gowns and arrive on time for the Latin Grace, which starts the dinner. Like many Cambridge colleges, Pembroke also has an annual May Ball. According to popular legend, Pembroke is inhabited by ghosts occupying Ivy Court. Student life Pembroke College has both graduate and undergraduate students, termed Valencians after the college's original name, and its recreational rooms are known as "parlours" rather than the more standard "combination rooms". The undergraduate student body is represented by the Junior Parlour Committee (JPC). The graduate community is represented by the Graduate Parlour Committee (GPC). In March 2016, the Junior Parlour Committee was featured in national newspapers after it cancelled the theme of an "Around The World in 80 Days" dance party. There are many clubs and societies organised by the students of the college, such as the boat club Pembroke College Boat Club and the college's dramatic society the Pembroke Players, which has been made famous by alumni including Peter Cook, Eric Idle, Tim Brooke-Taylor, Clive James and Bill Oddie and is now in its 60th year. Other sporting highlights include Pirton RUFC, the rugby union team run jointly with Girton College. 
International programmes Pembroke is the only Cambridge college to have an International Programmes Department.
take an irreducible polynomial f in a polynomial ring K[x] over some field K: the ideal generated by f is a prime ideal. If R denotes the ring C[x, y] of polynomials in two variables with complex coefficients, then the ideal generated by the defining polynomial of an elliptic curve is a prime ideal (see elliptic curve). In the ring Z[x] of all polynomials with integer coefficients, the ideal generated by 2 and x is a prime ideal. It consists of all those polynomials whose constant coefficient is even. In any ring R, a maximal ideal is an ideal I that is maximal in the set of all proper ideals of R, i.e. I is contained in exactly two ideals of R, namely I itself and the whole ring R. Every maximal ideal is in fact prime. In a principal ideal domain every nonzero prime ideal is maximal, but this is not true in general. For the UFD C[x_1, ..., x_n], Hilbert's Nullstellensatz states that every maximal ideal is of the form (x_1 − a_1, ..., x_n − a_n). If M is a smooth manifold, R is the ring of smooth real functions on M, and x is a point in M, then the set of all smooth functions f with f(x) = 0 forms a prime ideal (even a maximal ideal) in R. Non-examples Consider the composition of two quotient maps: although the first two rings are integral domains (in fact the first is a UFD), the last is not an integral domain, showing that the ideal is not prime. (See the first property listed below.) Another non-example is an ideal that contains a product of two elements, neither of which is itself an element of the ideal. Properties An ideal I in the ring R (with unity) is prime if and only if the factor ring R/I is an integral domain. In particular, a commutative ring R (with unity) is an integral domain if and only if (0) is a prime ideal. An ideal I is prime if and only if its set-theoretic complement R ∖ I is multiplicatively closed. Every nonzero ring contains at least one prime ideal (in fact it contains at least one maximal ideal), which is a direct consequence of Krull's theorem. 
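The first property above (an ideal I is prime exactly when the factor ring R/I is an integral domain) can be checked by hand for the integers, where the ideal (n) is prime precisely when Z/nZ has no zero divisors. A brute-force sketch, with function names of our own choosing:

```python
# (n) is a prime ideal of Z  <=>  Z/nZ is an integral domain
# (i.e. has no zero divisors).  Brute-force check for small n >= 2.

def has_zero_divisors(n: int) -> bool:
    """True if Z/nZ contains nonzero a, b with a*b = 0 (mod n)."""
    return any((a * b) % n == 0
               for a in range(1, n)
               for b in range(1, n))

def is_prime_ideal_of_Z(n: int) -> bool:
    """Whether the ideal (n) is prime in Z, via the quotient criterion."""
    return not has_zero_divisors(n)

print(is_prime_ideal_of_Z(5))   # True:  Z/5Z is a field
print(is_prime_ideal_of_Z(6))   # False: 2 * 3 = 0 in Z/6Z
```

For n = 6 the witness pair 2, 3 shows directly why (6) fails the defining condition: 2 · 3 lies in (6) while neither factor does.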
More generally, if S is any multiplicatively closed set in R, then a lemma essentially due to Krull shows that there exists an ideal of R maximal with respect to being disjoint from S, and moreover the ideal must be prime. This can be further generalized to noncommutative rings (see below). In the case S = {1} we have Krull's theorem, and this recovers the maximal ideals of R. Another prototypical m-system is the set {x, x², x³, ...} of all positive powers of a non-nilpotent element x. The preimage of a prime ideal under a ring homomorphism is a prime ideal. The analogous fact is not always true for maximal ideals, which is one reason algebraic geometers define the spectrum of a ring to be its set of prime rather than maximal ideals; one wants a homomorphism of rings to give a map between their spectra. Not every ideal which cannot be factored into two ideals is a prime ideal. In a commutative ring with at least two elements, if every proper ideal is prime, then the ring is a field. (If the ideal (0) is prime, then the ring is an integral domain. If q is any non-zero element of R and the ideal (q²) is prime, then it contains q and then q is invertible.) A nonzero principal ideal is prime if and only if it is generated by a prime element. In a UFD, every nonzero prime ideal contains a prime element. Uses One use of prime ideals occurs in algebraic geometry, where varieties are defined as the zero sets of ideals in polynomial rings. It turns out that the irreducible varieties correspond to prime ideals. In the modern abstract approach, one starts with an arbitrary commutative ring and turns the set of its prime ideals, also called its spectrum, into a topological space and can thus define generalizations of varieties called schemes, which find applications not only in geometry, but also in number theory. 
The introduction of prime ideals in algebraic number theory was a major step forward: it was realized that the important property of unique factorisation expressed in the fundamental theorem of arithmetic does not hold in every ring of algebraic integers, but a substitute was found when Richard Dedekind replaced elements by ideals and prime elements by prime ideals; see Dedekind domain. Prime ideals for noncommutative rings The notion of a prime ideal can be generalized to noncommutative rings by using the commutative definition "ideal-wise". Wolfgang Krull advanced this idea in 1928. The following content can be found in texts such as Goodearl's and Lam's. If R is a (possibly noncommutative) ring and P is a proper ideal of R, we say that P is prime if for any two ideals A and B of R: if the product of ideals AB is contained in P, then at least one of A and B is contained in P. It can be shown that this definition is equivalent to the commutative one in commutative rings. It is readily verified that if an ideal of a noncommutative ring satisfies the commutative definition of prime, then it also satisfies the noncommutative version. An ideal satisfying the commutative definition of prime is sometimes called a completely prime ideal to distinguish it from other merely prime ideals in the ring. Completely prime ideals are prime ideals, but the converse is not true. For example, the zero ideal in the ring of n × n matrices over a field is a prime ideal, but it is not completely prime. This is close to the historical point of view of ideals as ideal numbers, as for the ring Z "A is contained in (n)" is another way of saying "n divides A", and the unit ideal represents unity. Equivalent formulations of the ideal P ≠ R being prime include the following properties: For all a and b in R, (a)(b) ⊆ P implies a ∈ P or b ∈ P. For any two right ideals A and B of R, AB ⊆ P implies A ⊆ P or B ⊆ P. For any two left ideals A and B of R, AB ⊆ P implies A ⊆ P or B ⊆ P. For any elements a and b of R, if aRb ⊆ P, then a ∈ P or b ∈ P. 
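The matrix-ring example above (the zero ideal being prime but not completely prime) can be checked concretely. This sketch represents 2×2 matrices as tuples of rows; the helper and the chosen matrices are our own minimal setup, not from the text:

```python
# In M_2(F) the zero ideal is NOT completely prime: two nonzero
# matrices can multiply to zero.

def matmul(A, B):
    """2x2 matrix product (matrices given as tuples of row tuples)."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

E11 = ((1, 0), (0, 0))    # nonzero matrix unit
E22 = ((0, 0), (0, 1))    # nonzero matrix unit
ZERO = ((0, 0), (0, 0))

# ab = 0 with a != 0 and b != 0, so "ab in (0) => a in (0) or b in (0)"
# fails: (0) is not completely prime.
assert matmul(E11, E22) == ZERO

# Yet under the element-wise criterion "aRb <= P => a in P or b in P",
# a suitable middle factor keeps the product away from zero:
X = ((0, 1), (0, 0))
assert matmul(matmul(E11, X), E22) != ZERO   # E11 * X * E22 != 0
```

So no two nonzero elements a, b satisfy aRb ⊆ (0), consistent with (0) being prime in the ideal-wise sense (the matrix ring is simple, so its only ideals are (0) and the whole ring).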
Prime ideals in commutative rings are characterized by having multiplicatively closed complements in R, and with slight modification, a similar characterization can be formulated for prime ideals in noncommutative rings. A nonempty subset S ⊆ R is called an m-system if for any a and b in S, there exists r in R such that arb is in S. The following item can then be added to the list of equivalent conditions above: the complement R ∖ P is an m-system.
Publications grew skeptical of how well it would perform in the market due to its inferior hardware and the number of competing platforms. The Tetsujin was scrapped in early 1994 as the two companies began work on designing an improvement that could compete with systems such as the Sega Saturn. While NEC and Hudson knew that the system's technology was unimpressive, time constraints prevented them from creating a new one from scratch; the redesign was codenamed "FX". The system was redesigned to resemble a PC tower, with slots that allowed future models to increase its capabilities. Very little of the hardware itself was changed from the Tetsujin prototype, although it added a new 32-bit V-810 RISC CPU. The system was renamed the PC-FX, the "PC" believed to be a nod to the PC Engine brand. Unusually for a fifth-generation console, the PC-FX does not have a polygon graphics processor. NEC's reasoning was that polygon processors of the time were relatively low-powered, resulting in figures having a blocky appearance, and that it would be better for games to use pre-rendered polygon graphics instead. The PC-FX was announced in late 1993 and showcased at the 1994 Tokyo Toy Show in June. Presented alongside several competing systems—the PlayStation, Sega Saturn, Neo Geo CD, and Bandai Playdia—its PC tower design was met with ridicule from commentators. Hudson demonstrated FX Fighter, a full-motion video fighting game created in response to Sega's Virtua Fighter, to showcase the system's capabilities. Its smooth-shaded polygonal visuals were met with praise from publications, which contributed to anticipation for the console's launch. The system's target audience was roughly five years older than that of the PC Engine, in the hope that PC Engine fans would be brought over to the successor console. The console was launched in Japan on December 23, 1994. 
In an interview roughly a year before the system launch, a representative stated that NEC had all but ruled out a release outside Japan, concluding that it would most likely sell poorly overseas due to its high price. The PC-FX was discontinued in early 1998 with only 400,000 units sold. Technical specifications The PC-FX uses CD-ROMs as its storage medium, following on from the expansion released for its HuCard-based predecessor. The game controller is virtually identical to a DUO-RX controller, but the rapid-fire switches have been replaced with mode A/B switches. Peripherals include a PC-FX mouse, which is supported by strategy games like Farland Story FX and Power DoLLS FX. The standout quality of the PC-FX is its ability to decompress 30 JPEG pictures per second while playing digitally recorded audio, essentially a form of Motion JPEG. This gives the PC-FX full-motion video quality superior to all other fifth-generation consoles. The PC-FX's computer-like form factor is unusual for consoles of the time. It stands upright like a tower computer while other contemporary consoles lay flat, and it has three expansion ports. Similar to the 3DO, it features a built-in power supply. The PC-FX includes an HU 62 series 32-bit system
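Motion JPEG, as mentioned above, treats video as a stream of independently compressed JPEG stills decoded one after another (30 per second in the PC-FX's case). A minimal, illustrative sketch of slicing such a stream into frames; the marker constants come from the JPEG format, while the helper name and dummy payloads are our own (a real player must also handle restart markers and byte-stuffing inside the entropy-coded data):

```python
# A Motion JPEG stream is, at its simplest, back-to-back JPEG images.
# Each frame starts with the SOI marker (FF D8) and ends with the EOI
# marker (FF D9); a player decodes them one per display interval.
SOI = b"\xff\xd8"
EOI = b"\xff\xd9"

def split_mjpeg(stream: bytes) -> list:
    """Slice a concatenated-JPEG byte stream into individual frames."""
    frames, pos = [], 0
    while True:
        start = stream.find(SOI, pos)
        if start < 0:
            break
        end = stream.find(EOI, start)
        if end < 0:
            break                      # incomplete trailing frame
        frames.append(stream[start:end + len(EOI)])
        pos = end + len(EOI)
    return frames

# Two dummy "frames" (payloads are placeholders, not real JPEG data):
stream = SOI + b"frame-1" + EOI + SOI + b"frame-2" + EOI
print(len(split_mjpeg(stream)))        # 2
```

Because every frame is a complete still image, seeking and frame-accurate editing are trivial, at the cost of the higher bitrate that inter-frame codecs avoid.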
for child and teenage clients. Similarly in Italy, the practice of psychotherapy is restricted to graduates in psychology or medicine who have completed four years of recognised specialist training. Sweden has a similar restriction on the title "psychotherapist", which may only be used by professionals who have gone through a post-graduate training in psychotherapy and then applied for a licence, issued by the National Board of Health and Welfare. Legislation in France restricts the use of the title "psychotherapist" to professionals on the National Register of Psychotherapists, which requires a training in clinical psychopathology and a period of internship which is only open to physicians or holders of a master's degree in psychology or psychoanalysis. Austria and Switzerland (2011) have laws that recognize multi-disciplinary functional approaches. In the United Kingdom, the government and Health and Care Professions Council considered mandatory legal registration but decided that it was best left to professional bodies to regulate themselves, so the Professional Standards Authority for Health and Social Care (PSA) launched an Accredited Voluntary Registers scheme. Counselling and psychotherapy are not protected titles in the United Kingdom. Counsellors and psychotherapists who have trained and qualified to a certain standard (usually a level 4 Diploma) can apply to be members of the professional bodies who are listed on the PSA Accredited Registers. United States In some states, counselors or therapists must be licensed to use certain words and titles on self-identification or advertising. In some other states, the restrictions on practice are more closely associated with the charging of fees. Licensing and regulation are performed by various states. Presentation of practice as licensed, but without such a license, is generally illegal. Without a license, for example, a practitioner cannot bill insurance companies. 
Information about state licensure is provided by the American Psychological Association. In addition to state laws, the American Psychological Association requires its members to adhere to its published Ethical Principles of Psychologists and Code of Conduct. The American Board of Professional Psychology examines and certifies "psychologists who demonstrate competence in approved specialty areas in professional psychology". Canada Regulation of psychotherapy is in the jurisdiction of, and varies among, the provinces and territories. In Quebec, psychotherapy is a regulated activity which is restricted to psychologists, medical doctors, and holders of a psychotherapy permit issued by the Ordre des psychologues du Québec, the Quebec order of psychologists. Members of certain specified professions, including social workers, couple and family therapists, occupational therapists, guidance counsellors, criminologists, sexologists, psychoeducators, and registered nurses may obtain a psychotherapy permit by completing certain educational and practice requirements; their professional oversight is provided by their own professional orders. Some other professionals who were practising psychotherapy before the current system came into force continue to hold psychotherapy permits alone. History Psychotherapy can be said to have been practiced through the ages, as medics, philosophers, spiritual practitioners and people in general used psychological methods to heal others. In the Western tradition, by the 19th century, a moral treatment movement (then meaning morale or mental) developed based on non-invasive non-restraint therapeutic methods. Another influential movement was started by Franz Mesmer (1734–1815) and his student Armand-Marie-Jacques de Chastenet, Marquis of Puységur (1751–1825). Called Mesmerism or animal magnetism, it would have a strong influence on the rise of dynamic psychology and psychiatry as well as theories about hypnosis. 
In 1853, Walter Cooper Dendy introduced the term "psycho-therapeia" regarding how physicians might influence the mental states of sufferers and thus their bodily ailments, for example by creating opposing emotions to promote mental balance. Daniel Hack Tuke cited the term and wrote about "psycho-therapeutics" in 1872, in which he also proposed making a science of animal magnetism. Hippolyte Bernheim and colleagues in the "Nancy School" developed the concept of "psychotherapy" further, in the sense of using the mind to heal the body through hypnotism. Charles Lloyd Tuckey's 1889 work, Psycho-therapeutics, or Treatment by Hypnotism and Suggestion, popularized the work of the Nancy School in English. Also in 1889, a clinic used the word in its title for the first time, when Frederik van Eeden and Albert Willem van Renterghem in Amsterdam renamed theirs "Clinique de Psycho-thérapeutique Suggestive" after visiting Nancy. During this time, travelling stage hypnosis became popular, and such activities added to the scientific controversies around the use of hypnosis in medicine. In 1892, at the second congress of experimental psychology, van Eeden attempted to take credit for the term psychotherapy and to distance the term from hypnosis. In 1896, the German journal Zeitschrift für Hypnotismus, Suggestionstherapie, Suggestionslehre und verwandte psychologische Forschungen changed its name to Zeitschrift für Hypnotismus, Psychotherapie sowie andere psychophysiologische und psychopathologische Forschungen, making it probably the first journal to use the term. Thus psychotherapy initially meant "the treatment of disease by psychic or hypnotic influence, or by suggestion". Sigmund Freud visited the Nancy School, and his early neurological practice involved the use of hypnotism. 
However, following the work of his mentor Josef Breuer—in particular a case where symptoms appeared partially resolved by what the patient, Bertha Pappenheim, dubbed a "talking cure"—Freud began focusing on conditions that appeared to have psychological causes originating in childhood experiences and the unconscious mind. He went on to develop techniques such as free association, dream interpretation, transference and analysis of the id, ego and superego. His popular reputation as the father of psychotherapy was established by his use of the distinct term "psychoanalysis", tied to an overarching system of theories and methods, and by the effective work of his followers in rewriting history. Many theorists, including Alfred Adler, Carl Jung, Karen Horney, Anna Freud, Otto Rank, Erik Erikson, Melanie Klein and Heinz Kohut, built upon Freud's fundamental ideas and often developed their own systems of psychotherapy. These were all later categorized as psychodynamic, meaning anything that involved the psyche's conscious/unconscious influence on external relationships and the self. Sessions tended to number into the hundreds over several years. Behaviorism developed in the 1920s, and behavior modification as a therapy became popularized in the 1950s and 1960s. Notable contributors were Joseph Wolpe in South Africa, M. B. Shapiro and Hans Eysenck in Britain, and John B. Watson and B. F. Skinner in the United States. Behavioral therapy approaches relied on principles of operant conditioning, classical conditioning and social learning theory to bring about therapeutic change in observable symptoms. The approach became commonly used for phobias, as well as other disorders. Some therapeutic approaches developed out of the European school of existential philosophy. 
Concerned mainly with the individual's ability to develop and preserve a sense of meaning and purpose throughout life, major contributors to the field in the United States (e.g., Irvin Yalom, Rollo May) and in Europe (Viktor Frankl, Ludwig Binswanger, Medard Boss, R. D. Laing, Emmy van Deurzen) attempted to create therapies sensitive to common "life crises" springing from the essential bleakness of human self-awareness, previously accessible only through the complex writings of existential philosophers (e.g., Søren Kierkegaard, Jean-Paul Sartre, Gabriel Marcel, Martin Heidegger, Friedrich Nietzsche). The uniqueness of the patient-therapist relationship thus also forms a vehicle for therapeutic inquiry. A related body of thought in psychotherapy started in the 1950s with Carl Rogers. Based also on the works of Abraham Maslow and his hierarchy of human needs, Rogers brought person-centered psychotherapy into mainstream focus. The primary requirement was that the client receive three core "conditions" from their counselor or therapist: unconditional positive regard, sometimes described as "prizing" the client's humanity; congruence (authenticity/genuineness/transparency); and empathic understanding. This type of interaction was thought to enable clients to fully experience and express themselves, and thus develop according to their innate potential. Others developed the approach, like Fritz and Laura Perls in the creation of Gestalt therapy, as well as Marshall Rosenberg, founder of Nonviolent Communication, and Eric Berne, founder of transactional analysis. Later these fields of psychotherapy would become what is known as humanistic psychotherapy today. Self-help groups and books became widespread. During the 1950s, Albert Ellis originated rational emotive behavior therapy (REBT). Independently, a few years later, psychiatrist Aaron T. Beck developed a form of psychotherapy known as cognitive therapy. 
Both of these included relatively short, structured and present-focused techniques aimed at identifying and changing a person's beliefs, appraisals and reaction-patterns, by contrast with the more long-lasting insight-based approach of psychodynamic or humanistic therapies. Beck's approach used primarily the Socratic method, and links have been drawn between ancient Stoic philosophy and these cognitive therapies. Cognitive and behavioral therapy approaches were increasingly combined and grouped under the umbrella term cognitive behavioral therapy (CBT) in the 1970s. Many approaches within CBT are oriented towards active/directive yet collaborative empiricism (a form of reality-testing), and towards assessing and modifying core beliefs and dysfunctional schemas. These approaches gained widespread acceptance as a primary treatment for numerous disorders. A "third wave" of cognitive and behavioral therapies developed, including acceptance and commitment therapy and dialectical behavior therapy, which expanded the concepts to other disorders and/or added novel components and mindfulness exercises. However, the "third wave" concept has been criticized as not essentially different from other therapies and as having roots in earlier ones as well. Counseling methods developed include solution-focused therapy and systemic coaching. Postmodern psychotherapies such as narrative therapy and coherence therapy do not impose definitions of mental health and illness, but rather see the goal of therapy as something constructed by the client and therapist in a social context. Systemic therapy also developed, focusing on family and group dynamics, as did transpersonal psychology, which focuses on the spiritual facet of human experience. Other orientations developed in the last three decades include feminist therapy, brief therapy, somatic psychology, expressive therapy, applied positive psychology and the human givens approach. 
A survey of over 2,500 US therapists in 2006 revealed the most utilized models of therapy and the ten most influential therapists of the previous quarter-century. Types There are hundreds of psychotherapy approaches or schools of thought. By 1980 there were more than 250; by 1996 more than 450; and at the start of the 21st century there were over a thousand different named psychotherapies—some being minor variations while others are based on very different conceptions of psychology, ethics (how to live) or technique. In practice therapy is often not of one pure type but draws from a number of perspectives and schools—known as an integrative or eclectic approach. The importance of the therapeutic relationship, also known as therapeutic alliance, between client and therapist is often regarded as crucial to psychotherapy. Common factors theory addresses this and other core aspects thought to be responsible for effective psychotherapy. Sigmund Freud (1856–1939), a Viennese neurologist who studied with Jean-Martin Charcot in 1885, is often considered the father of modern psychotherapy. His methods included analyzing his patients' dreams in search of important hidden insights into their unconscious minds. Other major elements of his methods, which changed throughout the years, included identification of childhood sexuality, the role of anxiety as a manifestation of inner conflict, the differentiation of parts of the psyche (id, ego, superego), and transference and countertransference (the patient's projections onto the therapist, and the therapist's emotional responses to those). Some of his concepts were too broad to be amenable to empirical testing and invalidation, and he was critiqued for this by Jaspers. Numerous major figures elaborated and refined Freud's therapeutic techniques, including Melanie Klein, Donald Winnicott, and others. Since the 1960s, however, the use of Freudian-based analysis for the treatment of mental disorders has declined substantially. 
Different types of psychotherapy have been created along with the advent of clinical trials to test them scientifically. These include cognitive treatments (after Beck), behavioral treatments (after Skinner and Wolpe) and additional time-limited and focused formats, for example interpersonal psychotherapy. For problems of childhood and for schizophrenia, family therapy approaches hold value. Some of the ideas arising from therapy are now pervasive, and some are part of the tool set of ordinary clinical practice. They are not merely treatments; they also help in understanding complex behavior. Therapy may address specific forms of diagnosable mental illness, or everyday problems in managing or maintaining interpersonal relationships or meeting personal goals. A course of therapy may happen before, during or after pharmacotherapy (e.g. taking psychiatric medication). Psychotherapies are categorized in several different ways. A distinction can be made between those based on a medical model and those based on a humanistic model. In the medical model, the client is seen as unwell and the therapist employs their skill to help the client back to health. The extensive use of the DSM-IV, the Diagnostic and Statistical Manual of Mental Disorders, in the United States is an example of a medically exclusive model. The humanistic or non-medical model in contrast strives to depathologise the human condition. The therapist attempts to create a relational environment conducive to experiential learning and to help build the client's confidence in their own natural process, resulting in a deeper understanding of themselves. The therapist may see themselves as a facilitator/helper. Another distinction is between individual one-to-one therapy sessions, and group psychotherapy, including couples therapy and family therapy. 
Therapies are sometimes classified according to their duration; a small number of sessions over a few weeks or months may be classified as brief therapy (or short-term therapy), while others, where regular sessions take place for years, may be classified as long-term. Some practitioners distinguish between more "uncovering" (or "depth") approaches and more "supportive" psychotherapy. Uncovering psychotherapy emphasizes facilitating the client's insight into the roots of their difficulties. The best-known example is classical psychoanalysis. Supportive psychotherapy by contrast stresses strengthening the client's coping mechanisms and often providing encouragement and advice, as well as reality-testing and limit-setting where necessary. Depending on the client's issues and situation, a more supportive or more uncovering approach may be optimal. Humanistic These psychotherapies, also known as "experiential", are based on humanistic psychology and emerged in reaction to both behaviorism and psychoanalysis, being dubbed the "third force". They are primarily concerned with the human development and needs of the individual, with an emphasis on subjective meaning, a rejection of determinism, and a concern for positive growth rather than pathology. Some posit an inherent human capacity to maximize potential, "the self-actualizing tendency"; the task of therapy is to create a relational environment where this tendency might flourish. Humanistic psychology can, in turn, be rooted in existentialism—the belief that human beings can only find meaning by creating it. This is the goal of existential therapy. Existential therapy is in turn philosophically associated with phenomenology. Person-centered therapy, also known as client-centered, focuses on the therapist showing openness, empathy and "unconditional positive regard", to help clients express and develop their own self. Humanistic Psychodrama (HPD) is based on the human image of humanistic psychology, and all its rules and methods follow the axioms of humanistic psychology. HPD sees itself as development-oriented psychotherapy and has completely moved away from the psychoanalytic catharsis theory. Self-awareness and self-realization are essential aspects in the therapeutic process. 
Subjective experiences, feelings and thoughts, and one's own prior experiences, are the starting point for a change or reorientation in experience and behavior in the direction of greater self-acceptance and satisfaction. Dealing with the biography of the individual is closely related to the sociometry of the group. Gestalt therapy, originally called "concentration therapy", is an existential/experiential form that facilitates awareness in the various contexts of life, by moving from talking about relatively remote situations to action and direct current experience. Derived from various influences, including an overhaul of psychoanalysis, it stands on top of essentially four load-bearing theoretical walls: phenomenological method, dialogical relationship, field-theoretical strategies, and experimental freedom. A briefer form of humanistic therapy is the human givens approach, introduced in 1998–99. It is a solution-focused intervention based on identifying emotional needs—such as for security, autonomy and social connection—and using various educational and psychological methods to help people meet those needs more fully or appropriately. Insight-oriented Insight-oriented psychotherapies focus on revealing or interpreting unconscious processes. Most commonly referring to psychodynamic therapy, of which psychoanalysis is the oldest and most intensive form, these applications of depth psychology encourage the verbalization of all the patient's thoughts, including free associations, fantasies, and dreams, from which the analyst formulates the nature of the past and present unconscious conflicts that are causing the patient's symptoms and character problems. There are six main schools of psychoanalysis, which all influenced psychodynamic theory: Freudian, ego psychology, object relations theory, self psychology, interpersonal psychoanalysis, and relational psychoanalysis. Techniques for analytic group therapy have also developed. 
Cognitive-behavioral Behavior therapies use behavioral techniques, including applied behavior analysis (also known as behavior modification), to change maladaptive patterns of behavior and so improve emotional responses, cognitions, and interactions with others. Functional analytic psychotherapy is one form of this approach. By nature, behavioral therapies are empirical (data-driven), contextual (focused on the environment and context), functional (interested in the effect or consequence a behavior ultimately has), probabilistic (viewing behavior as statistically predictable), monistic (rejecting mind-body dualism and treating the person as a unit), and relational (analyzing bidirectional interactions). Cognitive therapy focuses directly on changing thoughts in order to improve emotions and behaviors. Cognitive behavioral therapy attempts to combine the above two approaches, focusing on the construction and reconstruction of people's cognitions, emotions and behaviors. Generally in CBT, the therapist, through a wide array of modalities, helps clients assess, recognize and deal with problematic and dysfunctional ways of thinking, emoting and behaving. The concept of "third wave" psychotherapies reflects an influence of Eastern philosophy in clinical psychology, incorporating principles such as meditation into interventions such as mindfulness-based cognitive therapy, acceptance and commitment therapy, and dialectical behavior therapy for borderline personality disorder. Interpersonal psychotherapy (IPT) is a relatively brief form of psychotherapy (deriving from both CBT and psychodynamic approaches) that has been increasingly studied and endorsed by guidelines for some conditions. It focuses on the links between mood and social circumstances, helping to build social skills and social support. It aims to foster adaptation to current interpersonal roles and situations. 
Exposure and response prevention (ERP) is primarily deployed by therapists in the treatment of OCD. The American Psychiatric Association (APA) states that CBT drawing primarily on behavioral techniques (such as ERP) has the "strongest evidence base" among psychosocial interventions. By confronting feared scenarios (i.e., exposure) and refraining from performing rituals (i.e., response prevention), patients may gradually feel less distress in confronting feared stimuli, while also feeling less inclination to use rituals to relieve that distress. Typically, ERP is delivered in "hierarchical fashion", meaning patients confront increasingly anxiety-provoking stimuli as they progress through a course of treatment. Other types include reality therapy/choice theory, multimodal therapy, and therapies for specific disorders, including PTSD therapies such as cognitive processing therapy and EMDR; substance abuse therapies such as relapse prevention and contingency management; and co-occurring disorders therapies such as Seeking Safety. Systemic Systemic therapy seeks to address people not just individually, as is often the focus of other forms of therapy, but in relationship, dealing with the interactions of groups and their patterns and dynamics (including family therapy and marriage counseling). Community psychology is a type of systemic psychology. The term group therapy was first used around 1920 by Jacob L. Moreno, whose main contribution was the development of psychodrama, in which groups were used as both cast and audience for the exploration of individual problems by reenactment under the direction of the leader. The more analytic and exploratory use of groups in both hospital and out-patient settings was pioneered by a few European psychoanalysts who emigrated to the US, such as Paul Schilder, who treated severely neurotic and mildly psychotic out-patients in small groups at Bellevue Hospital, New York. 
The power of groups was most influentially demonstrated in Britain during the Second World War, when several psychoanalysts and psychiatrists proved the value of group methods for officer selection in the War Office Selection Boards. A chance to run an Army psychiatric unit on group lines was then given to several of these pioneers, notably Wilfred Bion and John Rickman, followed by S. H. Foulkes, Tom Main, and Harold Bridger. The Northfield Hospital in Birmingham gave its name to what came to be called the two "Northfield Experiments", which provided the impetus for the post-war development of both social therapy, that is, the therapeutic community movement, and the use of small groups for the treatment of neurotic and personality disorders. Today group therapy is used in clinical settings and in private practice settings. Expressive Expressive psychotherapy is a form of therapy that utilizes artistic expression (via improvisational, compositional, re-creative, and receptive experiences) as its core means of treating clients. Expressive psychotherapists use the different disciplines of the creative arts as therapeutic interventions. These modalities include dance therapy, drama therapy, art therapy, music therapy and writing therapy, among others. This may include techniques such as affect labeling. Expressive psychotherapists believe that often the most effective way of treating a client is through the expression of imagination in creative work, integrating and processing the issues raised in the act. Postmodernist Also known as post-structuralist or constructivist. Narrative therapy gives attention to each person's "dominant story" through therapeutic conversations, which also may involve exploring unhelpful ideas and how they came to prominence. Possible social and cultural influences may be explored if the client deems it helpful. 
Coherence therapy posits multiple levels of mental constructs that create symptoms as a way to strive for self-protection or self-realization. Feminist therapy does not accept that there is one single or correct way of looking at reality and is therefore considered a postmodernist approach. Other Transpersonal psychology addresses the client in the context of a spiritual understanding of consciousness. Positive psychotherapy (PPT) (since 1968) is a method in the field of humanistic and psychodynamic psychotherapy, based on a positive image of humans and taking a health-promoting, resource-oriented and conflict-centered approach. Hypnotherapy is undertaken while a subject is in a state of hypnosis. It is often applied in order to modify a subject's behavior, emotional content, and attitudes, as well as to address a wide range of conditions, including dysfunctional habits, anxiety, stress-related illness, pain management, and personal development. Psychedelic therapy refers to therapeutic practices involving psychedelic drugs such as LSD, psilocybin, DMT, and MDMA. In psychedelic therapy, in contrast to conventional psychiatric medication taken by the patient regularly or as needed, patients generally remain in an extended psychotherapy session during the acute psychedelic activity, with additional sessions both before and after in order to help integrate experiences with the psychedelics. Psychedelic therapy has been compared with the shamanic healing rituals of indigenous peoples. Researchers have identified two main differences: the first is the shamanic belief that multiple realities exist and can be explored through altered states of consciousness, and the second is the belief that spirits encountered in dreams and visions are real. The charitable initiative Founders Pledge has written a research report on cost-effective giving opportunities for funding psychedelic-assisted mental health treatments. 
Body psychotherapy, part of the field of somatic psychology, focuses on the link between the mind and the body and tries to access deeper levels of the psyche through greater awareness of the physical body and emotions. There are various body-oriented approaches, such as Reichian (Wilhelm Reich) character-analytic vegetotherapy and orgonomy; neo-Reichian bioenergetic analysis; somatic experiencing; integrative body psychotherapy; Ron Kurtz's Hakomi psychotherapy; sensorimotor psychotherapy; Biosynthesis psychotherapy; and Biodynamic psychotherapy. These approaches are not to be confused with body work or body-therapies that seek to improve primarily physical health through direct work (touch and manipulation) on the body, rather than through directly psychological methods. Some non-Western indigenous therapies have been developed. In African countries this includes harmony restoration therapy, meseron therapy and systemic therapies based on the Ubuntu philosophy. Integrative psychotherapy is an attempt to combine ideas and strategies from more than one theoretical approach. These approaches include mixing core beliefs and combining proven techniques. Forms of integrative psychotherapy include multimodal therapy, the transtheoretical model, cyclical psychodynamics, systematic treatment selection, cognitive analytic therapy, internal family systems model, multitheoretical psychotherapy and conceptual interaction. In practice, most experienced psychotherapists develop their own integrative approach over time. Child Psychotherapy needs to be adapted to meet the developmental needs of children. Depending on age, it is generally held to be one part of an effective strategy to help the needs of a child within the family setting. Child psychotherapy training programs necessarily include courses in human development. 
Since children often do not have the ability to articulate thoughts and feelings, psychotherapists will use a variety of media such as musical instruments, sand and toys, crayons, paint, clay, puppets, bibliocounseling (books), or board games. The use of play therapy is often rooted in psychodynamic theory, but other approaches also exist. In addition to, or sometimes instead of, therapy for the child, children may benefit if their parents work with a therapist, take parenting classes, attend grief counseling, or take other action to resolve stressful situations that affect the child. Parent management training is a highly effective form of psychotherapy that teaches parents skills to reduce their child's behavior problems. In many cases a different psychotherapist will work with the caretaker of the child, while a colleague works with the child. Contemporary thinking on working with the younger age group has therefore leaned towards working with parent and child simultaneously, as well as individually as needed. Computer-supported Research on computer-supported and computer-based interventions has increased significantly over the last two decades. The following applications have frequently been investigated: Tele-therapy / tele-mental health: In teletherapy, classical psychotherapy is provided via modern communication devices, such as videoconferencing. Virtual reality: VR is a computer-generated scenario that simulates experience. The immersive environment, used for simulated exposure, can be similar to the real world or it can be fantastical, creating a new experience. Computer-based interventions (or online interventions or internet interventions): These interventions can be described as interactive self-help. They usually entail a combination of text, audio or video elements. Computer-supported therapy (or blended therapy): Classical psychotherapy is supported by means of online or software application elements. 
The feasibility of such interventions has been investigated for individual and group therapy. Effects Evaluation There is considerable controversy about whether, or when, psychotherapy efficacy is best evaluated by randomized controlled trials or by more individualized idiographic methods. One issue with trials is what to use as a placebo treatment group or non-treatment control group. Often, this group includes patients on a waiting list, or those receiving some kind of regular non-specific contact or support. Researchers must consider how best to devise an equivalent of the inert tablets or sham treatments used in placebo-controlled pharmaceutical trials. Differing interpretations, assumptions and language remain. Another issue is the attempt to standardize and manualize therapies and link them to specific symptoms of diagnostic categories, making them more amenable to research. Some report that this may reduce efficacy or gloss over individual needs. Fonagy and Roth's opinion is that the benefits of the evidence-based approach outweigh the difficulties. There are several formal frameworks for evaluating whether a psychotherapist is a good fit for a patient. One example is the Scarsdale Psychotherapy Self-Evaluation (SPSE). However, some scales, such as the SPSE, elicit information specific to certain schools of psychotherapy alone (e.g. the superego). Many psychotherapists believe that the nuances of psychotherapy cannot be captured by questionnaire-style observation, and prefer to rely on their own clinical experiences and conceptual arguments to support the type of treatment they practice. Psychodynamic therapists in particular believe that evidence-based approaches are not appropriate to their methods or assumptions, though some have increasingly accepted the challenge to implement evidence-based approaches in their methods. 
A pioneer in investigating the results of different psychological therapies was psychologist Hans Eysenck, who argued that psychotherapy does not produce any improvement in patients, and that behavior therapy was the only effective form. However, it was later revealed that Eysenck (who died in 1997) had falsified data in his studies on this subject, fabricating results that suggested implausibly large benefits for behavior therapy. Fourteen of his papers were retracted by journals in 2020, and journals issued 64 statements of concern about his publications. Rod Buchanan, a biographer of Eysenck, has argued that 87 of Eysenck's publications should be retracted. Outcomes in relation to selected kinds of treatment Large-scale international reviews 
Shelley Posen (active since 1970s), Canadian folklorist and folk musician Stephen Posen (born 1939), American painter, recipient of a Guggenheim Fellowship in 1986 Zac Posen (born 1980), American fashion designer Other uses Posen speeches by Heinrich Himmler in 1943 SMS Posen, a German dreadnought, 1908–1922 See also Posner (disambiguation) Pozen (disambiguation) 
physicist Nikolai Fedyakin, working at the Technological Institute of Kostroma, Russia, performed measurements on the properties of water which had been condensed in, or repeatedly forced through, narrow quartz capillary tubes. Some of these experiments resulted in what was seemingly a new form of water with a higher boiling point, lower freezing point, and much higher viscosity than ordinary water – about that of a syrup. Boris Derjaguin, director of the laboratory for surface physics at the Institute for Physical Chemistry in Moscow, heard about Fedyakin's experiments. He improved on the method to produce the new water, and though he still produced very small quantities of this mysterious material, he did so substantially faster than Fedyakin did. Investigations of the material properties showed a substantially lower freezing point of −40 °C or less, a boiling point of 150 °C or greater, a density of approx. 1.1 to 1.2 g/cm³, and increased expansion with increasing temperature. The results were published in Soviet science journals, and short summaries were published in Chemical Abstracts in English, but Western scientists took no notice of the work. In 1966, Derjaguin travelled to England for the "Discussions of the Faraday Society" in Nottingham. There, he presented the work again, and this time English scientists took note of what he referred to as anomalous water. English scientists then started researching the effect as well, and by 1968 it was also under study in the United States. By 1969, the concept had spread to newspapers and magazines. The United States military feared a so-called "polywater gap" with the Soviet Union: a popular media term for a supposed capability discrepancy between the US and the USSR, modeled on the earlier hype over the "bomber gap" and the "missile gap", periods when the USSR appeared to be outstripping the US in numbers of those weapons. A scientific furore followed. 
Some experiments were able to reproduce Derjaguin's findings, while others failed. Several theories were advanced to explain the phenomenon. Some proposed it was the cause of increasing resistance on trans-Atlantic phone cables, while others predicted that if polywater were to contact ordinary water, it would convert that water into polywater, echoing the doomsday scenario in Kurt Vonnegut's novel Cat's Cradle. By the 1970s, polywater was well known in the general population. During this time, several people questioned the authenticity of what had come to be known in the West as polywater. The main concern was contamination […] polywater, instead of just part of it. Richard Feynman remarked that if such a material existed, then an animal would exist that would ingest water and excrete polywater, using the energy released from the process to survive. In fiction The story "Polywater Doodle" by Howard L. Myers (writing under the pseudonym "Dr. Dolittle") appeared in the February 1971 issue of Analog Science Fiction and Fact. It features an animal composed entirely of polywater, with the metabolism described by Richard Feynman. (The title of the story is a pun on "Polly Wolly Doodle".) Polywater is the central idea of the 1972 espionage/thriller novel A Report from Group 17 by Robert C. O'Brien. The story revolves around the use of a type of polywater to make people controllable and incapable of independent thought or action. The episodes "The Naked Time" (Star Trek) and its sequel, "The Naked Now" (Star Trek: The Next Generation) involve forms of polywater intoxication. In the original episode, a scientific research outpost falls victim to polywater, which causes the crew to become so incapacitated that they all die after shutting off environmental controls in the compound. In the sequel, a Starfleet vessel is discovered adrift, its crew frozen in various states due to polywater intoxication. 
In Kurt Vonnegut's novel Cat's Cradle, ice-nine was a form of water that was solid at room temperature, and solidified any water that it contacted, giving it the capability to destroy all life on Earth. See also Water memory Hard water N ray 
science, deviant or fraudulent science, bad science, junk science, and popular science ... pathological science, cargo-cult science, and voodoo science." Examples of pathological science include Martian canals, N-rays, polywater, and cold fusion. The theories and conclusions behind all of these examples are currently rejected or disregarded by the majority of scientists. Definition Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to the scientific method, unconsciously veers from that method, and begins a pathological process of wishful data interpretation (see the observer-expectancy effect and cognitive bias). Some characteristics of pathological science are: The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause. The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results. There are claims of great accuracy. Fantastic theories contrary to experience are suggested. Criticisms are met by ad hoc excuses. The ratio of supporters to critics rises and then falls gradually to oblivion. Langmuir never intended the term to be rigorously defined; it was simply the title of his talk on some examples of "weird science". As with any attempt to define the scientific endeavor, examples and counterexamples can always be found. Langmuir's examples N-rays Langmuir's discussion of N-rays has led to their traditional characterization as an instance of pathological science. In 1903, Prosper-René Blondlot was working on X-rays (as were many physicists of the era) and noticed a new visible radiation that could penetrate aluminium. He devised experiments in which a barely visible object was illuminated by these N-rays, and thus became "more visible". 
Blondlot claimed that N-rays were causing a small visual reaction, too small to be seen under normal illumination, but just visible when most normal light sources were removed and the target was just barely visible to begin with. N-rays became the topic of some debate within the science community. After a time, physicist Robert W. Wood decided to visit Blondlot's lab, which had moved on to the physical characterization of N-rays. An experiment passed the rays from a 2 mm slit through an aluminum prism, from which he was measuring the index of refraction to a precision that required measurements accurate to within 0.01 mm. Wood asked how it was possible that he could measure something to 0.01 mm from a 2 mm source, a physical impossibility in the propagation of any kind of wave. Blondlot replied, "That's one of the fascinating things about the N-rays. They don't follow the ordinary laws of science that you ordinarily think of." Wood then asked to see the experiments being run as usual, which took place in a room required to be very dark so the target was barely visible. Blondlot repeated his most recent experiments and got the same results—despite the fact that Wood had reached over and covertly sabotaged the N-ray apparatus by removing the prism. Other examples Langmuir offered additional examples of what he regarded as pathological science in his original speech: The Davis–Barnes effect (1929; after Professor Bergen Davis from Columbia University) Mitogenetic rays (1923; Alexander Gurwitsch and others) The Allison effect (1927; after Fred Allison) Extrasensory perception (1934), where Rhine consciously discarded contrary test results because he felt they could not be correct. Later examples A 1985 version of Langmuir's speech offered more examples, although at least one of these (polywater) occurred entirely after Langmuir's death in 1957: Water dowsing Martian canals (Observed in late 19th century and early 20th century, they turned out to be optical illusions.) 
Certain reported photomechanical and electromechanical effects Polywater Biological effects of magnetic fields (see magnetobiology and magnet therapy) except magnetoception Newer examples Since Langmuir's original talk, a number of newer examples of what appear to be pathological science have appeared. Denis Rousseau, one of the main debunkers of polywater, gave an update of Langmuir in 1992, and he specifically cited as examples the cases of polywater, Fleischmann's cold fusion and Jacques Benveniste's "infinite dilution". Polywater Polywater was a form of water which appeared to have a much higher boiling point and much lower freezing point than normal water. During the 1960s, many articles were published on the subject, and research on polywater was done around the world with mixed results. Eventually it was determined that many of the properties of polywater could be explained by biological contamination. When more rigorous cleaning of glassware and experimental controls were introduced, polywater could no longer be produced. It took several years for the concept of polywater to die in spite of the later negative results. Cold fusion In 1989, Martin Fleischmann and Stanley Pons announced the discovery of a simple and cheap procedure to obtain room-temperature nuclear fusion. Although there were many instances where successful results were reported, they lacked 
between cars and tellers; by the 2020s most of these had been removed, obviated by the rise of mobile banking apps and the increasing sophistication of ATMs. Many hospitals have a computer-controlled pneumatic tube system to deliver drugs, documents and specimens to and from laboratories and nurses' stations. Many factories use them to deliver parts quickly across large campuses. Many larger stores use systems to securely transport excess cash from checkout stands to back offices, and to send change back to cashiers. They are used in casinos to move money, chips, and cards quickly and securely. Japanese love hotels use them to allow customers to settle bills anonymously (no face-to-face contact). NASA's original Mission Control Center had pneumatic tubes connecting controller consoles with staff support rooms. Mission Operations Control Room 2 was last used in its original configuration in 1992 and then remodeled for other missions. Because the room was designated a National Historic Landmark in 1985, it was decided in 2017 to restore it to its 1960s condition. The pneumatic tubes were removed and sent to the Cosmosphere in Kansas for restoration. Pneumatic tube systems are used in science, to transport samples during neutron activation analysis. Samples must be moved from the nuclear reactor core, in which they are bombarded with neutrons, to the instrument that records the resulting radiation. As some of the radioactive isotopes in the sample can have very short half-lives, speed is important. These systems may be automated, with a magazine of sample tubes that are moved into the reactor core in turn for a predetermined time, before being moved to the instrument station and finally to a container for storage and disposal. Until it closed in early 2011, a McDonald's in Edina, Minnesota claimed to be the "World's Only Pneumatic Air Drive-Thru," sending food from their strip-mall location to a drive-through in the middle of a parking lot. 
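The speed requirement for short-lived activation products can be quantified with a simple half-life calculation. A minimal sketch, where the 25-second half-life and the two transport times are illustrative assumptions, not values from the text:

```python
def fraction_remaining(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of activity left after t seconds: N/N0 = 2 ** (-t / T_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# Illustrative short-lived activation product with a 25 s half-life (assumed value).
HALF_LIFE_S = 25.0

# A pneumatic tube delivering the sample in ~3 s preserves most of the activity,
# while a two-minute manual transfer loses almost all of it.
fast = fraction_remaining(3.0, HALF_LIFE_S)
slow = fraction_remaining(120.0, HALF_LIFE_S)

print(f"after   3 s: {fast:.2f} of the activity remains")
print(f"after 120 s: {slow:.3f} of the activity remains")
```

For this assumed isotope, roughly 92% of the activity survives a 3-second tube transit, but under 4% survives a 120-second manual transfer, which is why automated pneumatic delivery matters.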
Technology editor Quentin Hardy notes that renewed interest in transmission of data by pneumatic tube accompanies discussions of digital network security, and he cites research into London's forgotten pneumatic network. Related applications include fish cannons which use mechanisms very similar to pneumatic tube systems. Applications In postal service Pneumatic post or pneumatic mail is a system to deliver letters through pressurized air tubes. It was invented by the Scottish engineer William Murdoch in the 19th century and was later developed by the London Pneumatic Despatch Company. Pneumatic post systems were used in several large cities starting in the second half of the 19th century (including an 1866 London system powerful and large enough to transport humans during trial runs – though not intended for that purpose), but later were largely abandoned. A major network of tubes in Paris (the Paris pneumatic post) was in use until 1984, when it was abandoned in favor of computers and fax machines. The Prague pneumatic post commenced for the public in 1889 in Prague, now in the Czech Republic, and the network extended approximately . Pneumatic post stations usually connect post offices, stock exchanges, banks and ministries. Italy was the only country to issue postage stamps (between 1913 and 1966) specifically for pneumatic post. Austria, France, and Germany issued postal stationery for pneumatic use. Typical applications are in banks, hospitals and supermarkets. Many large retailers used pneumatic tubes to transport cheques or other documents from cashiers to the accounting office. 
Historical use 1853: linking the London Stock Exchange to the city's main telegraph station (a distance of ) 1861: in London with the London Pneumatic Despatch Company providing services from Euston railway station to the General Post Office and Holborn 1864: in Liverpool connecting the Electric and International Telegraph Company telegraph stations in Castle Street, Water Street and the Exchange Buildings 1864: in Manchester to connect the Electric and International Telegraph Company central offices at York Street, with branch offices at Dulcie Buildings and Mosley Street 1865: in Birmingham, installed by the Electric and International Telegraph Company between the New Exchange Buildings in Stephenson Place and their branch office in Temple Buildings, New Street. 1865: in Berlin (until 1976), the Rohrpost, a system 400 kilometers in total length at its peak in 1940 1866: in Paris (until 1984, 467 kilometers in total length from 1934). John Steinbeck mentioned this system in The Short Reign of Pippin IV: A Fabrication: "You pay no attention to the pneumatique." 1871: in Dublin 1875: in Vienna (until 1956) - including the unrealised corpse network of Zentralfriedhof 1887: in Prague (until 2002 due to flooding), the Prague pneumatic post 1893: the first North American system was established in Philadelphia by Postmaster General John Wanamaker, who had previously employed the technology at his department store. The system, which initially connected the downtown post offices, was later extended to the principal railroad stations, the stock exchanges, and many private businesses. It was operated by the United States Post Office Department which later opened similar systems in cities such as New York (connecting Brooklyn and Manhattan), Chicago, Boston, and St. Louis. The last of these closed in 1953. 
Other cities: Munich, Rio de Janeiro, Buenos Aires, Hamburg, Rome, Naples, Milan, Marseille, Melbourne, Tokyo, Osaka, Nagoya, Kobe 1950s-1989: CIA headquarters (now known as the Old Headquarters Building) In public transportation 19th century In 1812, George Medhurst first proposed, but never implemented, blowing passenger carriages through a tunnel. Atmospheric railways (for which the tube was laid between the rails, with a piston running in it suspended from the train through a sealable slot in the top of the tube), precursors of pneumatic tube systems for passenger transport, were operated as follows: 1844–54: Dublin and Kingstown Railway's Dalkey Atmospheric Railway between Kingstown (Dún Laoghaire) and Dalkey, Ireland () 1846–47: London and Croydon Railway between Croydon and New Cross, London, England () 1847–48: Isambard Kingdom Brunel's South Devon Railway between Exeter and Newton Abbot, England () 1847–60: Paris–Saint-Germain railway between Bois de Vésinet and Saint-Germain-en-Laye, France () In 1861, the London Pneumatic Despatch Company built a system large enough to move a person, although it was intended for parcels. The inauguration of the new Holborn Station on 10 October 1865 was marked by having the Duke of Buckingham, the chairman, and some company directors blown through the tube to Euston (a five-minute trip). The Crystal Palace pneumatic railway was exhibited at the Crystal Palace in 1864. This was a prototype for a proposed Waterloo and Whitehall Railway that would have run under the River Thames linking Waterloo and Charing Cross. Digging commenced in 1865 but was halted in 1868 due to financial problems. In 1867 at the American Institute Fair in New York, Alfred Ely Beach demonstrated a long, diameter pipe that was capable of moving 12 passengers plus a conductor. 
One year after New York City's first-ever elevated rail line went into service, in 1869, the Beach Pneumatic Transit Company of New York secretly constructed a long, diameter pneumatic subway line under Broadway, to demonstrate the possibilities of the new transport mode. The line only operated for a few months, closing after Beach was unsuccessful in getting permission to extend it – Boss Tweed, a corrupt and influential politician, did not want it to go ahead as he was intending to personally invest in competing schemes for an elevated rail line. 20th century In the 1920s, the Canadian Pacific and Canadian National Railways cooperated to lay an elaborate system of 4,500 metres of pneumatic tubing from four of their offices to Postal Station A at Union Station in Toronto, Canada. There was also a connection to the mail room at the Royal York Hotel. The newspapers the Star and Telegram joined the system, laying pipes. In the 1960s, Lockheed and MIT with the United States Department of Commerce conducted feasibility studies on a vactrain system powered by ambient atmospheric pressure and "gravitational pendulum assist" to connect cities on the country's East Coast. They calculated that the run between Philadelphia and New York City would average 174 meters 
the state of Salzburg, Austria The Pinzgauer Cattle breed The Steyr-Puch Pinzgauer, an off-road vehicle The Noriker horse, also known as Pinzgauer or Norico-Pinzgauer The Pinzgauer Lokalbahn, or Pinzgaubahn; a railway in the area. 
in the sense that he is their lineal male ancestor. Agnatic succession Patrilineal or agnatic succession gives priority to or restricts inheritance of a throne or fief to heirs, male or female, descended from the original title holder through males only. Traditionally, agnatic succession is applied in determining the names and membership of European dynasties. The prevalent forms of dynastic succession in Europe, Asia and parts of Africa were male-preference primogeniture, agnatic primogeniture, or agnatic seniority until after World War II. There are, however, matrilineal examples like the Lobedu Rain Queen. By the 21st century, most ongoing European monarchies had replaced their traditional agnatic succession with absolute primogeniture, meaning that the first child born to a monarch inherits the throne, regardless of the child's sex. 
Salic law Variations of Salic law, generally understood in modern times to mean exclusion of women as hereditary monarchs, restricted succession to thrones and inheritance of fiefs or land to men in parts of medieval and later Europe. Once common, strict Salic inheritance has been officially revoked in all extant European monarchies except the Principality of Liechtenstein. Genetic genealogy The fact that human Y-chromosome DNA (Y-DNA) is paternally inherited enables patrilines and agnatic kinships of men to be traced through genetic analysis. Y-chromosomal Adam (Y-MRCA) is the patrilineal most recent common ancestor from whom all |
asthenosphere at different times depending on its temperature and pressure. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like (visco-elastic solid) asthenosphere. Plate motions range from a typical 10–40 mm/year (Mid-Atlantic Ridge; about as fast as fingernails grow) to about 160 mm/year (Nazca Plate; about as fast as hair grows). The driving mechanism behind this movement is described below. Tectonic lithosphere plates consist of lithospheric mantle overlain by one or two types of crustal material: oceanic crust (in older texts called sima from silicon and magnesium) and continental crust (sial from silicon and aluminium). Average oceanic lithosphere is typically thick; its thickness is a function of its age: as time passes, it conductively cools and subjacent cooling mantle is added to its base. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is also a function of its distance from the mid-ocean ridge where it was formed. For a typical distance that oceanic lithosphere must travel before being subducted, the thickness varies from about thick at mid-ocean ridges to greater than at subduction zones; for shorter or longer distances, the subduction zone (and therefore also the mean) thickness becomes smaller or larger, respectively. Continental lithosphere is typically about 200 km thick, though this varies considerably between basins, mountain ranges, and stable cratonic interiors of continents. The location where two plates meet is called a plate boundary. Plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being the most active and widely known today. 
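The age–thickness relationship described above is commonly modeled with half-space conductive cooling, in which the cold boundary layer thickens as the square root of sea-floor age. A minimal sketch; the model, the 2.32·sqrt(κt) boundary-layer definition, and the numerical values are standard textbook assumptions, not values from the text:

```python
import math

# Half-space cooling sketch: the rigid boundary layer grows as sqrt(age).
KAPPA = 1.0e-6             # thermal diffusivity of mantle rock, m^2/s (assumed)
SECONDS_PER_MYR = 3.15e13  # ~1 million years expressed in seconds

def lithosphere_thickness_km(age_myr: float) -> float:
    """Thermal boundary-layer thickness ~ 2.32 * sqrt(kappa * t), in km."""
    t_seconds = age_myr * SECONDS_PER_MYR
    return 2.32 * math.sqrt(KAPPA * t_seconds) / 1000.0

# Sea floor spreading at ~40 mm/yr for 80 Myr has moved ~3200 km from the ridge
# and cooled to a thickness on the order of 100 km:
age_myr = 80.0
distance_km = 40e-3 * age_myr * 1e6 / 1000.0  # spreading rate * age
print(f"{distance_km:.0f} km from the ridge, "
      f"thickness ~ {lithosphere_thickness_km(age_myr):.0f} km")
```

The square-root growth is why thickness can equally be read as a function of age or, for a steadily spreading plate, of distance from the ridge.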
These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation and to mantle plumes. As explained above, tectonic plates may include continental crust or oceanic crust, and most plates contain both. For example, the African Plate includes the continent and parts of the floor of the Atlantic and Indian Oceans. The distinction between oceanic crust and continental crust is based on their modes of formation. Oceanic crust is formed at sea-floor spreading centers, and continental crust is formed through arc volcanism and accretion of terranes through tectonic processes, though some of these terranes may contain ophiolite sequences, which are pieces of oceanic crust considered to be part of the continent when they exit the standard cycle of formation at spreading centers and subduction beneath continents. Oceanic crust is also denser than continental crust owing to their different compositions. Oceanic crust is denser because it has less silicon and a higher proportion of heavier elements ("mafic") than continental crust ("felsic"). As a result of this density stratification, oceanic crust generally lies below sea level (for example most of the Pacific Plate), while continental crust buoyantly projects above sea level (see the page on isostasy for an explanation of this principle). Types of plate boundaries Three types of plate boundaries exist, with a fourth, mixed type, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are: Divergent boundaries (constructive boundaries or extensional boundaries) occur where two plates slide apart from each other. At zones of ocean-to-ocean rifting, divergent boundaries form by seafloor spreading, allowing for the formation of a new ocean basin.
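The density stratification between mafic oceanic crust and felsic continental crust can be illustrated with a simple Airy-isostasy calculation; the thicknesses and densities below are illustrative round numbers, not values from the text:

```python
def freeboard_km(thickness_km, rho_crust, rho_mantle=3300.0):
    """Airy isostasy: height of a floating crustal column's top above
    the surface of the mantle it floats in (densities in kg/m^3;
    all values here are assumed, illustrative figures)."""
    return thickness_km * (1.0 - rho_crust / rho_mantle)

continental = freeboard_km(35.0, 2700.0)  # thick, felsic crust
oceanic     = freeboard_km(7.0,  2900.0)  # thin, mafic crust
```

The thick, low-density continental column rides several kilometres higher than the thin, dense oceanic column, which is why continents project above sea level while ocean floor lies below it.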
As the ocean plate splits, the ridge forms at the spreading center, the ocean basin expands, and finally, the plate area increases, causing many small volcanoes and/or shallow earthquakes. At zones of continent-to-continent rifting, divergent boundaries may cause a new ocean basin to form as the continent splits, spreads, the central rift collapses, and ocean fills the basin. Active zones of mid-ocean ridges (e.g., the Mid-Atlantic Ridge and East Pacific Rise) and of continent-to-continent rifting (such as Africa's East African Rift Valley and the Red Sea) are examples of divergent boundaries. Convergent boundaries (destructive boundaries or active margins) occur where two plates slide toward each other to form either a subduction zone (one plate moving underneath the other) or a continental collision. At zones of ocean-to-continent subduction (e.g. the Andes mountain range in South America, and the Cascade Mountains in the Western United States), the dense oceanic lithosphere plunges beneath the less dense continent. Earthquakes trace the path of the downward-moving plate as it descends into the asthenosphere, a trench forms, and as the subducted plate is heated it releases volatiles, mostly water from hydrous minerals, into the surrounding mantle. The addition of water lowers the melting point of the mantle material above the subducting slab, causing it to melt. The magma that results typically leads to volcanism. At zones of ocean-to-ocean subduction (e.g. the Aleutian Islands, the Mariana Islands, and the Japanese island arc), older, cooler, denser crust slips beneath less dense crust. This motion causes earthquakes and a deep trench to form in an arc shape. The upper mantle of the subducted plate then heats and magma rises to form curving chains of volcanic islands. Deep marine trenches are typically associated with subduction zones, and the basins that develop along the active boundary are often called "forearc basins".
Closure of ocean basins can occur at continent-to-continent boundaries (e.g., Himalayas and Alps): collision between masses of granitic continental lithosphere; neither mass is subducted; plate edges are compressed, folded, uplifted. Transform boundaries (conservative boundaries or strike-slip boundaries) occur where two lithospheric plates slide, or perhaps more accurately, grind past each other along transform faults, where plates are neither created nor destroyed. The relative motion of the two plates is either sinistral (left side toward the observer) or dextral (right side toward the observer). Transform faults occur across a spreading center. Strong earthquakes can occur along a fault. The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion. Plate boundary zones occur where the effects of the interactions are unclear, and the boundaries, usually occurring along a broad belt, are not well defined and may show various types of movements in different episodes. Driving forces of plate motion It has generally been accepted that tectonic plates are able to move because of the relative density of oceanic lithosphere and the relative weakness of the asthenosphere. Dissipation of heat from the mantle is acknowledged to be the original source of the energy required to drive plate tectonics through convection or large scale upwelling and doming. The current view, though still a matter of some debate, asserts that as a consequence, a powerful source generating plate motion is the excess density of the oceanic lithosphere sinking in subduction zones. When the new crust forms at mid-ocean ridges, this oceanic lithosphere is initially less dense than the underlying asthenosphere, but it becomes denser with age as it conductively cools and thickens. 
The greater density of old lithosphere relative to the underlying asthenosphere allows it to sink into the deep mantle at subduction zones, providing most of the driving force for plate movement. The weakness of the asthenosphere allows the tectonic plates to move easily towards a subduction zone. Although subduction is thought to be the strongest force driving plate motions, it cannot be the only force, since there are plates such as the North American Plate which are moving yet are nowhere being subducted. The same is true for the enormous Eurasian Plate. The sources of plate motion are a matter of intensive research and discussion among scientists. One of the main points is that the kinematic pattern of the movement itself should be separated clearly from the possible geodynamic mechanism that is invoked as the driving force of the observed movement, as some patterns may be explained by more than one mechanism. In short, the driving forces advocated at the moment can be divided into three categories based on their relationship to the movement: mantle dynamics related, gravity related (the main driving force accepted nowadays), and Earth rotation related. Driving forces related to mantle dynamics For much of the last quarter century, the leading theory of the driving force behind tectonic plate motions envisaged large scale convection currents in the upper mantle, which can be transmitted through the asthenosphere. This theory was launched by Arthur Holmes and some forerunners in the 1930s and was immediately recognized as a possible solution to the problem of the missing driving force that had blocked acceptance of continental drift as originally discussed in the papers of Alfred Wegener in the early years of the century. However, the idea was long debated in the scientific community because the leading theory still envisaged a static Earth without moving continents up until the major breakthroughs of the early sixties.
Two- and three-dimensional imaging of Earth's interior (seismic tomography) shows a varying lateral density distribution throughout the mantle. Such density variations can be material (from rock chemistry), mineral (from variations in mineral structures), or thermal (through thermal expansion and contraction from heat energy). The manifestation of this varying lateral density is mantle convection from buoyancy forces. How mantle convection directly and indirectly relates to plate motion is a matter of ongoing study and discussion in geodynamics. Somehow, this energy must be transferred to the lithosphere for tectonic plates to move. There are essentially two main types of forces that are thought to influence plate motion: friction and gravity. Basal drag (friction): Plate motion driven by friction between the convection currents in the asthenosphere and the more rigid overlying lithosphere. Slab suction (gravity): Plate motion driven by local convection currents that exert a downward pull on plates in subduction zones at ocean trenches. Slab suction may occur in a geodynamic setting where basal tractions continue to act on the plate as it dives into the mantle (although perhaps to a greater extent acting on both the under and upper side of the slab). Lately, the convection theory has been much debated, as modern techniques based on 3D seismic tomography still fail to recognize these predicted large scale convection cells. Alternative views have been proposed. Plume tectonics In the theory of plume tectonics followed by numerous researchers during the 1990s, a modified concept of mantle convection currents is used. It asserts that super plumes rise from the deeper mantle and are the drivers or substitutes of the major convection cells. These ideas find their roots in the early 1930s in the works of Beloussov and van Bemmelen, which were initially opposed to plate tectonics and placed the mechanism in a fixistic frame of verticalistic movements. 
Van Bemmelen later modified the concept in his "Undulation Models" and used it as the driving force for horizontal movements, invoking gravitational forces away from the regional crustal doming. The theories find resonance in the modern theories which envisage hot spots or mantle plumes which remain fixed and are overridden by oceanic and continental lithosphere plates over time and leave their traces in the geological record (though these phenomena are not invoked as real driving mechanisms, but rather as modulators). The mechanism is still advocated to explain the break-up of supercontinents during specific geological epochs. It has followers amongst the scientists involved in the theory of Earth expansion. Surge tectonics Another theory is that the mantle flows neither in cells nor large plumes but rather as a series of channels just below Earth's crust, which then provide basal friction to the lithosphere. This theory, called "surge tectonics", was popularized during the 1980s and 1990s. Recent research, based on three-dimensional computer modeling, suggests that plate geometry is governed by a feedback between mantle convection patterns and the strength of the lithosphere. Driving forces related to gravity Forces related to gravity are invoked as secondary phenomena within the framework of a more general driving mechanism such as the various forms of mantle dynamics described above. In modern views, gravity is invoked as the major driving force, through slab pull along subduction zones. Gravitational sliding away from a spreading ridge: According to many authors, plate motion is driven by the higher elevation of plates at ocean ridges. As oceanic lithosphere is formed at spreading ridges from hot mantle material, it gradually cools and thickens with age (and thus adds distance from the ridge).
Cool oceanic lithosphere is significantly denser than the hot mantle material from which it is derived, and so with increasing thickness it gradually subsides into the mantle to compensate for the greater load. The result is a slight lateral incline with increasing distance from the ridge axis. This force is regarded as a secondary force and is often referred to as "ridge push". This is a misnomer, as nothing is "pushing" horizontally and tensional features are dominant along ridges. It is more accurate to refer to this mechanism as gravitational sliding, since the topography across a plate can vary considerably and the topography of spreading ridges is only its most prominent feature. Other mechanisms generating this gravitational secondary force include flexural bulging of the lithosphere before it dives underneath an adjacent plate, which produces a clear topographical feature that can offset, or at least affect, the influence of topographical ocean ridges, and mantle plumes and hot spots, which are postulated to impinge on the underside of tectonic plates. Slab pull: Current scientific opinion is that the asthenosphere is insufficiently competent or rigid to directly cause motion by friction along the base of the lithosphere. Slab pull is therefore most widely thought to be the greatest force acting on the plates. In this current understanding, plate motion is mostly driven by the weight of cold, dense plates sinking into the mantle at trenches. Recent models indicate that trench suction plays an important role as well. However, the fact that the North American Plate is nowhere being subducted, although it is in motion, presents a problem. The same holds for the African, Eurasian, and Antarctic plates.
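An order-of-magnitude sense of why slab pull dominates can be had by multiplying the slab's excess density by its cross-sectional area; all input values below are assumed round numbers, not figures from the article:

```python
def slab_pull_per_metre(delta_rho=80.0, g=9.81,
                        thickness_m=100e3, length_m=600e3):
    """Very rough slab-pull force per metre of trench (N/m):
    excess density (kg/m^3) * g * slab thickness * down-dip slab length.
    All inputs are assumed order-of-magnitude values."""
    return delta_rho * g * thickness_m * length_m
```

With these inputs the force comes out around 10^13 N per metre of trench, which is why the negative buoyancy of subducting slabs is regarded as the dominant plate-driving force.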
Gravitational sliding away from mantle doming: According to older theories, one of the driving mechanisms of the plates is the existence of large scale asthenosphere/mantle domes which cause the gravitational sliding of lithosphere plates away from them (see the paragraph on Mantle Mechanisms). This gravitational sliding represents a secondary phenomenon of this basically vertically oriented mechanism. It finds its roots in the Undation Model of van Bemmelen. This can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin. Driving forces related to Earth rotation Alfred Wegener, being a meteorologist, had proposed tidal forces and centrifugal forces as the main driving mechanisms behind continental drift; however, these forces were considered far too small to cause continental motion as the concept was of continents plowing through oceanic crust. Therefore, Wegener later changed his position and asserted that convection currents are the main driving force of plate tectonics in the last edition of his book in 1929. However, in the plate tectonics context (accepted since the seafloor spreading proposals of Heezen, Hess, Dietz, Morley, Vine, and Matthews (see below) during the early 1960s), the oceanic crust is suggested to be in motion with the continents which caused the proposals related to Earth rotation to be reconsidered. 
In more recent literature, these driving forces are: tidal drag due to the gravitational force the Moon (and the Sun) exerts on the crust of Earth; global deformation of the geoid due to small displacements of the rotational pole with respect to Earth's crust; and other smaller deformation effects of the crust due to wobbles and spin movements of Earth's rotation on a smaller timescale. Forces that are small and generally negligible are the Coriolis force and the centrifugal force, which is treated as a slight modification of gravity. For these mechanisms to be overall valid, systematic relationships should exist all over the globe between the orientation and kinematics of deformation and the geographical latitudinal and longitudinal grid of Earth itself. Studies of these systematic relations in the second half of the nineteenth century and the first half of the twentieth century underlined exactly the opposite: that the plates had not moved in time, that the deformation grid was fixed with respect to Earth's equator and axis, and that gravitational driving forces were generally acting vertically and caused only local horizontal movements (the so-called pre-plate tectonic, "fixist theories"). Later studies (discussed below on this page), therefore, invoked many of the relationships recognized during this pre-plate tectonics period to support their theories (see the anticipations and reviews in the work of van Dijk and collaborators). Of the many forces discussed in this paragraph, tidal force is still highly debated and defended as a possible principal driving force of plate tectonics. The other forces are only used in global geodynamic models not using plate tectonics concepts (therefore beyond the discussions treated in this section) or proposed as minor modulations within the overall plate tectonics model. In 1973, George W. Moore of the USGS and R. C. Bostrom presented evidence for a general westward drift of Earth's lithosphere with respect to the mantle.
He concluded that tidal forces (the tidal lag or "friction") caused by Earth's rotation and the forces acting upon it by the Moon are a driving force for plate tectonics. As Earth spins eastward beneath the Moon, the Moon's gravity ever so slightly pulls Earth's surface layer back westward, just as proposed by Alfred Wegener (see above). In a more recent 2006 study, scientists reviewed and advocated these earlier proposed ideas. It has also been suggested recently that this observation may explain why Venus and Mars have no plate tectonics, as Venus has no moon and Mars' moons are too small to have significant tidal effects on the planet. In a recent paper, it was suggested that, on the other hand, it can easily be observed that many plates are moving north and eastward, and that the dominantly westward motion of the Pacific Ocean basins derives simply from the eastward bias of the Pacific spreading center (which is not a predicted manifestation of such lunar forces). In the same paper the authors admit, however, that relative to the lower mantle, there is a slight westward component in the motions of all the plates. They demonstrated, though, that the westward drift, seen only for the past 30 Ma, is attributed to the increased dominance of the steadily growing and accelerating Pacific Plate. The debate is still open. Relative significance of each driving force mechanism The vector of a plate's motion is a function of all the forces acting on the plate; however, therein lies the problem regarding the degree to which each process contributes to the overall motion of each tectonic plate. The diversity of geodynamic settings and the properties of each plate result from the impact of the various processes actively driving each individual plate. One method of dealing with this problem is to consider the relative rate at which each […] America, Africa, Antarctica, India, and Australia.
The evidence for such an erstwhile joining of these continents was patent to field geologists working in the southern hemisphere. The South African Alex du Toit put together a mass of such information in his 1937 publication Our Wandering Continents, and went further than Wegener in recognising the strong links between the Gondwana fragments. Wegener's work was initially not widely accepted, in part due to a lack of detailed evidence. Earth might have a solid crust and mantle and a liquid core, but there seemed to be no way that portions of the crust could move around. Distinguished scientists, such as Harold Jeffreys and Charles Schuchert, were outspoken critics of continental drift. Despite much opposition, the view of continental drift gained support and a lively debate started between "drifters" or "mobilists" (proponents of the theory) and "fixists" (opponents). During the 1920s, 1930s and 1940s, the former reached important milestones proposing that convection currents might have driven the plate movements, and that spreading may have occurred below the sea within the oceanic crust. Concepts close to the elements now incorporated in plate tectonics were proposed by geophysicists and geologists (both fixists and mobilists) like Vening-Meinesz, Holmes, and Umbgrove. In 1941 Otto Ampferer described in his publication "Thoughts on the motion picture of the Atlantic region" processes that anticipate what is now called seafloor spreading and subduction. One of the first pieces of geophysical evidence that was used to support the movement of lithospheric plates came from paleomagnetism. This is based on the fact that rocks of different ages show a variable magnetic field direction, evidenced by studies since the mid–nineteenth century. The magnetic north and south poles reverse through time, and, especially important in paleotectonic studies, the relative position of the magnetic north pole varies through time. 
Initially, during the first half of the twentieth century, the latter phenomenon was explained by introducing what was called "polar wander" (see apparent polar wander) (i.e., it was assumed that the north pole location had been shifting through time). An alternative explanation, though, was that the continents had moved (shifted and rotated) relative to the north pole, and each continent, in fact, shows its own "polar wander path". During the late 1950s it was successfully shown on two occasions that these data could show the validity of continental drift: by Keith Runcorn in a paper in 1956, and by Warren Carey in a symposium held in March 1956. The second piece of evidence in support of continental drift came during the late 1950s and early 60s from data on the bathymetry of the deep ocean floors and the nature of the oceanic crust such as magnetic properties and, more generally, with the development of marine geology which gave evidence for the association of seafloor spreading along the mid-oceanic ridges and magnetic field reversals, published between 1959 and 1963 by Heezen, Dietz, Hess, Mason, Vine & Matthews, and Morley. Simultaneous advances in early seismic imaging techniques in and around Wadati–Benioff zones along the trenches bounding many continental margins, together with many other geophysical (e.g. gravimetric) and geological observations, showed how the oceanic crust could disappear into the mantle, providing the mechanism to balance the extension of the ocean basins with shortening along its margins. All this evidence, both from the ocean floor and from the continental margins, made it clear around 1965 that continental drift was feasible. The theory of plate tectonics was defined in a series of papers between 1965 and 1967. The theory revolutionized the Earth sciences, explaining a diverse range of geological phenomena and their implications in other studies such as paleogeography and paleobiology. 
Continental drift In the late 19th and early 20th centuries, geologists assumed that Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time. It was observed as early as 1596 that the opposite coasts of the Atlantic Ocean—or, more precisely, the edges of the continental shelves—have similar shapes and seem to have once fitted together. Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept. The discovery of radioactivity and its associated heating properties in 1895 prompted a re-examination of the apparent age of Earth. This had previously been estimated by its cooling rate under the assumption that Earth's surface radiated like a black body. Those calculations had implied that, even if it started at red heat, Earth would have dropped to its present temperature in a few tens of millions of years. Armed with the knowledge of a new heat source, scientists realized that Earth would be much older, and that its core was still sufficiently hot to be liquid. By 1915, after having published a first article in 1912, Alfred Wegener was making serious arguments for the idea of continental drift in the first edition of The Origin of Continents and Oceans. In that book (re-issued in four successive editions up to the final one in 1936), he noted how the east coast of South America and the west coast of Africa looked as if they were once attached. 
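The "few tens of millions of years" figure from the cooling-rate argument can be reproduced with a Kelvin-style half-space cooling calculation: a hot half-space cools until its surface geothermal gradient falls to the observed value. The initial temperature and gradient used here are illustrative assumptions in the spirit of that historical estimate, not values from the article:

```python
import math

SECONDS_PER_YEAR = 3.15576e7

def kelvin_age_myr(t_initial=2000.0, gradient_k_per_m=0.025, kappa=1.0e-6):
    """Kelvin-style cooling age of the Earth in Myr.

    A half-space initially at t_initial (K above the surface) cools
    conductively; its surface gradient decays until it matches the
    observed geothermal gradient: age = T0^2 / (pi * kappa * gradient^2).
    All inputs are assumed, illustrative values."""
    age_s = t_initial**2 / (math.pi * kappa * gradient_k_per_m**2)
    return age_s / SECONDS_PER_YEAR / 1.0e6
```

With these inputs the estimate lands in the tens of millions of years, far short of Earth's true age, precisely because the calculation omits radioactive heating, the "new heat source" the text describes.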
Wegener was not the first to note this (Abraham Ortelius, Antonio Snider-Pellegrini, Eduard Suess, Roberto Mantovani and Frank Bursley Taylor preceded him, just to mention a few), but he was the first to marshal significant fossil and paleo-topographical and climatological evidence to support this simple observation (and was supported in this by researchers such as Alex du Toit). Furthermore, when the rock strata of the margins of separate continents are very similar it suggests that these rocks were formed in the same way, implying that they were joined initially. For instance, parts of Scotland and Ireland contain rocks very similar to those found in Newfoundland and New Brunswick. Similarly, the Caledonian Mountains of Europe and parts of the Appalachian Mountains of North America are very similar in structure and lithology. However, his ideas were not taken seriously by many geologists, who pointed out that there was no apparent mechanism for continental drift. Specifically, they did not see how continental rock could plow through the much denser rock that makes up oceanic crust. Wegener could not explain the force that drove continental drift, and his vindication did not come until after his death in 1930. Floating continents, paleomagnetism, and seismicity zones As it was observed early that although granite existed on continents, seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental type crust) and "sima" (oceanic type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore looked apparent that a layer of basalt (sima) underlies the continental rocks. However, based on abnormalities in plumb line deflection by the Andes in Peru, Pierre Bouguer had deduced that less-dense mountains must have a downward projection into the denser layer underneath.
The concept that mountains had "roots" was confirmed by George B. Airy a hundred years later, during study of Himalayan gravitation, and seismic studies detected corresponding density variations. Therefore, by the mid-1950s, the question remained unresolved as to whether mountain roots were clenched in surrounding basalt or were floating on it like an iceberg. During the 20th century, improvements in and greater use of seismic instruments such as seismographs enabled scientists to learn that earthquakes tend to be concentrated in specific areas, most notably along the oceanic trenches and spreading ridges. By the late 1920s, seismologists were beginning to identify several prominent earthquake zones parallel to the trenches that typically were inclined 40–60° from the horizontal and extended several hundred kilometers into Earth. These zones later became known as Wadati–Benioff zones, or simply Benioff zones, in honor of the seismologists who first recognized them, Kiyoo Wadati of Japan and Hugo Benioff of the United States. The study of global seismicity greatly advanced in the 1960s with the establishment of the Worldwide Standardized Seismograph Network (WWSSN) to monitor the compliance of the 1963 treaty banning above-ground testing of nuclear weapons. The much improved data from the WWSSN instruments allowed seismologists to map precisely the zones of earthquake concentration worldwide. Meanwhile, debates developed around the phenomenon of polar wander. Since the early debates of continental drift, scientists had discussed and used evidence that polar drift had occurred because continents seemed to have moved through different climatic zones during the past. Furthermore, paleomagnetic data had shown that the magnetic pole had also shifted during time. Reasoning in an opposite way, the continents might have shifted and rotated, while the pole remained relatively fixed. 
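The geometry of the inclined Wadati–Benioff zones described above can be sketched with simple trigonometry: for a planar zone dipping 40–60°, the horizontal distance from the trench to the point above a given slab depth is depth/tan(dip). The ~100 km depth of magma generation used below is an assumed typical value, not one stated in the text:

```python
import math

def arc_trench_distance_km(melt_depth_km=100.0, dip_deg=45.0):
    """Horizontal distance from trench to volcanic arc for a planar
    Wadati-Benioff zone: depth / tan(dip). The default melt depth is
    an assumed typical value, not a figure from the article."""
    return melt_depth_km / math.tan(math.radians(dip_deg))
```

The steeper the zone dips, the closer the volcanic arc sits to the trench, which is one reason arc–trench gaps vary between subduction zones.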
The first time the evidence of magnetic polar wander was used to support the movements of continents was in a paper by Keith Runcorn in 1956, and successive papers by him and his students Ted Irving (who was actually the first to be convinced of the fact that paleomagnetism supported continental drift) and Ken Creer. This was immediately followed by a symposium in Tasmania in March 1956. In this symposium, the evidence was used in the theory of an expansion of the global crust. In this hypothesis, the shifting of the continents can be simply explained by a large increase in the size of Earth since its formation. However, this was unsatisfactory because its supporters could offer no convincing mechanism to produce a significant expansion of Earth. Certainly there is no evidence that the moon has expanded in the past 3 billion years; other work would soon show that the evidence was equally in support of continental drift on a globe with a stable radius. During the thirties up to the late fifties, works by Vening-Meinesz, Holmes, Umbgrove, and numerous others outlined concepts that were close or nearly identical to modern plate tectonics theory. In particular, the English geologist Arthur Holmes proposed in 1920 that plate junctions might lie beneath the sea, and in 1928 that convection currents within the mantle might be the driving force. Often, these contributions are forgotten because: At the time, continental drift was not accepted. Some of these ideas were discussed in the context of abandoned fixistic ideas of a deforming globe without continental drift or an expanding Earth. They were published during an episode of extreme political and economic instability that hampered scientific communication. Many were published by European scientists and at first not mentioned or given little credit in the papers on sea floor spreading published by the American researchers in the 1960s. 
Mid-oceanic ridge spreading and convection In 1947, a team of scientists led by Maurice Ewing, utilizing the Woods Hole Oceanographic Institution's research vessel Atlantis and an array of instruments, confirmed the existence of a rise in the central Atlantic Ocean, and found that the floor of the seabed beneath the layer of sediments consisted of basalt, not the granite which is the main constituent of continents. They also found that the oceanic crust was much thinner than continental crust. All these new findings raised important and intriguing questions. The new data that had been collected on the ocean basins also showed particular characteristics regarding the bathymetry. One of the major outcomes of these datasets was that all along the globe, a system of mid-oceanic ridges was detected. An important conclusion was that along this system, new ocean floor was being created, which led to the concept of the "Great Global Rift". This was described in the crucial paper of Bruce Heezen (1960) based on his work with Marie Tharp, which would trigger a real revolution in thinking. A profound consequence of seafloor spreading is that new crust was, and still is, being continually created along the oceanic ridges. Therefore, Heezen advocated the so-called "expanding Earth" hypothesis of S. Warren Carey (see above). Yet the question remained as to how new crust could continuously be added along the oceanic ridges without increasing the size of Earth. In reality, this question had been solved already by numerous scientists during the 1940s and the 1950s, like Arthur Holmes, Vening-Meinesz, Coates and many others: the excess crust disappeared along what were called oceanic trenches, where so-called "subduction" occurred. Therefore, when various scientists during the early 1960s started to reason on the data at their disposal regarding the ocean floor, the pieces of the theory quickly fell into place.
The question particularly intrigued Harry Hammond Hess, a Princeton University geologist and a Naval Reserve Rear Admiral, and Robert S. Dietz, a scientist with the U.S. Coast and Geodetic Survey who first coined the term seafloor spreading. Dietz and Hess (the former published the same idea one year earlier in Nature, but priority belongs to Hess who had already distributed an unpublished manuscript of his 1962 article by 1960) were among the small number who really understood the broad implications of sea floor spreading and how it would eventually agree with the, at that time, unconventional and unaccepted ideas of continental drift and the elegant and mobilistic models proposed by previous workers like Holmes. In the same year, Robert R. Coats of the U.S. Geological Survey described the main features of island arc subduction in the Aleutian Islands. His paper, though little noted (and even ridiculed) at the time, has since been called "seminal" and "prescient". In reality, it actually shows that the work by the European scientists on island arcs and mountain belts performed and published during the 1930s up until the 1950s was applied and appreciated also in the United States. If Earth's crust was expanding along the oceanic ridges, Hess and Dietz reasoned like Holmes and others before them, it must be shrinking elsewhere. Hess followed Heezen, suggesting that new oceanic crust continuously spreads away from the ridges in a conveyor belt–like motion. And, using the mobilistic concepts developed before, he correctly concluded that many millions of years later, the oceanic crust eventually descends along the continental margins where oceanic trenches—very deep, narrow canyons—are formed, e.g. along the rim of the Pacific Ocean basin. 
Hess's important step was to propose convection currents as the driving force in this process, arriving at the same conclusions as Holmes had decades before, with the only difference that the renewal of the ocean crust was now explained by Heezen's mechanism of spreading along the ridges. Hess therefore concluded that the Atlantic Ocean was expanding while the Pacific Ocean was shrinking. As old oceanic crust is "consumed" in the trenches (like Holmes and others, he thought this was done by thickening of the continental lithosphere, not, as now understood, by underthrusting at a larger scale of the oceanic crust itself into the mantle), new magma rises and erupts along the spreading ridges to form new crust. In effect, the ocean basins are perpetually being "recycled", with the forming of new crust and the destruction of old oceanic lithosphere occurring simultaneously. Thus, the new mobilistic concepts neatly explained why Earth does not get bigger with sea floor spreading, why there is so little sediment accumulation on the ocean floor, and why oceanic rocks are much younger than continental rocks. Magnetic striping Beginning in the 1950s, scientists like Victor Vacquier, using magnetic instruments (magnetometers) adapted from airborne devices developed during World War II to detect submarines, began recognizing odd magnetic variations across the ocean floor. This finding, though unexpected, was not entirely surprising, because it was known that basalt—the iron-rich volcanic rock making up the ocean floor—contains a strongly magnetic mineral (magnetite) and can locally distort compass readings. This distortion was recognized by Icelandic mariners as early as the late 18th century. More importantly, because the presence of magnetite gives the basalt measurable magnetic properties, these newly discovered magnetic variations provided another means to study the deep ocean floor. 
When newly formed rock cools, its magnetic minerals record the orientation of Earth's magnetic field at the time. As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping, and was published by Ron G. Mason and co-workers in 1961, who, however, did not explain these data in terms of sea floor spreading, as Vine, Matthews and Morley would a few years later. The discovery of magnetic striping called for an explanation. In the early 1960s scientists such as Heezen, Hess and Dietz had begun to theorise that mid-ocean ridges mark structurally weak zones where the ocean floor was being ripped in two lengthwise along the ridge crest (see the previous paragraph). New magma from deep within Earth rises easily through these weak zones and eventually erupts along the crest of the ridges to create new oceanic crust. This process, at first called the "conveyor belt hypothesis" and later seafloor spreading, operating over many millions of years, continues to form new ocean floor all across the 50,000 km-long system of mid-ocean ridges. Only four years after the maps with the "zebra pattern" of magnetic stripes were published, the link between sea floor spreading and these patterns was correctly made, independently, by Lawrence Morley and by Fred Vine and Drummond Matthews in 1963, in what is now called the Vine–Matthews–Morley hypothesis. 
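The recording mechanism lends itself to a toy model: crust now sitting a distance d from the ridge axis formed d divided by the spreading half-rate years ago, and locked in whatever field polarity prevailed as it cooled. The half-rate and reversal ages below are illustrative values for the sketch, not real measured rates or geomagnetic chron boundaries.

```python
# Toy model of seafloor magnetic striping (illustrative numbers only).
HALF_RATE_KM_PER_MYR = 20.0          # assumed spreading half-rate
REVERSAL_AGES_MYR = [0.78, 2.58, 3.58]  # assumed reversal ages, Myr before present

def polarity_at(distance_km: float) -> str:
    """Polarity recorded in crust |distance_km| from the ridge crest."""
    age_myr = abs(distance_km) / HALF_RATE_KM_PER_MYR
    # Each reversal older crust has lived through flips its apparent sign.
    reversals_since = sum(1 for t in REVERSAL_AGES_MYR if age_myr > t)
    return "normal" if reversals_since % 2 == 0 else "reversed"

# Equal distances on either flank have equal ages, hence equal polarity:
# the stripes come out symmetric about the ridge, and the youngest crust
# at the crest always carries present-day (normal) polarity.
print(polarity_at(0), polarity_at(30), polarity_at(-30))  # prints: normal reversed reversed
```

The model reproduces exactly the lines of evidence cited for the hypothesis: symmetry about the crest, normal polarity at the axis, and alternation with distance.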
This hypothesis linked these patterns to geomagnetic reversals and was supported by several lines of evidence: the stripes are symmetrical around the crests of the mid-ocean ridges; at or near the crest of the ridge, the rocks are very young, and they become progressively older away from the ridge crest; the youngest rocks at the ridge crest always have present-day (normal) polarity; and stripes of rock parallel to the ridge crest alternate in magnetic polarity (normal-reversed-normal, etc.), suggesting that they were formed during different epochs, documenting the normal and reversed episodes of Earth's magnetic field already known from independent studies. By explaining both the zebra-like magnetic striping and the construction of the mid-ocean ridge system, the seafloor spreading hypothesis (SFS) quickly gained converts and represented another major advance in the development of the plate-tectonics theory. Furthermore, the oceanic crust now came to be appreciated as a natural "tape recording" of the history of the geomagnetic field reversals (GMFR) of Earth's magnetic field. Today, extensive studies are dedicated to the calibration of the normal-reversal patterns in the oceanic crust on one hand and known timescales derived from the dating of basalt layers in sedimentary sequences (magnetostratigraphy) on the other, to 
was the successor to the Philips Videopac G7000, the European counterpart to the American Magnavox Odyssey². The system featured excellently tailored background and foreground graphics. The G7400 could play three types of games: all normal G7000 games, special G7000 games with additional high-res background graphics that would appear only when played on the G7400, and G7400-only games with high-res sprites and backgrounds. Odyssey³ There were plans to release the G7400 in the United States as the Odyssey³ and later as the Odyssey³ Command Center; the system was demonstrated at the 1983 Consumer Electronics Show, and some prototypes have been found. The Odyssey³ was never released, mostly because company executives concluded that it was not technologically advanced enough to compete in the marketplace. Also, the video game crash of 1983 ended all lingering hopes for a release. The Odyssey³ was to feature a real mechanical keyboard, unlike the membrane keyboard found in the G7000 and Odyssey², as well as a built-in joystick holder for dual-joystick games. Prototypes for a 300 baud modem and a speech synthesizer are known to have been made, and a laserdisc interface was planned to allow 
Pong exclusively through Sears retail stores. The home version was also a commercial success and led to numerous clones. The game was remade on numerous home and portable platforms following its release. Pong is part of the permanent collection of the Smithsonian Institution in Washington, D.C., due to its cultural impact. Gameplay Pong is a two-dimensional sports game that simulates table tennis. The player controls an in-game paddle by moving it vertically across the left or right side of the screen. They can compete against another player controlling a second paddle on the opposing side. Players use the paddles to hit a ball back and forth. The goal is for each player to reach eleven points before the opponent; points are earned when one fails to return the ball to the other. Development and history Pong was the first game developed by Atari. After producing Computer Space, Bushnell decided to form a company to produce more games by licensing ideas to other companies. The first contract was with Bally Manufacturing Corporation for a driving game. Soon after the founding, Bushnell hired Allan Alcorn because of his experience with electrical engineering and computer science; Bushnell and Dabney also had previously worked with him at Ampex. Prior to working at Atari, Alcorn had no experience with video games. Bushnell had originally planned to develop a driving video game, influenced by Chicago Coin's Speedway (1969) which at the time was the biggest-selling electro-mechanical game at his amusement arcade. However, Bushnell had concerns that it might be too complicated for Alcorn's first game. To acclimate Alcorn to creating games, Bushnell gave him a project secretly meant to be a warm-up exercise. Bushnell told Alcorn that he had a contract with General Electric for a product, and asked Alcorn to create a simple game with one moving spot, two paddles, and digits for score keeping. 
In 2011, Bushnell stated that the game was inspired by previous versions of electronic tennis he had played before; Bushnell played a version on a PDP-1 computer in 1964 while attending college. However, Alcorn has claimed it was in direct response to Bushnell's viewing of the Magnavox Odyssey's Tennis game. In May 1972, Bushnell had visited the Magnavox Profit Caravan in Burlingame, California where he played the Magnavox Odyssey demonstration, specifically the table tennis game. Though he thought the game lacked quality, seeing it prompted Bushnell to assign the project to Alcorn. Alcorn first examined Bushnell's schematics for Computer Space, but found them to be illegible. He went on to create his own designs based on his knowledge of transistor–transistor logic (TTL) and Bushnell's game. Feeling the basic game was too boring, Alcorn added features to give the game more appeal. He divided the paddle into eight segments to change the ball's angle of return. For example, the center segments return the ball at a 90° angle in relation to the paddle, while the outer segments return the ball at smaller angles. He also made the ball accelerate the longer it remained in play; missing the ball reset the speed. Another feature was that the in-game paddles were unable to reach the top of the screen. This was caused by a simple circuit that had an inherent defect. Instead of dedicating time to fixing the defect, Alcorn decided it gave the game more difficulty and helped limit the time the game could be played; he imagined two skilled players being able to play forever otherwise. Three months into development, Bushnell told Alcorn he wanted the game to feature realistic sound effects and a roaring crowd. Dabney wanted the game to "boo" and "hiss" when a player lost a round. Alcorn had limited space available for the necessary electronics and was unaware of how to create such sounds with digital circuits. 
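The paddle behaviour described above lends itself to a short sketch. The eight-segment split, the 90° centre return, the acceleration during a rally, and the speed reset on a miss come from the text; the specific outer-segment angles and the acceleration factor below are illustrative assumptions, since the original game implemented this logic in TTL hardware rather than software.

```python
# Sketch of Pong's segmented paddle and speed-up rule (angle steps and
# the 1.1x acceleration factor are illustrative guesses, not the values
# wired into Alcorn's original TTL circuitry).

SEGMENTS = 8

def return_angle(hit_offset: float) -> float:
    """Angle of the returned ball, in degrees from the paddle surface.

    hit_offset is where the ball struck, in [0, 1) from one paddle end
    to the other. Centre segments return the ball at 90 degrees to the
    paddle; outer segments return it at progressively smaller angles.
    """
    segment = min(int(hit_offset * SEGMENTS), SEGMENTS - 1)
    # How many segments away from the paddle centre: 0 (centre) to 3 (edge).
    steps_from_centre = abs(segment - (SEGMENTS - 1) / 2) - 0.5
    return 90.0 - 20.0 * steps_from_centre   # 90, 70, 50, 30 degrees

def next_speed(speed: float, returned: bool, base: float = 1.0) -> float:
    """Ball accelerates the longer it stays in play; a miss resets it."""
    return speed * 1.1 if returned else base
```

A centre hit (`return_angle(0.5)`) comes back at 90°, while an edge hit (`return_angle(0.0)`) comes back at the shallowest angle, which is what keeps rallies unpredictable.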
After inspecting the sync generator, he discovered that it could generate different tones and used those for the game's sound effects. To construct the prototype, Alcorn purchased a $75 Hitachi black-and-white television set from a local store, placed it into a wooden cabinet, and soldered the wires into boards to create the necessary circuitry. The prototype impressed Bushnell and Dabney so much that they felt it could be a profitable product and decided to test its marketability. In August 1972, Bushnell and Alcorn installed the Pong prototype at a local bar, Andy Capp's Tavern. They selected the bar because of their good working relationship with the bar's owner and manager, Bill Gaddis; Atari supplied pinball machines to Gaddis. Bushnell and Alcorn placed the prototype on one of the tables near the other entertainment machines: a jukebox, pinball machines, and Computer Space. The game was well received the first night and its popularity continued to grow over the next one and a half weeks. Bushnell then went on a business trip to Chicago to demonstrate Pong to executives at Bally and Midway Manufacturing; he intended to use Pong to fulfill his contract with Bally, rather than the driving game. A few days later, the prototype began exhibiting technical issues and Gaddis contacted Alcorn to fix it. Upon inspecting the machine, Alcorn discovered that the problem was that the coin mechanism was overflowing with quarters. After hearing about the game's success, Bushnell decided there would be more profit for Atari to manufacture the game rather than license it, but the interest of Bally and Midway had already been piqued. Bushnell decided to inform each of the two groups that the other was uninterested—Bushnell told the Bally executives that the Midway executives did not want it and vice versa—to preserve the relationships for future dealings. Upon hearing Bushnell's comment, the two groups declined his offer. 
Bushnell had difficulty finding financial backing for Pong; banks viewed it as a variant of pinball, which at the time the general public associated with the Mafia. Atari eventually obtained a line of credit from Wells Fargo that it used to expand its facilities to house an assembly line. The company announced Pong on 29 November 1972. Management sought assembly workers at the local unemployment office, but was unable to keep up with demand. The first arcade cabinets produced were assembled very slowly, about ten machines a day, many of which failed quality testing. Atari eventually streamlined the process and began producing the game in greater quantities. By 1973, they began shipping Pong to other countries with the aid of foreign partners. In Japan, Pong was officially released in November 1973 by Atari Japan, which would later become part of Namco. However, Pong had been beaten to the market by two Japanese Pong clones released in July 1973: Sega's Pong Tron and Taito's Elepong. Home version After the success of Pong, Bushnell pushed his employees to create new products. A new electronic technology, the large-scale integration (LSI) chip, had recently become available, which Bushnell believed would "allow pioneering in new" game concepts. Atari began working on the reduction of Pong from a large arcade printed circuit board (PCB) down to a small LSI chip for use in a home system. The initial development cost for a game on a single LSI chip was expensive, costing around , but once the chip was developed, it became cheaper to mass-produce the game as well as more difficult to reverse-engineer. In 1974, Atari engineer Harold Lee proposed a home version of Pong that would connect to a television: Home Pong. The system began development under the codename Darlene, named after an employee at Atari. Alcorn worked with Lee to develop the designs and prototype and based them on the same digital technology used in their arcade games. The two worked in shifts to save time and money; Lee worked on the design's logic during the day, while Alcorn debugged the designs in the evenings. After the designs were approved, fellow Atari engineer Bob Brown assisted Alcorn and Lee in building a prototype. 
The prototype consisted of a device attached to a wooden pedestal containing over a hundred wires, which would eventually be replaced with a single chip designed by Alcorn and Lee; the chip had yet to be tested and built before the prototype was constructed. The chip was finished in the latter half of 1974, and was, at the time, the highest-performing chip used in a consumer product. Bushnell and Gene Lipkin, Atari's vice-president of sales, approached toy and electronic retailers to sell Home Pong, but were rejected. Retailers felt the product was too expensive and would not interest consumers. Atari contacted the Sears Sporting Goods department after noticing a Magnavox Odyssey advertisement in the sporting goods section of its catalog. Atari staff discussed the game with a representative, Tom Quinn, who expressed enthusiasm and offered the company an exclusive deal. Believing they could find more favorable terms elsewhere, Atari's executives declined and continued to pursue toy retailers. In January 1975, Atari staff set up a Home Pong booth at the American Toy Fair (a trade fair) in New York City, but were unsuccessful in soliciting orders due to the high price of the unit. While at the show, they met Quinn again and, a few days later, set up a meeting with him to obtain a sales order. In order to gain approval from the Sporting Goods department, Quinn suggested Atari demonstrate the game to executives in Chicago. Alcorn and Lipkin traveled to the Sears Tower and, despite a technical complication connected with an antenna on top of the building which broadcast on the same channel as the game, obtained approval. Bushnell told Quinn he could produce 75,000 units in time for the Christmas season; however, Quinn requested double the amount. Though Bushnell knew Atari lacked the capacity to manufacture 150,000 units, he agreed. Atari acquired a new factory through funding obtained by venture capitalist Don Valentine. 
Supervised by Jimm Tubb, the factory fulfilled the Sears order. The first units manufactured were branded with Sears' "Tele-Games" name. Atari later released a version under its own brand in 1976. Lawsuit from Magnavox In April 1974, Magnavox filed suit against Atari, Allied Leisure, Bally Midway and Chicago Dynamics. Magnavox argued that Atari had infringed on Sanders Associates' patents relating to the concept of electronic ping-pong, based on detailed records Ralph Baer kept of the Odyssey's design process dating back to 1966. Other documents included depositions from witnesses and a signed guest book that demonstrated Bushnell had played the Odyssey's table tennis game prior to releasing Pong. In response to claims that he saw the Odyssey, Bushnell later stated: "The fact is that I absolutely did see the Odyssey game and I didn't think it was very clever." After considering his options, Bushnell decided to settle with Magnavox out of court in June 1976. Bushnell's lawyer felt they could win; however, he estimated legal costs of US$1.5 million, which would have exceeded Atari's funds. Magnavox offered Atari an agreement to become a licensee for US$1.5 million, payable in eight installments. In addition, Magnavox obtained the right to full information on Atari products publicly announced or released over the next year. Magnavox continued to pursue legal action against the other companies, and proceedings began shortly after Atari's settlement. The first case took place at the district court in Chicago, with Judge John Grady presiding. Magnavox won the suit against the remaining defendants. Atari may have delayed the announcement of the Atari 2600 by a few months to avoid disclosing information about the system under the settlement agreement. Impact and legacy The Pong arcade games manufactured by Atari were a great success. The prototype was well received by Andy Capp's Tavern patrons; people came to the bar solely to play the game. 
Following its release, Pong consistently earned four times more revenue than other coin-operated machines. Bushnell estimated that the game earned US$35–40 per day (i.e. 140–160 plays daily per console at $0.25 per play), which he described as unlike anything he had ever seen in the coin-operated entertainment industry at the time. The game's earning power resulted in an increase in the number of orders Atari received. This provided Atari with a steady source of income; the company sold the machines at three times the cost of production. By 1973, the company had filled 2,500 orders, and, at the end of 1974, had sold more than 8,000 units. The arcade cabinets have since become collector's items, with the cocktail-table version being the rarest. Soon after the game's successful testing at Andy Capp's Tavern, other companies began visiting the bar to inspect it. Similar games appeared on the market three months later, produced by companies like Ramtek and Nutting Associates. Atari could do little against the competitors, as it had not initially filed for patents on the solid-state technology used in the game. When the company did file for patents, complications delayed the process. As a result, the market consisted primarily of "Pong clones"; author Steven Kent estimated that Atari had produced fewer than a third of the machines. Bushnell referred to the competitors as "Jackals" because he felt they had an unfair advantage. His solution to competing against them was to produce more innovative games and concepts. Home Pong was an instant success following its limited 1975 release through Sears; around 150,000 units were sold that holiday season. The game became Sears' most successful product at the time, which earned Atari a Sears Quality Excellence Award. Atari's own version sold an additional 50,000 units. 
Similar to the arcade version, several companies released clones to capitalize on the home console's success, many of which continued to produce new consoles and video games. Magnavox re-released their Odyssey system with simplified hardware and new features and later released updated versions. Coleco entered the video game market with their Telstar console; it features three Pong variants and was also succeeded by newer models. Nintendo released the Color TV Game 6 in 1977, which plays six variations of electronic tennis. The next year, it was followed by an updated version, the Color TV Game 15, which features fifteen variations. The systems were Nintendo's entry into the home video game market and the first consoles Nintendo produced itself; it had previously licensed the Magnavox Odyssey. The dedicated Pong consoles and the numerous clones have since become rare to varying degrees; Atari's Pong consoles are common, while APF Electronics' TV Fun consoles are moderately rare. Prices among collectors, however, vary with rarity; the Sears Tele-Games versions are often cheaper than those with the Atari brand. Several publications consider Pong the game that launched the video game industry as a lucrative enterprise. Video game author David Ellis sees the 
providing customer service. "General Post Office" is sometimes used for the national headquarters of a postal service, even if the building does not provide customer service. A postal facility that is used exclusively for processing mail is instead known as a sorting office or delivery office, which may have a large central area known as a sorting or postal hall. Integrated facilities combining mail processing with railway stations or airports are known as mail exchanges. In India, post offices are found in almost every village having a panchayat (a "village council"), as well as in towns and cities throughout the geographical area of India. India's postal system changed its name to India Post after the advent of private courier companies in the 1990s. It is run by the Indian government's Department of Posts. India Post accepts and delivers inland letters, postcards, parcels, postal stamps, and money orders (money transfers). Few post offices in India offer speed post (fast delivery) and payment or bank savings services. It is also uncommon for Indian post offices to sell insurance policies or accept payment for electricity, landline telephone, or gas bills. Until the 1990s, post offices would collect fees for radio licenses and for recruitment to government jobs, and operate public call office (PCO) telephone booths. Postmen would deliver letters, money orders, and parcels to places within the assigned area of a particular post office even where there were no post offices. Each Indian post office is assigned a unique six-digit code called the Postal Index Number, or PIN; each post office is identified by its PIN. Private courier and delivery services often have offices as well, although these are usually not called "post offices", except in the case of Germany, which has fully privatised its national postal system. As an abbreviation, PO is used, together with GPO for General Post Office and LPO for Licensed Post Office. 
History There is evidence of corps of royal couriers disseminating the decrees of Egyptian pharaohs as early as 2400 BCE, and it is possible that the service greatly precedes that date. Similarly, there may have been ancient organised systems of post houses providing mounted courier service, although sources vary as to precisely who initiated the practice. In the Persian Empire, a Chapar Khaneh system existed along the Royal Road. Similar postage systems were established in India and China by the Mauryan and Han dynasties in the 2nd century BCE. The Roman historian Suetonius credited Augustus with regularising the Roman transportation and courier network, the Cursus Publicus. Local officials were obliged to provide couriers who would be responsible for their message's entire course. Locally maintained post houses and privately owned rest houses were obliged or honored to care for couriers along their way. The Roman emperor Diocletian later established two parallel systems: one providing fresh horses or mules for urgent correspondence and the other providing sturdy oxen for bulk shipments. The Byzantine historian Procopius, though not unbiased, records that the Cursus Publicus system remained largely intact until it was dismantled in the Byzantine Empire by the emperor Justinian in the 6th century. The Princely House of Thurn and Taxis initiated regular mail service from Brussels in the 16th century, directing the Imperial Post of the Holy Roman Empire. The British Postal Museum claims that the oldest functioning post office in the world is on High Street in Sanquhar, Scotland; it has functioned continuously since 1712, in an era when horses and stagecoaches were used to carry mail. In parts of Europe, special postal censorship offices existed to intercept and censor mail. In France, such offices were known as cabinets noirs. 
Unstaffed postal facilities In many jurisdictions, mailboxes and post office boxes have long been in widespread use for drop-off and pickup (respectively) of mail and small packages outside post offices or when offices are closed. Germany's national postage system Deutsche Post introduced the Pack-Station for package delivery, including both drop-off and pickup, in 2001. In the 2000s, the United States Postal Service began to install Automated Postal Centers (APCs) in many locations, both in post offices, for when they are closed or busy, and in retail locations. APCs can print postage and accept mail and small packages. 
Notable post offices Operational Central Post Office (1939), also temporary home to the Privy Council of Canada General Post Office in Dublin (inaugurated 1818), headquarters of the Irish post and headquarters of the 1916 Easter Rising First Toronto Post Office (1833) General Post Office (1864), erected on the site of the Black Hole of Calcutta General Post Office (1874) in Chennai, India General Post Office (1887) in Lahore, Pakistan General Post Office (1895), the headquarters of the Sri Lankan Post General Post Office (1903), headquarters of the Croatian post Istanbul Main Post Office (1905), home of the Istanbul Postal Museum James Farley Post Office (1912), America's largest operating post office and the main office for New York City, which bears the famous translation of Herodotus's description of the Persian postal system along its front facade: "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds" General 
criteria. Notable categorizations are colors, ages, or facial expressions of the people in the photos. Slideshows can be viewed with the application, along with music and playlists. The software was updated with the release of system software version 3.40, allowing users to upload and browse photos on Facebook and Picasa. PlayMemories Studio PlayMemories is an optional stereoscopic 3D (and also standard) photo viewing application, installed as a 956 MB download from the PlayStation Store. The application is dedicated specifically to 3D photos and features the ability to zoom into 3D environments and change the angle and perspective of panoramas. It requires system software 3.40 or higher, 3D photos, a 3D HDTV, and an HDMI cable for the 3D images to be viewed properly. Video services Video editor and uploader A new application was released as part of system software version 3.40 which allows users to edit videos on PlayStation 3 and upload them to the Internet. The software features basic video editing tools, including the ability to cut videos and add music and captions. Videos can then be rendered and uploaded to video sharing websites such as Facebook and YouTube. Video on demand In addition to the video service provided by the Sony Entertainment Network, the PlayStation 3 console has access to a variety of third-party video services, dependent on region: Since June 2009, VidZone has offered a free music video streaming service in Europe, Australia and New Zealand. In October 2009, Sony Computer Entertainment and Netflix announced that the Netflix streaming service would also be available on PlayStation 3 in the United States. A paid Netflix subscription was required for the service. The service became available in November 2009. Initially users had to use a free Blu-ray disc to access the service; however, in October 2010 the requirement to use a disc to gain access was removed. 
In April 2010, support for MLB.tv was added, allowing MLB.tv subscribers to watch regular season games live in HD and access new interactive features designed exclusively for PSN. In November 2010, access to the video and social networking site MUBI was enabled for European, New Zealand, and Australian users; the service integrates elements of social networking with rental or subscription video streaming, allowing users to watch and discuss films with other users. Also in November 2010, the video rental service VUDU, NHL GameCenter Live, and subscription service Hulu Plus launched on PlayStation 3 in the United States. In August 2011, Sony, in partnership with DirecTV, added NFL Sunday Ticket. Then in October 2011, Best Buy launched an app for its CinemaNow service. In April 2012, Amazon.com launched an Amazon Video app, accessible to Amazon Prime subscribers (in the US). Upon reviewing the PlayStation and Netflix collaboration, Pocket-Lint said "We've used the Netflix app on Xbox too and, as good as it is, we think the PS3 version might have the edge here," and stated that having Netflix and LoveFilm on PlayStation is "mind-blowingly good." In July 2013, the YuppTV OTT player launched its branded application on the PS3 computer entertainment system in the United States. Audio capabilities The PlayStation 3 has the ability to play standard audio CDs, a feature that was notably removed from its successors. PlayStation 3 added the ability to rip audio CDs and store them on the system's hard disk; the system has transcoders for ripping to either MP3, AAC, or Sony's own ATRAC (ATRAC3plus) formats. Early models were also able to play back Super Audio CDs; however, this support was dropped in the third hardware revision of the console in late 2007. All models do retain Direct Stream Digital playback ability. ATRAC-formatted tracks from Walkman digital audio players can be natively played on the PlayStation 3 by connecting the player to the system's USB port. 
The PlayStation 3 did not feature the Sony CONNECT Music Store. OtherOS support PlayStation 3 initially shipped with the ability to install an alternative operating system alongside the main system software; Linux and other Unix-based operating systems were available. The hardware allowed access to six of the seven Synergistic Processing Elements of the Cell microprocessor, but not the RSX 'Reality Synthesizer' graphics chip. The 'OtherOS' functionality was not present in the updated PS Slim models, and the feature was subsequently removed from previous versions of the PS3 as part of the machine's firmware update version 3.21, which was released on April 1, 2010; Sony cited security concerns as the rationale. The firmware update 3.21 was mandatory for access to the PlayStation Network. The removal caused some controversy, as the update removed officially advertised features from already sold products, and gave rise to several class action lawsuits aimed at making Sony return the feature or provide compensation. On December 8, 2011, U.S. District Judge Richard Seeborg dismissed the last remaining count of the class action lawsuit (other claims in the suit had previously been dismissed), stating: "As a legal matter, ... plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable." The U.S. Court of Appeals for the Ninth Circuit later partially reversed the dismissal and sent the case back to the district court. Leap year bug On March 1, 2010 (UTC), many of the original "fat" PlayStation 3 models worldwide experienced errors related to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability to connect to the PlayStation Network. 
However, the root cause of the problem was unrelated to the PlayStation Network, since even users who had never been online also had problems playing installed offline games (which queried the system timer as part of startup) and using system themes. At the same time, many users noted that the console's clock had gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word apocalypse and PS3, the abbreviation for the PlayStation 3 console. The error code displayed was typically 8001050F, and affected users were unable to sign in, play games, use dynamic themes and view/sync trophies. The problem affected only the first- through third-generation original PS3 units; the newer "Slim" models were unaffected because of different internal hardware for the clock. Sony confirmed that there was an error and stated that it was narrowing down the issue and continuing to work to restore service. By March 2, 2010 (UTC), owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet. On June 29, 2010, Sony released PS3 system software update 3.40, which improved the functionality of the internal clock to properly account for leap years. 
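The commonly cited reconstruction of this fault is that the clock hardware stored the year in binary-coded decimal (BCD), and a leap-year divisibility check read the BCD byte as if it were a plain binary number. The sketch below illustrates the idea; the function names and exact mechanism are illustrative assumptions, not Sony's actual firmware:

```python
def is_leap_year(year: int) -> bool:
    """Standard Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def bcd_encode(two_digits: int) -> int:
    """Pack a two-digit number as BCD: each decimal digit gets a nibble."""
    return ((two_digits // 10) << 4) | (two_digits % 10)

# Hypothetical bug: the two-digit year "10" (for 2010) is stored as the
# BCD byte 0x10, but the divisibility test treats the raw byte as binary.
raw = bcd_encode(10)          # 0x10 == 16 decimal
buggy_leap = (raw % 4 == 0)   # 16 % 4 == 0, so 2010 is flagged as a leap year
correct_leap = is_leap_year(2010)  # the correct answer is False

print(buggy_leap, correct_leap)  # True False
```

Under this reading, the clock produced a nonexistent February 29, 2010, which then tripped up date handling elsewhere in the system software.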
In addition, users are able to download original PlayStation format games from the PlayStation Store, and transfer and play them on PSP as well as on PS3 itself. It is also possible to use the Remote Play feature to play these and some PlayStation Network games remotely on PSP over a network or internet connection. Sony has also demonstrated PSP playing back video content from the PlayStation 3 hard disk across an ad hoc wireless network. This feature, referred to as Remote Play, is located under the browser icon on both PlayStation 3 and PlayStation Portable. Remote Play has since expanded to allow remote access to PS3 via PSP from any wireless access point in the world. PlayStation Network PlayStation Network is the unified online multiplayer gaming and digital media delivery service provided by Sony Computer Entertainment for PlayStation 3 and PlayStation Portable, announced during the 2006 PlayStation Business Briefing meeting in Tokyo. The service is always connected, free, and includes multiplayer support. The network enables online gaming, the PlayStation Store, PlayStation Home and other services. PlayStation Network uses real currency and PlayStation Network Cards, as seen with the PlayStation Store and PlayStation Home. PlayStation Plus PlayStation Plus (commonly abbreviated PS+ and occasionally referred to as PSN Plus) is a premium PlayStation Network subscription service that was officially unveiled at E3 2010 by Jack Tretton, President and CEO of SCEA. Rumors of such a service had circulated since Kaz Hirai's announcement at TGS 2009 of a possible paid service for PSN, with the existing free PSN service remaining available. Launched alongside PS3 firmware 3.40 and PSP firmware 6.30 on June 29, 2010, the paid-for subscription service provides users with enhanced services on the PlayStation Network, on top of the current PSN service which is still available with all of its features. 
These enhancements include the ability to have demos and game updates download automatically to PlayStation 3. Subscribers also get early or exclusive access to some betas, game demos, premium downloadable content and other PlayStation Store items. North American users also get a free subscription to Qore. Users may choose to purchase either a one-year or a three-month subscription to PlayStation Plus. PlayStation Store The PlayStation Store is an online virtual market available to users of Sony's PlayStation 3 (PS3) and PlayStation Portable (PSP) game consoles via the PlayStation Network. The Store offers a range of downloadable content both for purchase and free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on PS3 and PSP. The PS3 store can also be accessed on PSP via a Remote Play connection to PS3. The PSP store is also available via the PC application, Media Go. There have been over 600 million downloads from the PlayStation Store worldwide. The PlayStation Store is updated with new content each Tuesday in North America, and each Wednesday in PAL regions. In May 2010 this was changed from Thursdays to allow PSP games to be released digitally, closer to the time they are released on UMD. On March 29, 2021, Sony announced that it would shut down the PS3 version of the Store on July 2, though previous purchases on the store would remain downloadable. However, on April 19, following fan feedback, Sony reversed its decision and confirmed that the PS3 store would remain operational. What's New What's New was announced at Gamescom 2009 and was released on September 1, 2009, with PlayStation 3 system software 3.0. The feature was to replace the existing [Information Board], which displayed news from the PlayStation website associated with the user's region. 
The concept was developed further into a major PlayStation Network feature, which interacts with the [Status Indicator] to display a ticker of all content, excluding recently played content (currently in North America and Japan only). The system displays the What's New screen by default instead of the [Games] menu (or [Video] menu, if a movie was inserted) when starting up. What's New has four sections: "Our Pick", "Recently Played", latest information and new content available in PlayStation Store. The screen displays and links to four kinds of content across these sections. "Recently Played" displays the user's recently played games and online services only, whereas the other sections can contain website links, links to play videos and access to selected sections of the PlayStation Store. The PlayStation Store icons in the [Game] and [Video] section act similarly to the What's New screen, except that they only display and link to games and videos in the PlayStation Store, respectively. PlayStation Home PlayStation Home was a virtual 3D social networking service for the PlayStation Network. Home allowed users to create a custom avatar, which could be groomed realistically. Users could edit and decorate their personal apartments, avatars, or club houses with free, premium, or won content. Users could shop for new items or win prizes from PS3 games or Home activities. Users could interact and connect with friends and customize content in a virtual world. Home also acted as a meeting place for users that wanted to play multiplayer video games with others. A closed beta began in Europe in May 2007 and expanded to other territories soon after. Home was delayed and expanded several times before its initial release. An open beta test started on December 11, 2008. It remained a perpetual beta until its closure on March 31, 2015. Home was available directly from the PlayStation 3 XrossMediaBar. Membership was free, but required a PSN account. 
Home featured places to meet and interact, dedicated game spaces, developer spaces, company spaces, and events. The service underwent weekly maintenance and frequent updates. At the time of its closure in March 2015, Home had been downloaded by over 41 million users. Life with PlayStation Life with PlayStation, released on September 18, 2008, to succeed Folding@home, was retired on November 6, 2012. Life with PlayStation used virtual globe data to display news and information by city. Along with Folding@home functionality, the application provided access to three other information "channels", the first being the Live Channel, offering news headlines and weather provided by Google News, The Weather Channel, and the University of Wisconsin–Madison Space Science and Engineering Center, among other sources. The second channel was the World Heritage channel, which offered information about historic sites. The third channel was the United Village channel. United Village was designed to share information about communities and cultures worldwide. An update allowed video and photo viewing in the application. The fourth channel was the U.S.-exclusive PlayStation Network Game Trailers Channel for direct streaming of game trailers. Outage On April 20, 2011, Sony shut down the PlayStation Network and Qriocity for a prolonged interval, revealing on April 23 that this was due to "an external intrusion on our system". Sony later revealed that the personal information of 77 million users might have been taken, including: names; addresses; countries; email addresses; birthdates; PSN/Qriocity logins, passwords and handles/PSN online IDs. It also stated that it was possible that users' profile data, including purchase history and billing address, and PlayStation Network/Qriocity password security answers may have been obtained. 
There was no evidence that any credit card data had been taken, but the possibility could not be ruled out, and Sony advised customers that their credit card data may have been obtained. The credit card numbers were encrypted, however, and Sony never collected the three-digit CVC or CSC number from the back of the credit cards, which is required for authenticating some transactions. In response to the incident, Sony announced a "Welcome Back" program: 30 days of free PlayStation Plus membership for all PSN members, two free downloadable PS3 games, and a free one-year enrollment in an identity theft protection program. Sales and production costs Although its PlayStation predecessors had been very dominant against the competition and were hugely profitable for Sony, PlayStation 3 had an inauspicious start, and Sony chairman and CEO Sir Howard Stringer initially could not convince investors of a turnaround in its fortunes. The PS3 lacked the unique gameplay of the more affordable Wii, which became that generation's most successful console in terms of units sold. Furthermore, PS3 had to compete directly with Xbox 360, which had a market head start, and as a result the platform no longer had the exclusive titles that the PS2 had enjoyed, such as the Grand Theft Auto and Final Fantasy series (regarding cross-platform games, Xbox 360 versions were generally considered superior in 2006, although by 2008 the PS3 versions had reached parity or surpassed them), and it took longer than expected for PS3 to enjoy strong sales and close the gap with Xbox 360. Sony also continued to lose money on each PS3 sold through 2010, although the redesigned "slim" PS3 cut these losses. PlayStation 3's initial production cost is estimated by iSuppli to have been US$805.85 for the 20 GB model and US$840.35 for the 60 GB model. 
However, they were priced at US$499 and US$599, respectively, meaning that units may have been sold at an estimated loss of $306 or $241 depending on model, if the cost estimates were correct, and thus may have contributed to Sony's games division posting an operating loss of ¥232.3 billion (US$1.97 billion) in the fiscal year ending March 2007. In April 2007, soon after these results were published, Ken Kutaragi, President of Sony Computer Entertainment, announced plans to retire. Various news agencies, including The Times and The Wall Street Journal, reported that this was due to poor sales, while SCEI maintains that Kutaragi had been planning his retirement for six months prior to the announcement. In January 2008, Kaz Hirai, CEO of Sony Computer Entertainment, suggested that the console may start making a profit by early 2009, stating that, "the next fiscal year starts in April and if we can try to achieve that in the next fiscal year that would be a great thing" and that "[profitability] is not a definite commitment, but that is what I would like to try to shoot for". However, market analysts Nikko Citigroup had predicted that PlayStation 3 could be profitable by August 2008. In a July 2008 interview, Hirai stated that his objective was for PlayStation 3 to sell 150 million units by its ninth year, surpassing PlayStation 2's sales of 140 million in its nine years on the market. In January 2009, Sony announced that their gaming division was profitable in Q3 2008. After the system's launch, production costs were reduced significantly as a result of phasing out the Emotion Engine chip and falling hardware costs. The cost of manufacturing Cell microprocessors had fallen dramatically as a result of moving to the 65 nm production process, and Blu-ray Disc diodes had become cheaper to manufacture. As of January 2008, each unit cost around $400 to manufacture; by August 2009, Sony had reduced costs by a total of 70%, meaning it only cost Sony around $240 per unit. 
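As a quick sanity check, the quoted per-unit losses and the later cost figure can be reproduced from the estimates above (assuming the 70% reduction is measured against the 20 GB launch cost, which the text does not state explicitly):

```python
# iSuppli launch-cost estimates vs. retail prices (US$), from the article.
cost_20gb, price_20gb = 805.85, 499.00
cost_60gb, price_60gb = 840.35, 599.00

loss_20gb = cost_20gb - price_20gb   # 306.85 -> the quoted "$306"
loss_60gb = cost_60gb - price_60gb   # 241.35 -> the quoted "$241"

# A 70% total reduction by August 2009, applied to the 20 GB launch
# estimate, lands near the quoted ~$240 per unit.
cost_aug_2009 = cost_20gb * (1 - 0.70)   # about 241.76

print(int(loss_20gb), int(loss_60gb))  # 306 241
```

The quoted losses evidently truncate the cents rather than round them, and the ~$240 figure is consistent with a 70% cut from the launch estimate.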
Critical reception Early PlayStation 3 reviews after launch were critical of its high price and lack of quality games. Game developers regarded the architecture as difficult to program for. PS3 was, however, commended for its hardware, including its Blu-ray home theater capabilities and graphics potential. Critical and commercial reception to PS3 improved over time, after a series of price revisions, Blu-ray's victory over HD DVD, and the release of several well-received titles. Ars Technica's original launch review gave PS3 only a 6/10, but a second review of the console in June 2008 rated it a 9/10. In September 2009, IGN named PlayStation 3 the 15th-best gaming console of all time, behind both of its competitors: Wii (10th) and Xbox 360 (6th). However, PS3 has won IGN's "Console Showdown"—based on which console offers the best selection of games released during each year—in three of the four years since it began (2008, 2009 and 2011, with Xbox winning in 2010). IGN judged PlayStation 3 to have the best game line-up of 2008, based on their review scores in comparison to those of Wii and Xbox 360. In a comparison piece by PC Magazine's Will Greenwald in June 2012, PS3 was selected as an overall better console compared to Xbox 360. Pocket-Lint said of the console, "The PS3 has always been a brilliant games console," and that "For now, this is just about the best media device for the money." The original model PS3 was given the number-eight spot on PC World magazine's list of "The Top 21 Tech Screwups of 2006", where it was criticized for being "Late, Expensive and Incompatible". GamesRadar ranked PS3 as the top item in a feature on game-related PR disasters, asking how Sony managed to "take one of the most anticipated game systems of all time and—within the space of a year—turn it into a hate object reviled by the entire internet", but added that despite its problems the system has "untapped potential". 
Business Week summed up the general opinion by stating that it was "more impressed with what the PlayStation 3 could do than with what it currently does". Developers also found the machine difficult to program for. In 2007, Gabe Newell of Valve said "The PS3 is a total disaster on so many levels, I think it's really clear that Sony lost track of what customers and what developers wanted". He continued "I'd say, even at this late date, they should just cancel it and do a do over. Just say, 'This was a horrible disaster and we're sorry and we're going to stop selling this and stop trying to convince people to develop for it'". Doug Lombardi, VP of Marketing for Valve, has since stated that Valve is interested in developing for the console and is looking to hire talented PS3 programmers for future projects. He later restated Valve's position, "Until we have the ability to get a PS3 team together, until we find the people who want to come to Valve or who are at Valve who want to work on that, I don't really see us moving to that platform". At Sony's E3 2010 press conference, Newell made a live appearance to recant his previous statements, citing Sony's move to make the system more developer-friendly, and to announce that Valve would be developing Portal 2 for the system. He also claimed that the inclusion of Steamworks (Valve's system to automatically update their software independently) would help to make the PS3 version of Portal 2 the best console version on the market. Activision Blizzard CEO Bobby Kotick has criticized PS3's high development costs and its inferior attach rate and return compared to those of Xbox 360 and Wii. He believes these factors are pushing developers away from working on the console. In an interview with The Times, Kotick stated "I'm getting concerned about Sony; the PlayStation 3 is losing a bit of momentum and they don't make it easy for me to support the platform." 
He continued, "It's expensive to develop for the console, and the Wii and the Xbox are just selling better. Games generate a better return on invested capital (ROIC) on the Xbox than on the PlayStation." Kotick also claimed that Activision Blizzard may stop supporting the system if the situation is not addressed. "[Sony has] to cut the [PS3's retail] price, because if they don't, the attach rates are likely to slow. If we are being realistic, we might have to stop supporting Sony." Kotick received heavy criticism for the statement, notably from developer BioWare, who questioned the wisdom of the threatened move and referred to the statement as "silly." Despite the initial negative press, several websites have given the system very good reviews, mostly regarding its hardware. CNET United Kingdom praised the system, saying, "the PS3 is a versatile and impressive piece of home-entertainment equipment that lives up to the hype [...] the PS3 is well worth its hefty price tag." CNET awarded it a score of 8.8 out of 10 and voted it as its number one "must-have" gadget, praising its robust graphical capabilities and stylish exterior design while criticizing its limited selection of available games. In addition, both Home Theater Magazine and Ultimate AV have given the system's Blu-ray playback very favorable reviews, stating that the quality of playback exceeds that of many current standalone Blu-ray Disc players. In an interview, Kazuo Hirai, chairman of Sony Computer Entertainment, argued for the choice of a complex architecture. Hexus Gaming reviewed the PAL version and summed up the review by saying, "as the PlayStation 3 matures and developers start really pushing it, we'll see the PlayStation 3 emerge as the console of choice for gaming." At GDC 2007, Shiny Entertainment founder Dave Perry stated, "I think that Sony has made the best machine. It's the best piece of hardware, without question". 
Slim model and rebranding The PlayStation 3 Slim received extremely positive reviews as well as a boost in sales; less than 24 hours after its announcement, PS3 Slim took the number-one bestseller spot on Amazon.com in the video games section for fifteen consecutive days. It regained the number-one position again one day later. PS3 Slim also received praise from PC World, giving it a 90 out of 100, praising its new repackaging and the new value it brings at a lower price, as well as praising its quietness and the reduction in its power consumption. This is in stark contrast to the original PS3's launch, in which it was given the number-eight position on their "The Top 21 Tech Screwups of 2006" list. CNET awarded PS3 Slim four out of five stars, praising its Blu-ray capabilities, 120 GB hard drive, free online gaming service and more affordable price point, but complained about the lack of backward compatibility for PlayStation 2 games. TechRadar gave PS3 Slim four and a half stars out of five, praising its new smaller size, and summed up its review stating "Over all, the PS3 Slim is a phenomenal
Early models were also able to playback Super Audio CDs, however this support was dropped in the third generation revision of the console from late 2007. However, all models do retain Direct Stream Digital playback ability. ATRAC formatted tracks from Walkman digital audio players can be natively played on the PlayStation 3 by connecting the player to the system's USB port. The PlayStation 3 did not feature the Sony CONNECT Music Store. OtherOS support PlayStation 3 initially shipped with the ability to install an alternative operating system alongside the main system software; Linux and other Unix-based operating systems were available. The hardware allowed access to six of the seven Synergistic Processing Elements of the Cell microprocessor, but not the RSX 'Reality Synthesizer' graphics chip. The 'OtherOS' functionality was not present in the updated PS Slim models, and the feature was subsequently removed from previous versions of the PS3 as part of the machine's firmware update version 3.21 which was released on April 1, 2010; Sony cited security concerns as the rationale. The firmware update 3.21 was mandatory for access to the PlayStation Network. The removal caused some controversy; as the update removed officially advertised features from already sold products, and gave rise to several class action lawsuits aimed at making Sony return the feature or provide compensation. On December 8, 2011, U.S. District Judge Richard Seeborg dismissed the last remaining count of the class action lawsuit (other claims in the suit had previously been dismissed), stating: "As a legal matter, ... plaintiffs have failed to allege facts or articulate a theory on which Sony may be held liable." , the U.S. Court of Appeals for the Ninth Circuit partially reversed the dismissal and have sent the case back to the district court. 
Leap year bug On March 1, 2010 (UTC), many of the original "fat" PlayStation 3 models worldwide were experiencing errors related to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability to connect to the PlayStation Network. However, the root cause of the problem was unrelated to the PlayStation Network, since even users who had never been online also had problems playing installed offline games (which queried the system timer as part of startup) and using system themes. At the same time, many users noted that the console's clock had gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word apocalypse and PS3, the abbreviation for the PlayStation 3 console. The error code displayed was typically 8001050F and affected users were unable to sign in, play games, use dynamic themes and view/sync trophies. The problem only resided within the first- through third-generation original PS3 units while the newer "Slim" models were unaffected because of different internal hardware for the clock. Sony confirmed that there was an error and stated that it was narrowing down the issue and were continuing to work to restore service. By March 2 (UTC), 2010, owners of original PS3 models could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However, for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the internal clock) needed to be updated manually or by re-syncing it via the internet. On June 29, 2010, Sony released PS3 system software update 3.40, which improved the functionality of the internal clock to properly account for leap years. 
Features PlayStation Portable connectivity PlayStation Portable can connect with PlayStation 3 in many ways, including in-game connectivity. For example, Formula One Championship Edition, a racing game, was shown at E3 2006 using a PSP as a real-time rear-view mirror. In addition, users are able to download original PlayStation format games from the PlayStation Store, transfer and play them on PSP as well as PS3 itself. It is also possible to use the Remote Play feature to play these and some PlayStation Network games, remotely on PSP over a network or internet connection. Sony has also demonstrated PSP playing back video content from PlayStation 3 hard disk across an ad hoc wireless network. This feature is referred to as Remote Play located under the browser icon on both PlayStation 3 and PlayStation Portable. Remote play has since expanded to allow remote access to PS3 via PSP from any wireless access point in the world. PlayStation Network PlayStation Network is the unified online multiplayer gaming and digital media delivery service provided by Sony Computer Entertainment for PlayStation 3 and PlayStation Portable, announced during the 2006 PlayStation Business Briefing meeting in Tokyo. The service is always connected, free, and includes multiplayer support. The network enables online gaming, the PlayStation Store, PlayStation Home and other services. PlayStation Network uses real currency and PlayStation Network Cards as seen with the PlayStation Store and PlayStation Home. PlayStation Plus PlayStation Plus (commonly abbreviated PS+ and occasionally referred to as PSN Plus) is a premium PlayStation Network subscription service that was officially unveiled at E3 2010 by Jack Tretton, President and CEO of SCEA. Rumors of such service had been in speculation since Kaz Hirai's announcement at TGS 2009 of a possible paid service for PSN but with the current PSN service still available. 
Launched alongside PS3 firmware 3.40 and PSP firmware 6.30 on June 29, 2010, the paid-for subscription service provides users with enhanced services on the PlayStation Network, on top of the current PSN service which is still available with all of its features. These enhancements include the ability to have demos and game updates download automatically to PlayStation 3. Subscribers also get early or exclusive access to some betas, game demos, premium downloadable content and other PlayStation Store items. North American users also get a free subscription to Qore. Users may choose to purchase either a one-year or a three-month subscription to PlayStation Plus. PlayStation Store The PlayStation Store is an online virtual market available to users of Sony's PlayStation 3 (PS3) and PlayStation Portable (PSP) game consoles via the PlayStation Network. The Store offers a range of downloadable content both for purchase and available free of charge. Available content includes full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through an icon on the XMB on PS3 and PSP. The PS3 store can also be accessed on PSP via a Remote Play connection to PS3. The PSP store is also available via the PC application, Media Go. , there have been over 600 million downloads from the PlayStation Store worldwide. The PlayStation Store is updated with new content each Tuesday in North America, and each Wednesday in PAL regions. In May 2010 this was changed from Thursdays to allow PSP games to be released digitally, closer to the time they are released on UMD. On March 29, 2021, Sony announced that it was would shut down the PS3 version of the Store on July 2, though previous purchases on the store will remain downloadable. However, on April 19, following fan feedback, Sony reversed their decision and confirmed that the PS3 store would remain operational. 
What's New What's New was announced at Gamescom 2009 and was released on September 1, 2009, with PlayStation 3 system software 3.0. The feature replaced the existing [Information Board], which displayed news from the PlayStation website associated with the user's region. The concept was developed further into a major PlayStation Network feature, which interacts with the [Status Indicator] to display a ticker of all content, excluding recently played content (currently in North America and Japan only). The system displays the What's New screen by default instead of the [Game] menu (or [Video] menu, if a movie was inserted) when starting up. What's New has four sections: "Our Pick", "Recently Played", latest information and new content available in PlayStation Store. The screen displays and links to four kinds of content across these sections. "Recently Played" displays only the user's recently played games and online services, whereas the other sections can contain website links, links to play videos and access to selected sections of the PlayStation Store. The PlayStation Store icons in the [Game] and [Video] sections act similarly to the What's New screen, except that they only display and link to games and videos in the PlayStation Store, respectively. PlayStation Home PlayStation Home was a virtual 3D social networking service for the PlayStation Network. Home allowed users to create a custom avatar, which could be groomed realistically. Users could edit and decorate their personal apartments, avatars, or club houses with free, premium, or won content. Users could shop for new items or win prizes from PS3 games or Home activities. Users could interact and connect with friends and customize content in a virtual world. Home also acted as a meeting place for users who wanted to play multiplayer video games with others. A closed beta began in Europe in May 2007 and expanded to other territories soon after. 
Home was delayed and expanded several times before its initial release. The Open Beta test started on December 11, 2008. It remained a perpetual beta until its closure on March 31, 2015. Home was available directly from the PlayStation 3 XrossMediaBar. Membership was free but required a PSN account. Home featured places to meet and interact, dedicated game spaces, developer spaces, company spaces, and events. The service underwent weekly maintenance and received frequent updates. At the time of its closure in March 2015, Home had been downloaded by over 41 million users. Life with PlayStation Life with PlayStation, released on September 18, 2008 to succeed Folding@home, was retired November 6, 2012. Life with PlayStation used virtual globe data to display news and information by city. Along with Folding@home functionality, the application provided access to three other information "channels". The first was the Live Channel, offering news headlines and weather provided by Google News, The Weather Channel, and the University of Wisconsin–Madison Space Science and Engineering Center, among other sources. The second was the World Heritage channel, which offered historical information about World Heritage sites. The third was the United Village channel. United Village was designed to share information about communities and cultures worldwide. An update allowed video and photo viewing in the application. The fourth channel was the U.S.-exclusive PlayStation Network Game Trailers Channel for direct streaming of game trailers. Outage On April 20, 2011, Sony shut down the PlayStation Network and Qriocity for a prolonged interval, revealing on April 23 that this was due to "an external intrusion on our system". Sony later revealed that the personal information of 77 million users might have been taken, including: names; addresses; countries; email addresses; birthdates; PSN/Qriocity logins, passwords and handles/PSN online IDs. 
It also stated that it was possible that users' profile data, including purchase history and billing address, and PlayStation Network/Qriocity password security answers may have been obtained. There was no evidence that any credit card data had been taken, but the possibility could not be ruled out, and Sony advised customers that their credit card data may have been obtained. Additionally, the credit card numbers were encrypted, and Sony never collected the three-digit CVC or CSC number from the back of the cards, which is required for authenticating some transactions. In response to the incident, Sony announced a "Welcome Back" program, with 30 days of free PlayStation Plus membership for all PSN members, two free downloadable PS3 games, and a free one-year enrollment in an identity theft protection program. Sales and production costs Although its PlayStation predecessors had been very dominant against the competition and were hugely profitable for Sony, PlayStation 3 had an inauspicious start, and Sony chairman and CEO Sir Howard Stringer initially could not convince investors of a turnaround in its fortunes. The PS3 lacked the unique gameplay of the more affordable Wii, which became that generation's most successful console in terms of units sold. Furthermore, PS3 had to compete directly with Xbox 360, which had a market head start, and as a result the platform no longer had the exclusive titles that the PS2 had enjoyed, such as the Grand Theft Auto and Final Fantasy series (among cross-platform games, Xbox 360 versions were generally considered superior in 2006, although by 2008 the PS3 versions had reached parity with or surpassed them), and it took longer than expected for PS3 to enjoy strong sales and close the gap with Xbox 360. Sony also continued to lose money on each PS3 sold through 2010, although the redesigned "slim" PS3 cut these losses. 
PlayStation 3's initial production cost is estimated by iSuppli to have been US$805.85 for the 20 GB model and US$840.35 for the 60 GB model. However, the consoles were priced at US$499 and US$599 respectively, meaning that, if the cost estimates were correct, units were sold at an estimated loss of about $306 or $241 depending on the model; this may have contributed to Sony's games division posting an operating loss of ¥232.3 billion (US$1.97 billion) in the fiscal year ending March 2007. In April 2007, soon after these results were published, Ken Kutaragi, President of Sony Computer Entertainment, announced plans to retire. Various news agencies, including The Times and The Wall Street Journal, reported that this was due to poor sales, while SCEI maintained that Kutaragi had been planning his retirement for six months prior to the announcement. In January 2008, Kaz Hirai, CEO of Sony Computer Entertainment, suggested that the console might start making a profit by early 2009, stating that, "the next fiscal year starts in April and if we can try to achieve that in the next fiscal year that would be a great thing" and that "[profitability] is not a definite commitment, but that is what I would like to try to shoot for". Market analysts at Nikko Citigroup, however, predicted that PlayStation 3 could be profitable by August 2008. In a July 2008 interview, Hirai stated that his objective was for PlayStation 3 to sell 150 million units by its ninth year, surpassing PlayStation 2's sales of 140 million in its nine years on the market. In January 2009 Sony announced that its gaming division was profitable in Q3 2008. After the system's launch, production costs were reduced significantly as a result of phasing out the Emotion Engine chip and falling hardware costs. The cost of manufacturing Cell microprocessors had fallen dramatically as a result of moving to the 65 nm production process, and Blu-ray Disc diodes had become cheaper to manufacture. 
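The per-unit losses implied by these estimates can be checked directly; the following is a minimal arithmetic sketch (not part of any cited analysis) using only the iSuppli cost estimates and US launch prices given above:

```python
# Per-unit launch economics of the PS3, using the iSuppli cost estimates
# and US launch prices cited in the text. The article truncates the cents.
models = {
    "20 GB": {"est_cost": 805.85, "price": 499.00},
    "60 GB": {"est_cost": 840.35, "price": 599.00},
}

for name, m in models.items():
    loss = m["est_cost"] - m["price"]   # estimated loss per unit sold
    margin = loss / m["est_cost"]       # loss as a share of build cost
    print(f"{name}: loss ${loss:.2f} per unit ({margin:.0%} of cost)")
# → 20 GB: loss $306.85 per unit (38% of cost)
# → 60 GB: loss $241.35 per unit (29% of cost)
```

The $306.85 and $241.35 figures match the article's rounded losses of $306 and $241, and the roughly 70% cost reduction reported by August 2009 (to around $240 per unit) is consistent with the original ~$805 estimate, since 805.85 × 0.30 ≈ 242.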
As of January 2008, each unit cost around $400 to manufacture; by August 2009, Sony had reduced costs by a total of 70%, meaning it only cost Sony around $240 per unit. Critical reception Early PlayStation 3 reviews after launch were critical of its high price and lack of quality games. Game developers regarded the architecture as difficult to program for. PS3 was, however, commended for its hardware, including its Blu-ray home theater capabilities and graphics potential. Critical and commercial reception to PS3 improved over time, after a series of price revisions, Blu-ray's victory over HD DVD, and the release of several well-received titles. Ars Technica's original launch review gave PS3 only a 6/10, but a second review of the console in June 2008 rated it a 9/10. In September 2009, IGN named PlayStation 3 the 15th-best gaming console of all time, behind both of its competitors: Wii (10th) and Xbox 360 (6th). However, PS3 won IGN's "Console Showdown"—based on which console offers the best selection of games released during each year—in three of the four years after it began (2008, 2009 and 2011, with Xbox winning in 2010). IGN judged PlayStation 3 to have the best game line-up of 2008, based on their review scores in comparison to those of Wii and Xbox 360. In a comparison piece by PC Magazine's Will Greenwald in June 2012, PS3 was selected as an overall better console than Xbox 360. Pocket-Lint said of the console, "The PS3 has always been a brilliant games console," and that "For now, this is just about the best media device for the money." The original model PS3 was given the number-eight spot on PC World magazine's list of "The Top 21 Tech Screwups of 2006", where it was criticized for being "Late, Expensive and Incompatible". 
GamesRadar ranked PS3 as the top item in a feature on game-related PR disasters, asking how Sony managed to "take one of the most anticipated game systems of all time and—within the space of a year—turn it into a hate object reviled by the entire internet", but added that despite its problems the system had "untapped potential". Business Week summed up the general opinion by stating that it was "more impressed with what the PlayStation 3 could do than with what it currently does". Developers also found the machine difficult to program for. In 2007, Gabe Newell of Valve said "The PS3 is a total disaster on so many levels, I think it's really clear that Sony lost track of what customers and what developers wanted". He continued "I'd say, even at this late date, they should just cancel it and do a do over. Just say, 'This was a horrible disaster and we're sorry and we're going to stop selling this and stop trying to convince people to develop for it'". Doug Lombardi, Valve's VP of Marketing, has since stated that Valve is interested in developing for the console and is looking to hire talented PS3 programmers for future projects. He later restated Valve's position: "Until we have the ability to get a PS3 team together, until we find the people who want to come to Valve or who are at Valve who want to work on that, I don't really see us moving to that platform". At Sony's E3 2010 press conference, Newell made a live appearance to recant his previous statements, citing Sony's move to make the system more developer-friendly, and to announce that Valve would be developing Portal 2 for the system. He also claimed that the inclusion of Steamworks (Valve's system to automatically update their software independently) would help to make the PS3 version of Portal 2 the best console version on the market. Activision Blizzard CEO Bobby Kotick has 
individual and team awards for his performance on the field, his record-breaking achievements, and legacy in the sport. Early years Pelé was born Edson Arantes do Nascimento on 23 October 1940, in Três Corações, Minas Gerais, Brazil, the son of Fluminense footballer Dondinho (born João Ramos do Nascimento) and Celeste Arantes. He was the elder of two siblings. He was named after the American inventor Thomas Edison. His parents decided to remove the "i" and call him "Edson", but there was a mistake on the birth certificate, leading many documents to show his name as "Edison", not "Edson", as he is called. He was originally nicknamed "Dico" by his family. He received the nickname "Pelé" during his school days, when it is claimed he was given it because of his pronunciation of the name of his favourite player, local Vasco da Gama goalkeeper Bilé, which he misspoke; the more he complained, the more the nickname stuck. In his autobiography, Pelé stated he had no idea what the name means, nor did his old friends. Apart from the assertion that the name is derived from that of Bilé, and that it is Hebrew for "miracle" (פֶּ֫לֶא), the word has no known meaning in Portuguese. Pelé grew up in poverty in Bauru in the state of São Paulo. He earned extra money by working in tea shops as a servant. Taught to play by his father, he could not afford a proper football and usually played with either a sock stuffed with newspaper and tied with a string or a grapefruit. He played for several amateur teams in his youth, including Sete de Setembro, Canto do Rio, São Paulinho, and Amériquinha. Pelé led Bauru Athletic Club juniors (coached by Waldemar de Brito) to two São Paulo state youth championships. In his mid-teens, he played for an indoor football team called Radium. Indoor football had just become popular in Bauru when Pelé began playing it. He was part of the first futsal (indoor football) competition in the region. Pelé and his team won the first championship and several others. 
According to Pelé, futsal (indoor football) presented difficult challenges; he said it was a lot quicker than football on grass and that players were required to think faster because everyone is close to each other on the pitch. Pelé credits futsal with helping him think better on the spot. In addition, futsal allowed him to play with adults when he was about 14 years old. In one of the tournaments he participated in, he was initially considered too young to play, but eventually ended up as top scorer with 14 or 15 goals. "That gave me a lot of confidence", Pelé said, "I knew then not to be afraid of whatever might come". Club career Santos In 1956, de Brito took Pelé to Santos, an industrial and port city located near São Paulo, to try out for professional club Santos FC, telling the directors at Santos that the 15-year-old would be "the greatest football player in the world." Pelé impressed Santos coach Lula during his trial at the Estádio Vila Belmiro, and he signed a professional contract with the club in June 1956. Pelé was highly promoted in the local media as a future superstar. He made his senior team debut on 7 September 1956, at the age of 15, against Corinthians Santo André and had an impressive performance in a 7–1 victory, scoring the first goal of his prolific career during the match. When the 1957 season started, Pelé was given a starting place in the first team and, at the age of 16, became the top scorer in the league. Ten months after signing professionally, the teenager was called up to the Brazil national team. After the 1958 and the 1962 World Cups, wealthy European clubs such as Real Madrid, Juventus and Manchester United tried in vain to sign him; in 1958 Inter Milan even managed to get him a regular contract, but Angelo Moratti was forced to tear the contract up at the request of Santos' chairman following a revolt by Santos' Brazilian fans. 
In 1961 the government of Brazil under President Jânio Quadros declared Pelé an "official national treasure" to prevent him from being transferred out of the country. Pelé won his first major title with Santos in 1958 as the team won the Campeonato Paulista; Pelé finished the tournament as top scorer with 58 goals, a record that still stands today. A year later, he helped the team earn their first victory in the Torneio Rio-São Paulo with a 3–0 win over Vasco da Gama. However, Santos was unable to retain the Paulista title. In 1960, Pelé scored 33 goals to help his team regain the Campeonato Paulista trophy but lost out on the Rio-São Paulo tournament after finishing in 8th place. In the 1961 season, Pelé scored 47 goals and helped Santos retain the Campeonato Paulista. The club went on to win the Taça Brasil that same year, beating Bahia in the finals; Pelé finished as top scorer of the tournament with 9 goals. The victory allowed Santos to participate in the Copa Libertadores, the most prestigious club tournament in the Western Hemisphere. Santos's most successful Copa Libertadores season started in 1962; the team was seeded in Group One alongside Cerro Porteño and Deportivo Municipal Bolivia, winning every match of their group but one (a 1–1 away tie versus Cerro). Santos defeated Universidad Católica in the semi-finals and met defending champions Peñarol in the finals. Pelé scored twice in the playoff match to secure the first title for a Brazilian club. He finished as the second top scorer of the competition with four goals. That same year, Santos successfully defended the Campeonato Paulista (with 37 goals from Pelé) and the Taça Brasil (Pelé scoring four goals in the final series against Botafogo). Santos also won the 1962 Intercontinental Cup against Benfica. Wearing his number 10 shirt, Pelé produced one of the best performances of his career, scoring a hat-trick in Lisbon as Santos won 5–2. 
As the defending champions, Santos qualified automatically for the semi-final stage of the 1963 Copa Libertadores. The balé branco (white ballet), as Santos were nicknamed, managed to retain the title after victories over Botafogo and Boca Juniors. Pelé helped Santos overcome a Botafogo team that featured Brazilian greats such as Garrincha and Jairzinho with a last-minute goal in the first leg of the semi-finals which made it 1–1. In the second leg, Pelé scored a hat-trick in the Estádio do Maracanã as Santos won 0–4. Santos started the final series by winning 3–2 in the first leg and then defeated Boca Juniors 1–2 in La Bombonera, a rare feat in official competitions, with another goal from Pelé. Santos became the first (and to date the only) Brazilian team to lift the Copa Libertadores on Argentine soil. Pelé finished the tournament with 5 goals. Santos lost the Campeonato Paulista after finishing in third place but went on to win the Rio-São Paulo tournament after a 0–3 win over Flamengo in the final, with Pelé scoring one goal. Pelé also helped Santos retain the Intercontinental Cup and the Taça Brasil, against AC Milan and Bahia respectively. In the 1964 Copa Libertadores, Santos were beaten in both legs of the semi-finals by Independiente. The club won the Campeonato Paulista, with Pelé netting 34 goals. Santos also shared the Rio-São Paulo title with Botafogo and won the Taça Brasil for the fourth consecutive year. In the 1965 Copa Libertadores, Santos reached the semi-finals and met Peñarol in a rematch of the 1962 final. After two matches, a playoff was needed to break the tie. Unlike in 1962, Peñarol came out on top and eliminated Santos 2–1. Pelé would, however, finish as the top scorer of the tournament with eight goals. This proved to be the start of a decline, as Santos failed to retain the Torneio Rio-São Paulo. 
In 1966, Pelé and Santos also failed to retain the Taça Brasil, as Pelé's goals were not enough to prevent a 9–4 defeat by Cruzeiro (led by Tostão) in the final series. The club did, however, win the Campeonato Paulista in 1967, 1968 and 1969. On 19 November 1969, Pelé scored his 1,000th goal in all competitions, in what was a highly anticipated moment in Brazil. The goal, dubbed O Milésimo (The Thousandth), occurred in a match against Vasco da Gama, when Pelé scored from a penalty kick at the Maracanã Stadium. Pelé stated that his most memorable goal was scored at the Rua Javari stadium in a Campeonato Paulista match against São Paulo rival Clube Atlético Juventus on 2 August 1959. As there is no video footage of this match, Pelé asked that a computer animation be made of this specific goal. In March 1961, Pelé scored the gol de placa (goal worthy of a plaque) against Fluminense at the Maracanã. Pelé received the ball on the edge of his own penalty area and ran the length of the field, eluding opposition players with feints, before striking the ball beyond the goalkeeper. A plaque was commissioned with a dedication to "the most beautiful goal in the history of the Maracanã". In 1969, the two factions involved in the Nigerian Civil War agreed to a 48-hour ceasefire so they could watch Pelé play an exhibition game in Lagos. Santos ended up playing to a 2–2 draw with Lagos side Stationery Stores FC, and Pelé scored his team's goals. The civil war went on for one more year after this game. During his time at Santos, Pelé played alongside many gifted players, including Zito, Pepe, and Coutinho; the latter partnered him in numerous one-two plays, attacks, and goals. Pelé's 643 goals for Santos were the most goals scored for a single club until the record was surpassed by Lionel Messi of Barcelona in December 2020. 
New York Cosmos After the 1974 season (his 19th with Santos), Pelé retired from Brazilian club football, although he continued to occasionally play for Santos in official competitive matches. Two years later, he came out of semi-retirement to sign with the New York Cosmos of the North American Soccer League (NASL) for the 1975 season. At a chaotic press conference at New York's 21 Club, the Cosmos unveiled Pelé. John O'Reilly, the club's media spokesman, stated, "We had superstars in the United States but nothing at the level of Pelé. Everyone wanted to touch him, shake his hand, get a photo with him." Though well past his prime at this point, Pelé was credited with significantly increasing public awareness of and interest in the sport in the US. During his first public appearance in Boston, he was injured by a crowd of fans who had surrounded him and was evacuated on a stretcher. Pelé made his debut for the Cosmos on 15 June 1975 against the Dallas Tornado at Downing Stadium, scoring one goal in a 2–2 draw. Pelé opened the door for many other stars to play in North America. Giorgio Chinaglia followed him to the Cosmos, then Franz Beckenbauer and his former Santos teammate Carlos Alberto. Over the next few years other players came to the league, including Johan Cruyff, Eusébio, Bobby Moore, George Best and Gordon Banks. In 1975, one week before the start of the Lebanese Civil War, Pelé played a friendly game for the Lebanese club Nejmeh against a team of Lebanese Premier League stars, scoring two goals which were not included in his official tally. On the day of the game, 40,000 spectators were at the stadium from early morning to watch the match. He led the Cosmos to the 1977 NASL championship, in his third and final season with the club. In June 1977, the Cosmos attracted an NASL record 62,394 fans to Giants Stadium for a 3–0 victory over the Tampa Bay Rowdies, with a 37-year-old Pelé scoring a hat-trick. 
In the first leg of the quarter-finals, they attracted a US record crowd of 77,891 for what turned into an 8–3 rout of the Fort Lauderdale Strikers at Giants Stadium. In the second leg of the semi-finals against the Rochester Lancers, the Cosmos won 4–1. Pelé finished his official playing career by leading the New York Cosmos to their first Soccer Bowl title with a 2–1 win over the Seattle Sounders at the Civic Stadium in Portland, Oregon. On 1 October 1977, Pelé closed out his career in an exhibition match between the Cosmos and Santos. The match was played in front of a sold-out crowd at Giants Stadium and was televised in the US on ABC's Wide World of Sports as well as throughout the world. Pelé's father and wife both attended the match, as well as Muhammad Ali and Bobby Moore. Delivering a message to the audience prior to the start of the game—"Love is more important than what we can take in life"—Pelé played the first half with the Cosmos, the second with Santos. The game ended with the Cosmos winning 2–1, with Pelé scoring with a 30-yard free-kick for the Cosmos in what was the final goal of his career. During the second half it started to rain, prompting a Brazilian newspaper to come out with the headline the following day: "Even The Sky Was Crying." International career Pelé's first international match was a 2–1 defeat against Argentina on 7 July 1957 at the Maracanã. In that match, he scored his first goal for Brazil aged 16 years and nine months, and he remains the youngest goalscorer for his country. 1958 World Cup Pelé arrived in Sweden sidelined by a knee injury but on his return from the treatment room, his colleagues stood together and insisted upon his selection. His first match was against the USSR in the third match of the first round of the 1958 FIFA World Cup, where he gave the assist to Vavá's second goal. He was at the time the youngest player ever to participate in the World Cup. 
Against France in the semi-final, Brazil was leading 2–1 at halftime, and then Pelé scored a hat-trick, becoming the youngest player in World Cup history to do so. On 29 June 1958, Pelé became the youngest player to play in a World Cup final match, at 17 years and 249 days. He scored two goals in that final as Brazil beat Sweden 5–2 in Stockholm, the capital. Pelé hit the post and then Vavá scored two goals to give Brazil the lead. Pelé's first goal, where he flicked the ball over a defender before volleying into the corner of the net, was selected as one of the best goals in the history of the World Cup. Following Pelé's second goal, Swedish player Sigvard Parling later commented: "When Pelé scored the fifth goal in that Final, I have to be honest and say I felt like applauding". When the match ended, Pelé passed out on the field and was revived by Garrincha. He then recovered and, overcome by the victory, wept as he was congratulated by his teammates. He finished the tournament with six goals in four matches played, tied for second place behind record-breaker Just Fontaine, and was named best young player of the tournament. His impact was arguably greater off the field, with Barney Ronay writing, "With nothing but talent to guide him, the boy from Minas Gerais became the first black global sporting superstar, and a source of genuine uplift and inspiration." It was in the 1958 World Cup that Pelé began wearing a jersey with number 10. The event was the result of disorganization: the leaders of the Brazilian Federation did not allocate the shirt numbers of players, and it was left to FIFA to choose the number 10 shirt for Pelé, who was a substitute on the occasion. The press proclaimed Pelé the greatest revelation of the 1958 World Cup, and he was also retroactively given the Silver Ball as the second best player of the tournament, behind Didi. South American Championship Pelé also played in the South American Championship. 
In the 1959 competition, he was named best player of the tournament and was top scorer with 8 goals, as Brazil came second despite being unbeaten in the tournament. He scored in five of Brazil's six games, including two goals against Chile and a hat-trick against Paraguay. 1962 World Cup When the 1962 World Cup started, Pelé was the best rated player in the world. In the first match of the 1962 World Cup in Chile, against Mexico, Pelé assisted the first goal and then scored the second one, after a run past four defenders, to go up 2–0. He injured himself in the next game while attempting a long-range shot against Czechoslovakia. This kept him out of the rest of the tournament and forced coach Aymoré Moreira to make his only lineup change of the tournament. The substitute was Amarildo, who performed well for the rest of the tournament. However, it was Garrincha who took the leading role and carried Brazil to their second World Cup title, after beating Czechoslovakia in the final in Santiago. 1966 World Cup Pelé was the most famous footballer in the world during the 1966 World Cup in England, and Brazil fielded world champions like Garrincha, Gilmar and Djalma Santos with the addition of other stars like Jairzinho, Tostão and Gérson, leading to high expectations for the team. However, Brazil was eliminated in the first round, playing only three matches. The World Cup was marked, among other things, by the brutal fouls of the Bulgarian and Portuguese defenders that left Pelé injured. Pelé scored the first goal from a free kick against Bulgaria, becoming the first player to score in three successive FIFA World Cups, but due to his injury, a result of persistent fouling by the Bulgarians, he missed the second game, against Hungary. His coach stated that after the first game he felt "every team will take care of him in the same manner". 
Brazil lost that game, and Pelé, although still recovering, was brought back for the last crucial match against Portugal at Goodison Park in Liverpool by the Brazilian coach Vicente Feola. Feola changed the entire defense, including the goalkeeper, while in midfield he returned to the formation of the first match. During the game, Portugal defender João Morais fouled Pelé but was not sent off by referee George McCabe, a decision retrospectively viewed as being among the worst refereeing errors in World Cup history. Pelé had to stay on the field, limping, for the rest of the game, since substitutes were not allowed at that time. After this game he vowed he would never again play in the World Cup, a decision he would later change. 1970 World Cup Pelé was called up to the national team in early 1969; he refused at first, but then accepted and played in six World Cup qualifying matches, scoring six goals. The 1970 World Cup in Mexico was expected to be Pelé's last. Brazil's squad for the tournament featured major changes in relation to the 1966 squad. Players like Garrincha, Nilton Santos, Valdir Pereira, Djalma Santos and Gilmar had already retired. However, Brazil's 1970 World Cup squad, which included players such as Pelé, Rivelino, Jairzinho, Gérson, Carlos Alberto Torres, Tostão and Clodoaldo, is often considered to be the greatest football team in history. The front five of Jairzinho, Pelé, Gérson, Tostão and Rivelino together created an attacking momentum, with Pelé playing a central role on Brazil's way to the final. All of Brazil's matches in the tournament (except the final) were played in Guadalajara, and in the first match, against Czechoslovakia, Pelé gave Brazil a 2–1 lead by controlling Gérson's long pass with his chest and then scoring. In this match Pelé attempted to lob goalkeeper Ivo Viktor from the half-way line, only narrowly missing the Czechoslovak goal. Brazil went on to win the match 4–1. 
In the first half of the match against England, Pelé nearly scored with a header that was saved by the England goalkeeper Gordon Banks. Pelé recalled that he was already shouting "Goal" when he headed the ball. The stop is often referred to as the "save of the century". In the second half, he controlled a cross from Tostão before flicking the ball to Jairzinho, who scored the only goal. Against Romania, Pelé scored two goals, including a 20-yard bending free-kick, with Brazil winning 3–2. In the quarter-final against Peru, Brazil won 4–2, with Pelé assisting Tostão for Brazil's third goal. In the semi-final, Brazil faced Uruguay for the first time since the 1950 World Cup final round match. Jairzinho put Brazil ahead 2–1, and Pelé assisted Rivelino to make it 3–1. During that match, Pelé made one of his most famous plays. Tostão passed the ball for Pelé to collect, which Uruguay's goalkeeper Ladislao Mazurkiewicz noticed, running off his line to get the ball before Pelé. However, Pelé got there first and fooled Mazurkiewicz with a feint, by not touching the ball, causing it to roll to the goalkeeper's left while Pelé went to the goalkeeper's right. Pelé ran around the goalkeeper to retrieve the ball and took a shot while turning towards the goal, but he turned too far as he shot, and the ball drifted just wide of the far post. Brazil played Italy in the final at the Azteca Stadium in Mexico City. Pelé scored the opening goal, Brazil's 100th World Cup goal, with a header after outjumping Italian defender Tarcisio Burgnich. Pelé's leap of joy into the arms of teammate Jairzinho in celebration of the goal is regarded as one of the most iconic moments in World Cup history. He then provided assists for Brazil's third goal, scored by Jairzinho, and the fourth, finished by Carlos Alberto. The last goal of the game is often considered the greatest team goal of all time, because it involved all but two of the team's outfield players. 
The play culminated with Pelé making a blind pass into Carlos Alberto's running trajectory; Carlos Alberto came running from behind and struck the ball to score. Brazil won the match 4–1, keeping the Jules Rimet Trophy indefinitely, and Pelé received the Golden Ball as player of the tournament. Burgnich, who marked Pelé during the final, was quoted as saying "I told myself before the game, he's made of skin and bones just like everyone else – but I was wrong". Through his goals and assists, Pelé was directly responsible for 53% of Brazil's goals in the tournament. Pelé's last international match was on 18 July 1971 against Yugoslavia in Rio de Janeiro. With Pelé on the field, the Brazilian team's record was 67 wins, 14 draws and 11 losses. Brazil never lost a match while fielding both Pelé and Garrincha. Style of play Pelé has also been known for connecting the phrase "The Beautiful Game" with football. A prolific goalscorer, he was known for his ability to anticipate opponents in the area and finish off chances with an accurate and powerful shot with either foot. Pelé was also a hard-working team player, and a complete forward, with exceptional vision and intelligence, who was recognised for his precise passing and ability to link up with teammates and provide them with assists. In his early career, he played in a variety of attacking positions. Although he usually operated inside the penalty area as a main striker or centre forward, his wide range of skills also allowed him to play in a more withdrawn role, as an inside forward or second striker, or out wide. In his later career, he took on a deeper playmaking role behind the strikers, often functioning as an attacking midfielder. Pelé's unique playing style combined speed, creativity, and technical skill with physical power, stamina, and athleticism. 
His excellent technique, balance, flair, agility, and dribbling skills enabled him to beat opponents with the ball, and frequently saw him use sudden changes of direction and elaborate feints to get past players, such as his trademark move, the drible da vaca. Another of his signature moves was the paradinha, or little stop. Despite his relatively small stature, he excelled in the air.

regular contract, but Angelo Moratti was forced to tear the contract up at the request of Santos' chairman following a revolt by Santos' Brazilian fans. In 1961 the government of Brazil under President Jânio Quadros declared Pelé an "official national treasure" to prevent him from being transferred out of the country. Pelé won his first major title with Santos in 1958 as the team won the Campeonato Paulista; Pelé would finish the tournament as top scorer with 58 goals, a record that stands today. A year later, he would help the team earn their first victory in the Torneio Rio-São Paulo with a 3–0 win over Vasco da Gama. However, Santos was unable to retain the Paulista title. In 1960, Pelé scored 33 goals to help his team regain the Campeonato Paulista trophy but lost out on the Rio-São Paulo tournament after finishing in 8th place. In the 1961 season, Pelé scored 47 goals and helped Santos retain the Campeonato Paulista. The club went on to win the Taça Brasil that same year, beating Bahia in the finals; Pelé finished as top scorer of the tournament with 9 goals. The victory allowed Santos to participate in the Copa Libertadores, the most prestigious club tournament in the Western hemisphere. Santos's most successful Copa Libertadores season started in 1962; the team was seeded in Group One alongside Cerro Porteño and Deportivo Municipal Bolivia, winning every match of their group but one (a 1–1 away tie versus Cerro). Santos defeated Universidad Católica in the semi-finals and met defending champions Peñarol in the finals. 
Pelé scored twice in the playoff match to secure the first title for a Brazilian club. Pelé finished as the second top scorer of the competition with four goals. That same year, Santos would successfully defend the Campeonato Paulista (with 37 goals from Pelé) and the Taça Brasil (Pelé scoring four goals in the final series against Botafogo). Santos would also win the 1962 Intercontinental Cup against Benfica. Wearing his number 10 shirt, Pelé produced one of the best performances of his career, scoring a hat-trick in Lisbon as Santos won 5–2. As the defending champions, Santos qualified automatically for the semi-final stage of the 1963 Copa Libertadores. The ballet blanco, as Santos were nicknamed, managed to retain the title after victories over Botafogo and Boca Juniors. Pelé helped Santos overcome a Botafogo team that featured Brazilian greats such as Garrincha and Jairzinho with a last-minute goal in the first leg of the semi-finals which made it 1–1. Pelé then scored a hat-trick in the Estádio do Maracanã as Santos won the second leg 0–4. Santos started the final series by winning the first leg 3–2 and then defeated Boca Juniors 1–2 in La Bombonera, a rare feat in official competitions, with another goal from Pelé. Santos became the first (and to date the only) Brazilian team to lift the Copa Libertadores on Argentine soil. Pelé finished the tournament with 5 goals. Santos lost the Campeonato Paulista after finishing in third place but went on to win the Rio-São Paulo tournament after a 0–3 win over Flamengo in the final, with Pelé scoring one goal. Pelé would also help Santos retain the Intercontinental Cup and the Taça Brasil against AC Milan and Bahia respectively. In the 1964 Copa Libertadores, Santos were beaten in both legs of the semi-finals by Independiente. The club won the Campeonato Paulista, with Pelé netting 34 goals. 
Santos also shared the Rio-São Paulo title with Botafogo and won the Taça Brasil for the fourth consecutive year. In the 1965 Copa Libertadores, Santos reached the semi-finals and met Peñarol in a rematch of the 1962 final. After two matches, a playoff was needed to break the tie. Unlike 1962, Peñarol came out on top and eliminated Santos 2–1. Pelé would, however, finish as the top scorer of the tournament with eight goals. This proved to be the start of a decline, as Santos failed to retain the Torneio Rio-São Paulo. In 1966, Pelé and Santos also failed to retain the Taça Brasil, as Pelé's goals were not enough to prevent a 9–4 defeat by Cruzeiro (led by Tostão) in the final series. The club did, however, win the Campeonato Paulista in 1967, 1968 and 1969. On 19 November 1969, Pelé scored his 1,000th goal in all competitions, in what was a highly anticipated moment in Brazil. The goal, dubbed O Milésimo (The Thousandth), came in a match against Vasco da Gama, when Pelé scored from a penalty kick at the Maracanã Stadium. Pelé stated that his most memorable goal was scored at the Rua Javari stadium in a Campeonato Paulista match against São Paulo rival Clube Atlético Juventus on 2 August 1959. As there is no video footage of this match, Pelé asked that a computer animation be made of this specific goal. In March 1961, Pelé scored the gol de placa (goal worthy of a plaque) against Fluminense at the Maracanã. Pelé received the ball on the edge of his own penalty area and ran the length of the field, eluding opposition players with feints, before striking the ball beyond the goalkeeper. A plaque was commissioned with a dedication to "the most beautiful goal in the history of the Maracanã". In 1969, the two factions involved in the Nigerian Civil War agreed to a 48-hour ceasefire so they could watch Pelé play an exhibition game in Lagos. Santos ended up playing to a 2–2 draw with Lagos side Stationary Stores FC, and Pelé scored both of his team's goals. 
The civil war went on for one more year after this game. During his time at Santos, Pelé played alongside many gifted players, including Zito, Pepe, and Coutinho; the latter partnered him in numerous one-two plays, attacks, and goals. Pelé's 643 goals for Santos remained the most scored for a single club until the record was surpassed by Lionel Messi of Barcelona in December 2020. New York Cosmos After the 1974 season (his 19th with Santos), Pelé retired from Brazilian club football, although he continued to play occasionally for Santos in official competitive matches. Two years later, he came out of semi-retirement to sign with the New York Cosmos of the North American Soccer League (NASL) for the 1975 season. At a chaotic press conference at New York's 21 Club, the Cosmos unveiled Pelé. John O'Reilly, the club's media spokesman, stated, "We had superstars in the United States but nothing at the level of Pelé. Everyone wanted to touch him, shake his hand, get a photo with him." Though well past his prime at this point, Pelé was credited with significantly increasing public awareness of and interest in the sport in the US. During his first public appearance in Boston, he was injured by a crowd of fans who had surrounded him and was evacuated on a stretcher. Pelé made his debut for the Cosmos on 15 June 1975 against the Dallas Tornado at Downing Stadium, scoring one goal in a 2–2 draw. Pelé opened the door for many other stars to play in North America. Giorgio Chinaglia followed him to the Cosmos, then Franz Beckenbauer and his former Santos teammate Carlos Alberto. Over the next few years other players came to the league, including Johan Cruyff, Eusebio, Bobby Moore, George Best and Gordon Banks. In 1975, one week before the Lebanese Civil War, Pelé played a friendly game for the Lebanese club Nejmeh against a team of Lebanese Premier League stars, scoring two goals which were not included in his official tally. 
On the day of the game, 40,000 spectators were at the stadium from early morning to watch the match. He led the Cosmos to the 1977 NASL championship, in his third and final season with the club. In June 1977, the Cosmos attracted an NASL record 62,394 fans to Giants Stadium for a 3–0 victory over the Tampa Bay Rowdies, with a 37-year-old Pelé scoring a hat-trick. In the first leg of the quarter-finals, they attracted a US record crowd of 77,891 for what turned into an 8–3 rout of the Fort Lauderdale Strikers at Giants Stadium. In the second leg of the semi-finals against the Rochester Lancers, the Cosmos won 4–1. Pelé finished his official playing career by leading the New York Cosmos to their first Soccer Bowl title with a 2–1 win over the Seattle Sounders at the Civic Stadium in Portland, Oregon. On 1 October 1977, Pelé closed out his career in an exhibition match between the Cosmos and Santos. The match was played in front of a sold-out crowd at Giants Stadium and was televised in the US on ABC's Wide World of Sports as well as throughout the world. Pelé's father and wife both attended the match, as did Muhammad Ali and Bobby Moore. Prior to the start of the game, Pelé delivered a message to the audience: "Love is more important than what we can take in life". He played the first half with the Cosmos and the second with Santos. The game ended with the Cosmos winning 2–1, Pelé scoring from a 30-yard free-kick in what was the final goal of his career. During the second half it started to rain, prompting a Brazilian newspaper to run the headline the following day: "Even The Sky Was Crying." International career Pelé's first international match was a 2–1 defeat against Argentina on 7 July 1957 at the Maracanã. In that match, he scored his first goal for Brazil aged 16 years and nine months, and he remains the youngest goalscorer for his country. 
1958 World Cup Pelé arrived in Sweden sidelined by a knee injury, but on his return from the treatment room, his colleagues stood together and insisted upon his selection. His first match was against the USSR in the third match of the first round of the 1958 FIFA World Cup, where he provided the assist for Vavá's second goal. He was at the time the youngest player ever to participate in the World Cup. Against France in the semi-final, Brazil was leading 2–1 at halftime, and then Pelé scored a hat-trick, becoming the youngest player in World Cup history to do so. On 29 June 1958, Pelé became the youngest player to play in a World Cup final match, at 17 years and 249 days. He scored two goals in that final as Brazil beat Sweden 5–2 in Stockholm. Earlier in the match, Pelé had hit the post, and Vavá had scored two goals to give Brazil the lead. Pelé's first goal, in which he flicked the ball over a defender before volleying into the corner of the net, was selected as one of the best goals in the history of the World Cup. Following Pelé's second goal, Swedish player Sigvard Parling would later comment: "When Pelé scored the fifth goal in that Final, I have to be honest and say I felt like applauding". When the match ended, Pelé passed out on the field and was revived by Garrincha. He then recovered and, moved by the victory, wept as he was congratulated by his teammates. He finished the tournament with six goals in four matches played, tied for second place behind record-breaker Just Fontaine, and was named best young player of the tournament. His impact was arguably greater off the field, with Barney Ronay writing, "With nothing but talent to guide him, the boy from Minas Gerais became the first black global sporting superstar, and a source of genuine uplift and inspiration." It was in the 1958 World Cup that Pelé began wearing a jersey with number 10. 
The event was the result of disorganization: the leaders of the Brazilian Federation did not allocate shirt numbers to players, and it fell to FIFA to assign the number 10 shirt to Pelé, who was a substitute on the occasion. The press proclaimed Pelé the greatest revelation of the 1958 World Cup, and he was also retroactively given the Silver Ball as the second-best player of the tournament, behind Didi. South American Championship Pelé also played in the South American Championship. In the 1959 competition he was named best player of the tournament and was top scorer with 8 goals, as Brazil came second despite being unbeaten in the tournament. He scored in five of Brazil's six games, including two goals against Chile and a hat-trick against Paraguay. 1962 World Cup When the 1962 World Cup started, Pelé was the best-rated player in the world. In the first match of the 1962 World Cup in Chile, against Mexico, Pelé assisted the first goal and then scored the second one, after a run past four defenders, to go up 2–0. He injured himself in the next game while attempting a long-range shot against Czechoslovakia. The injury would keep him out of the rest of the tournament and forced coach Aymoré Moreira to make his only lineup change of the tournament. The substitute was Amarildo, who performed well for the rest of the tournament. However, it was Garrincha who would take the leading role and carry Brazil to their second World Cup title, after beating Czechoslovakia in the final in Santiago. 1966 World Cup Pelé was the most famous footballer in the world during the 1966 World Cup in England, and Brazil fielded world champions like Garrincha, Gilmar and Djalma Santos alongside other stars like Jairzinho, Tostão and Gérson, leading to high expectations. Brazil was eliminated in the first round, playing only three matches. 
The World Cup was marked, among other things, by brutal fouls on Pelé that left him injured by the Bulgarian and Portuguese defenders. Pelé scored the first goal from a free kick against Bulgaria, becoming the first player to score in three successive FIFA World Cups, but due to his injury, a result of persistent fouling by the Bulgarians, he missed the second game against Hungary. His coach stated that after the first game he felt "every team will take care of him in the same manner". 
On his farewell, he said: "I bless you, Father, for judging me worthy of this hour, so that in the company of the martyrs I may share the cup of Christ." The date of Polycarp's death is in dispute. Eusebius dates it to the reign of Marcus Aurelius, c. 166–167. However, a post-Eusebian addition to the Martyrdom of Polycarp dates his death to Saturday, February 23, in the proconsulship of Lucius Statius Quadratus, c. 155 or 156. These earlier dates better fit the tradition of his association with Ignatius and John the Evangelist. Great Sabbath The Martyrdom of Polycarp states that Polycarp was taken on the Sabbath and killed on "the Great Sabbath". English patristic scholar William Cave (1637–1713) believed that this was evidence that the Smyrnaeans under Polycarp observed the seventh-day Sabbath, i.e. assembled on Saturdays. J. B. Lightfoot records a common interpretation of the expression "the Great Sabbath" as referring to Pesach or another Jewish festival. This is contradicted by the standard Jewish calendar, under which Nisan 14, the date of the Pesach, can fall no earlier than late March and hence at least a month after the traditional date of Polycarp's death, February 23. Hence, Lightfoot understood the expression as a reference to the Purim festival, celebrated a month before Pesach, while other scholars suggest that the Jewish calendar had not yet been standardized at the time, and that on this day Jews and Christians celebrated Pesach and a (Quartodeciman) Christian Passover, respectively. Importance Polycarp occupies an important place in the history of the early Christian Church. He is among the earliest Christians whose writings survive. Jerome wrote that Polycarp was a "disciple of the apostle John and by him ordained presbyter of Smyrna". He was an elder of an important congregation that was a large contributor to the founding of the Christian Church. 
He is from an era whose orthodoxy is widely accepted by Eastern Orthodox Churches, Oriental Orthodox Churches, Church of God groups, Sabbatarian groups, mainstream Protestants and Catholics alike. According to Eusebius, Polycrates of Ephesus cited the example of Polycarp in defense of local practices during the quartodeciman controversy. Irenaeus, who as a young man had heard Polycarp preach, described him as "a man who was of much greater weight, and a more steadfast witness of truth, than Valentinus, and Marcion, and the rest of the heretics". Polycarp lived in an age after the deaths of the apostles, when a variety of interpretations of the sayings of Jesus were being preached. His role was to authenticate orthodox teachings through his connection with the apostle John: "a high value was attached to the witness Polycarp could give as to the genuine tradition of old apostolic doctrine" "his testimony condemning as offensive novelties the figments of the heretical teachers". Irenaeus states (iii. 3) that on Polycarp's visit to Rome, his testimony converted many disciples of Marcion and Valentinus. Polycarp is remembered in the Church of England with a Lesser Festival on 23 February. Relics In the church Sant' Ambrogio 
the outlawing of war, did not imply a renunciation of force under all circumstances, but rather support for the ill-defined concept of 'collective security' under the League of Nations. At the same time, on the party's left, Stafford Cripps's small but vocal Socialist League opposed the official policy, on the non-pacifist ground that the League of Nations was 'nothing but the tool of the satiated imperialist powers'." Lansbury was eventually persuaded to resign as Labour leader by the non-pacifist wing of the party and was replaced by Clement Attlee. As the threat from Nazi Germany increased in the 1930s, the Labour Party abandoned its pacifist position and supported rearmament, largely as the result of the efforts of Ernest Bevin and Hugh Dalton, who by 1937 had also persuaded the party to oppose Neville Chamberlain's policy of appeasement. The League of Nations attempted to play its role in ensuring world peace in the 1920s and 1930s. However, with the increasingly revisionist and aggressive behaviour of Nazi Germany, Fascist Italy and Imperial Japan, it ultimately failed to maintain such a world order. Economic sanctions were used against states that committed aggression, such as those against Italy when it invaded Abyssinia, but there was no will on the part of the principal League powers, Britain and France, to subordinate their interests to a multilateral process or to disarm at all themselves. Spain The Spanish Civil War proved a major test for international pacifism, and the work of pacifist organisations (such as War Resisters' International and the Fellowship of Reconciliation) and individuals (such as José Brocca and Amparo Poch) in that arena has until recently been ignored or forgotten by historians, overshadowed by the memory of the International Brigades and other militaristic interventions. 
Shortly after the war ended, Simone Weil, despite having volunteered for service on the republican side, went on to publish The Iliad or the Poem of Force, a work that has been described as a pacifist manifesto. In response to the threat of fascism, some pacifist thinkers, such as Richard B. Gregg, devised plans for a campaign of nonviolent resistance in the event of a fascist invasion or takeover. France As the prospect of a second major war began to seem increasingly inevitable, much of France adopted pacifist views, though some historians argue that France felt more war anxiety than a moral objection to a second war. Hitler's spreading influence and expanding territory across the border posed an enormous threat to French livelihood. The French countryside had been devastated during World War I and the entire nation was reluctant to subject its territory to the same treatment. Though all countries in the First World War had suffered great losses, France was one of the most devastated and many did not want a second war. Germany As Germany dealt with the burdens of the Treaty of Versailles, a conflict arose in the 1930s between German Christianity and German nationalism. Many Germans found the terms of the treaty debilitating and humiliating, so German nationalism offered a way to regain the country's pride. German Christianity warned against the risks of entering a war similar to the previous one. As the German depression worsened and fascism began to rise in Germany, a growing number of Germans began to sway toward Hitler's brand of nationalism, which would come to crush pacifism. World War II With the start of World War II, pacifist and antiwar sentiment declined in nations affected by the war. Even the communist-controlled American Peace Mobilization reversed its antiwar activism once Germany invaded the Soviet Union in 1941. 
After the Japanese attack on Pearl Harbor, the non-interventionist America First Committee dropped its opposition to American involvement in the war and disbanded, but many smaller religious and socialist groups continued their opposition to war. Great Britain Bertrand Russell argued that the necessity of defeating Adolf Hitler and the Nazis was a unique circumstance in which war was not the worst of the possible evils; he called his position relative pacifism. Shortly before the outbreak of war, British writers such as E. M. Forster, Leonard Woolf, David Garnett and Storm Jameson all rejected their earlier pacifism and endorsed military action against Nazism. Similarly, Albert Einstein wrote: "I loathe all armies and any kind of violence; yet I'm firmly convinced that at present these hateful weapons offer the only effective protection." The British pacifists Reginald Sorensen and C. J. Cadoux, while bitterly disappointed by the outbreak of war, nevertheless urged their fellow pacifists "not to obstruct the war effort." Pacifists across Great Britain further struggled to uphold their anti-military values during the Blitz, a coordinated, long-term attack by the Luftwaffe on Great Britain. As the country was ravaged nightly by German bombing raids, pacifists had to seriously weigh the importance of their political and moral values against the desire to protect their nation. France Some scholars theorize that pacifism was the cause of France's rapid fall to the Germans after it was invaded by the Nazis in June 1940, resulting in a takeover of the government by the German military. Whether or not pacifism weakened French defenses against the Germans, there was no hope of sustaining a real pacifist movement after Paris fell. Just as peaceful Germans succumbed to violent nationalism, the pacifist French were muzzled by the totality of German control over nearly all of France. 
The French pacifists André and Magda Trocmé helped conceal hundreds of Jews fleeing the Nazis in the village of Le Chambon-sur-Lignon. After the war, the Trocmés were declared Righteous Among the Nations. Germany Pacifists in Nazi Germany were dealt with harshly, and the movement was reduced to near nonexistence; those who continued to advocate for an end to the war and violence were often sent to labor camps. German pacifist Carl von Ossietzky and Olaf Kullmann, a Norwegian pacifist active during the Nazi occupation, were both imprisoned in concentration camps and died as a result of their mistreatment there. Austrian farmer Franz Jägerstätter was executed in 1943 for refusing to serve in the Wehrmacht. German nationalism consumed even the most peaceful of Christians, who may have believed that Hitler was acting in good faith for Germany or who may have been so suppressed by the Nazi regime that they were content to act as bystanders to the violence occurring around them. Dietrich Bonhoeffer, an anti-Nazi German pastor who later died in 1945 in the Flossenbürg concentration camp, once wrote in a letter to his grandmother: "The issue really is: Germanism or Christianity." After the end of the war, it was discovered that "The Black Book" or Sonderfahndungsliste G.B., a list of Britons to be arrested in the event of a successful German invasion of Britain, included three active pacifists: Vera Brittain, Sybil Thorndike and Aldous Huxley (who had left the country). Conscientious objectors There were conscientious objectors and war tax resisters in both World War I and World War II. The United States government allowed sincere objectors to serve in noncombatant military roles. However, those draft resisters who refused any cooperation with the war effort often spent much of the wars in federal prisons. During World War II, pacifist leaders such as Dorothy Day and Ammon Hennacy of the Catholic Worker Movement urged young Americans not to enlist in military service. 
During the two world wars, young men who were conscripted into the military but refused to take up arms were called conscientious objectors. Though these men had to either answer their conscription or face prison time, their status as conscientious objectors permitted them to refuse to take part in battle using weapons, and the military was forced to find a different use for them. Often, these men were assigned various tasks close to battle such as medical duties, though some were assigned various civilian jobs including farming, forestry, hospital work and mining. Conscientious objectors were often viewed by soldiers as cowards and liars, and they were sometimes accused of shirking military duty out of fear rather than as the result of conscience. In Great Britain during World War II, the majority of the public did not approve of moral objection by soldiers but supported their right to abstain from direct combat. On the more extreme sides of public opinion were those who fully supported the objectors and those who believed they should be executed as traitors. The World War II objectors were often scorned as fascist sympathizers and traitors, though many of them cited the influence of World War I and their shell-shocked fathers as major reasons for refusing to participate. Later 20th century Baptist minister Martin Luther King Jr. led a civil rights movement in the U.S., employing Gandhian nonviolent resistance to repeal laws enforcing racial segregation and to work for integration of schools, businesses and government. In 1957, his wife Coretta Scott King, along with Albert Schweitzer, Benjamin Spock and others, formed the Committee for a Sane Nuclear Policy (now Peace Action) to resist the nuclear arms race. In 1958 British activists formed the Campaign for Nuclear Disarmament with Bertrand Russell as its president. In 1960, Thich Nhat Hanh came to the U.S. 
to study comparative religion at Princeton University and was subsequently appointed a lecturer in Buddhism at Columbia University. Nhất Hạnh had written a letter to King in 1965 entitled "Searching for the Enemy of Man" and met with King in 1966 to urge him to publicly denounce the Vietnam War. In a famous 1967 speech at Riverside Church in New York City, King publicly questioned the U.S. involvement in Vietnam for the first time. Other examples from this period include the 1986 People Power Revolution in the Philippines led by Corazon Aquino and the 1989 Tiananmen Square protests, with the broadly publicized "Tank Man" incident as its indelible image. On December 1, 1948, President José Figueres Ferrer of Costa Rica abolished the Costa Rican military. In 1949, the abolition of the military was introduced in Article 12 of the Costa Rican constitution. The budget previously dedicated to the military is now dedicated to providing healthcare services and education. Antiwar literature of the 20th century Edmund Blunden's Undertones of War (1928). Robert Graves's Goodbye to All That (1929). Erich Maria Remarque's All Quiet on the Western Front (1929). Beverley Nichols's Cry Havoc! (1933). A.A. Milne's Peace with Honour (1934). Aldous Huxley's Ends and Means (1937). Religious attitudes Baháʼí Faith Bahá'u'lláh, the founder of the Baháʼí Faith, abolished holy war and emphasized its abolition as a central teaching of his faith. However, the Baháʼí Faith does not have an absolute pacifistic position. For example, Baháʼís are advised to do social service instead of active army service, but when this is not possible because of obligations in certain countries, the Baháʼí law of loyalty to one's government is preferred and the individual should perform the army service. 
Shoghi Effendi, the head of the Baháʼí Faith in the first half of the 20th century, noted that in the Baháʼí view, absolute pacifists are anti-social and exalt the individual over society, which could lead to anarchy; instead, he noted that the Baháʼí conception of social life follows a moderate view in which the individual is neither suppressed nor exalted. On the level of society, Bahá'u'lláh promotes the principle of collective security, which does not abolish the use of force but prescribes "a system in which Force is made the servant of Justice". The idea of collective security in the Bahá'í teachings states that if a government violates a fundamental norm of international law or a provision of a future world constitution, which Bahá'ís believe will be established by all nations, then the other governments should step in. Buddhism Ahimsa (do no harm) is a primary virtue in Buddhism (as well as in other Indian religions such as Hinduism and Jainism). This leads to a misconception that Buddhism is a religion based solely on peace; however, like all religions, Buddhism has a long history of violence in its various traditions and many examples of prolonged violence in its 2,500-year existence. Like many religious scholars and believers of other religions, many Buddhists disavow any connection between their religion and the violence committed in its name or by its followers, and find various ways of dealing with problematic texts. 
Notable pacifists or peace activists within Buddhist traditions include Thích Nhất Hạnh, who advocated for peace in response to the Vietnam War, founded the Plum Village Tradition, and helped popularize engaged Buddhism; Robert Baker Aitken and Anne Hopkins Aitken, who founded the Buddhist Peace Fellowship; Cheng Yen, founder of the Tzu Chi Foundation; Daisaku Ikeda, a Japanese Buddhist leader, writer, president of Soka Gakkai International, and founder of multiple educational and peace research institutions; Bhikkhu Bodhi, an American Theravada Buddhist monk and founder of Buddhist Global Relief; Thai activist and author Sulak Sivaraksa; Cambodian activist Preah Maha Ghosananda; and Japanese activist and peace pagoda builder Nichidatsu Fujii. Christianity Peace churches Peace churches are Christian denominations explicitly advocating pacifism. The term "historic peace churches" refers specifically to three church traditions: the Church of the Brethren, the Mennonites (and other Anabaptists, such as the Amish and Hutterites), and the Quakers (Religious Society of Friends). The historic peace churches have, from their origins as far back as the 16th century, always taken the position that Jesus was himself a pacifist who explicitly taught and practiced pacifism, and that his followers must do likewise. Pacifist churches vary on whether physical force can ever be justified in self-defense or in protecting others, as many adhere strictly to nonresistance when confronted by violence. But all agree that violence on behalf of a country or a government is prohibited for Christians. Holiness movement The Emmanuel Association of Churches, Immanuel Missionary Church, Church of God (Guthrie, Oklahoma) and Christ's Sanctified Holy Church are denominations in the holiness movement (which is largely Methodist, with a minority from other backgrounds such as Quaker, Anabaptist and Restorationist) known for their opposition to war today; they are known as "holiness pacifists". 
The Emmanuel Association teaches: Pentecostal churches Jay Beaman's thesis states that 13 of 21, or 62%, of American Pentecostal groups formed by 1917 show evidence of having been pacifist at some point in their history. Beaman has also shown in his thesis that there has been a shift away from pacifism in the American Pentecostal churches toward military support and chaplaincy. The major organisation for Pentecostal Christians who believe in pacifism is the PCPF, the Pentecostal Charismatic Peace Fellowship. The United Pentecostal Church, the largest Apostolic/Oneness denomination, takes an official stand of conscientious objection: its Articles of Faith read, "We are constrained to declare against participating in combatant service in war, armed insurrection ... aiding or abetting in or the actual destruction of human life. We believe that we can be consistent in serving our Government in certain noncombatant capacities, but not in the bearing of arms." Other denominations The Peace Pledge Union is a pacifist organisation from which the Anglican Pacifist Fellowship (APF) later emerged within the Anglican Church. The APF succeeded in gaining ratification of the pacifist position at two successive Lambeth Conferences, but many Anglicans would not regard themselves as pacifists. South African Bishop Desmond Tutu is the most prominent Anglican pacifist. Rowan Williams led an almost united Anglican Church in Britain in opposition to the 2003 Iraq War. In Australia, Peter Carnley similarly led a front of bishops opposed to the Government of Australia's involvement in the invasion of Iraq. The Catholic Worker Movement is concerned with both social justice and pacifist issues, and voiced consistent opposition to the Spanish Civil War and World War II. Many of its early members were imprisoned for their opposition to conscription. Within the Roman Catholic Church, the Pax Christi organisation is the premier pacifist lobby group. 
It holds positions similar to those of the APF, and the two organisations are known to work together on ecumenical projects. Within Roman Catholicism there has been a discernible move towards a more pacifist position through the twentieth and early twenty-first centuries. Popes Benedict XV, John XXIII and John Paul II were all vocal in their opposition to specific wars. When Joseph Ratzinger took the name Benedict XVI, some suspected that he would continue his predecessor's strong emphasis upon nonviolent conflict resolution. However, the Roman Catholic Church officially maintains the legitimacy of Just War, which is rejected by some pacifists. In the twentieth century there was a notable trend among prominent Roman Catholics towards pacifism. Individuals such as Dorothy Day and Henri Nouwen stand out among them. The monk and mystic Thomas Merton was noted for his commitment to pacifism during the Vietnam War era. The murdered Salvadoran Bishop Óscar Romero was notable for using non-violent resistance tactics and wrote meditative sermons focusing on the power of prayer and peace. School of the Americas Watch was founded by Maryknoll Fr. Roy Bourgeois in 1990 and uses strictly pacifist principles to protest the training of Latin American military officers by United States Army officers at the School of the Americas in the state of Georgia. The Southern Baptist Convention has stated in the Baptist Faith and Message, "It is the duty of Christians to seek peace with all men on principles of righteousness. In accordance with the spirit and teachings of Christ they should do all in their power to put an end to war." The United Methodist Church explicitly supports conscientious objection by its members "as an ethically valid position" while simultaneously allowing for differences of opinion and belief for those who do not object to military service. 
Members of the Rastafari Movement's Mansion Nyabinghi are specifically noted for their pacifism, though not all members are pacifists. Hinduism Nonviolence, or ahimsa, is a central part of Hinduism and is one of the fundamental Yamas – the self-restraints needed to live a proper life. The concept of ahimsa grew gradually within Hinduism, one sign of which was the discouragement of ritual animal sacrifice. Most Hindus today have a vegetarian diet. The classical texts of Hinduism devote numerous chapters to discussing what people who practice the virtue of ahimsa can and must
Not all nonviolent resistance (sometimes also called civil resistance) is based on a fundamental rejection of all violence in all circumstances. Many leaders and participants in such movements, while recognizing the importance of using non-violent methods in particular circumstances, have not been absolute pacifists. Sometimes, as with the civil rights movement's march from Selma to Montgomery in 1965, they have called for armed protection. The interconnections between civil resistance and factors of force are numerous and complex. Absolute pacifism An absolute pacifist is generally described by the BBC as one who believes that human life is so valuable that a human should never be killed and war should never be conducted, even in self-defense. The principle is described as difficult to abide by consistently, since it rules out violence even as a tool to aid a person who is being harmed or killed. It is further claimed that such a pacifist could logically argue that violence leads to more undesirable results than non-violence. Police actions and national liberation Although all pacifists are opposed to war between nation states, there have been occasions where pacifists have supported military conflict in the case of civil war or revolution. For instance, during the American Civil War, both the American Peace Society and some former members of the Non-Resistance Society supported the Union's military campaign, arguing that it was carrying out a "police action" against the Confederacy, whose act of secession they regarded as criminal. Following the outbreak of the Spanish Civil War, French pacifist René Gérin urged support for the Spanish Republic. Gérin argued that the Spanish Nationalists were "comparable to an individual enemy" and that the Republic's war effort was equivalent to the action of a domestic police force suppressing crime. 
In the 1960s, some pacifists associated with the New Left supported wars of national liberation and backed groups such as the Viet Cong and the Algerian FLN, arguing that peaceful attempts to liberate such nations were no longer viable and that war was thus the only option. History Early traditions Advocacy of pacifism can be found far back in history and literature. China During the Warring States period, the pacifist Mohist School opposed aggressive war between the feudal states. They put this belief into action by using their famed defensive strategies to defend smaller states from invasion by larger states, hoping to dissuade feudal lords from costly warfare. The Seven Military Classics of ancient China view warfare negatively, and as a last resort. For example, the Three Strategies of Huang Shigong says: "As for the military, it is not an auspicious instrument; it is the way of heaven to despise it", and the Wei Liaozi writes: "As for the military, it is an inauspicious instrument; as for conflict and contention, it runs counter to virtue". The Taoist scripture "Classic of Great Peace (Taiping jing)" foretells "the coming Age of Great Peace (Taiping)", and the Taiping Jing advocates "a world full of peace". Lemba The Lemba religion of the southern French Congo, along with its symbolic herb, is named for pacifism: "lemba, lemba" (peace, peace) describes the action of the plant lemba-lemba (Brillantaisia patula T. Anders). Likewise in Cabinda, "Lemba is the spirit of peace, as its name indicates." Moriori The Moriori, of the Chatham Islands, practiced pacifism by order of their ancestor Nunuku-whenua. This enabled the Moriori to preserve what limited resources they had in their harsh climate, avoiding waste through warfare. In turn, this led to their almost complete annihilation in 1835 by invading Ngāti Mutunga and Ngāti Tama Māori from the Taranaki region of the North Island of New Zealand. The invading Māori killed, enslaved and cannibalised the Moriori. 
A Moriori survivor recalled: "[The Maori] commenced to kill us like sheep ... [We] were terrified, fled to the bush, concealed ourselves in holes underground, and in any place to escape our enemies. It was of no avail; we were discovered and killed – men, women and children indiscriminately." Greece In Ancient Greece, pacifism seems not to have existed except as a broad moral guideline against violence between individuals. No philosophical program of rejecting violence between states, or rejecting all forms of violence, seems to have existed. Aristophanes, in his play Lysistrata, creates the scenario of an Athenian women's anti-war sex strike during the Peloponnesian War of 431–404 BC, and the play has gained an international reputation for its anti-war message. Nevertheless, it is both fictional and comical, and though it offers a pragmatic opposition to the destructiveness of war, its message seems to stem from frustration with the existing conflict (then in its twentieth year) rather than from a philosophical position against violence or war. Equally fictional is the nonviolent protest of Hegetorides of Thasos. Euripides also expressed strong anti-war ideas in his work, especially The Trojan Women. Roman Empire Several Roman writers rejected the militarism of Roman society and gave voice to anti-war sentiments, including Propertius, Tibullus and Ovid. The Stoic Seneca the Younger criticised warfare in his book Naturales quaestiones (circa 65 AD). Maximilian of Tebessa was a Christian conscientious objector who was killed for refusing to be conscripted. Christianity Throughout history many have understood Jesus of Nazareth to have been a pacifist, drawing on his Sermon on the Mount. In the sermon Jesus stated that one should "not resist an evildoer" and promoted his turn-the-other-cheek philosophy: "If anyone strikes you on the right cheek, turn the other also; and if anyone wants to sue you and take your coat, give your cloak as well ... 
Love your enemies, do good to those who hate you, bless those who curse you, pray for those who abuse you." The New Testament story is of Jesus, besides preaching these words, surrendering himself freely to an enemy intent on having him killed and forbidding his followers to defend him. There are those, however, who deny that Jesus was a pacifist and state that Jesus never said not to fight, citing examples from the New Testament. One such instance portrays an angry Jesus driving dishonest market traders from the temple. A frequently quoted passage is Luke 22:36: "He said to them, 'But now, the one who has a purse must take it, and likewise a bag. And the one who has no sword must sell his cloak and buy one.'" Pacifists have typically explained this verse as Jesus fulfilling prophecy, since in the next verse Jesus continues: "It is written: 'And he was numbered with the transgressors'; and I tell you that this must be fulfilled in me. Yes, what is written about me is reaching its fulfillment." Others have interpreted the non-pacifist statements in the New Testament as relating to self-defense or as metaphorical, and state that on no occasion did Jesus shed blood or urge others to shed blood. Modern history Beginning in the 16th century, the Protestant Reformation gave rise to a variety of new Christian sects, including the historic peace churches. Foremost among them were the Religious Society of Friends (Quakers), Amish, Mennonites, Hutterites, and Church of the Brethren. The humanist writer Desiderius Erasmus was one of the most outspoken pacifists of the Renaissance, arguing strongly against warfare in his essays The Praise of Folly (1509) and The Complaint of Peace (1517). The Quakers were prominent advocates of pacifism, who as early as 1660 had repudiated violence in all forms and adhered to a strictly pacifist interpretation of Christianity. 
They stated their beliefs in a declaration to King Charles II: "We utterly deny all outward wars and strife, and fightings with outward weapons, for any end, or under any pretense whatever; this is our testimony to the whole world. The Spirit of Christ ... which leads us into all truth, will never move us to fight and war against any man with outward weapons, neither for the kingdom of Christ, nor for the kingdoms of this world." Throughout the many 18th-century wars in which Great Britain participated, the Quakers maintained a principled commitment not to serve in the army and militia, or even to pay the alternative £10 fine. The English Quaker William Penn, who founded the Province of Pennsylvania, employed an anti-militarist public policy. Unlike residents of many of the colonies, Quakers chose to trade peacefully with the Indians, including for land. The colonial province was, for the 75 years from 1681 to 1756, essentially unarmed and experienced little or no warfare in that period. From the 16th to the 18th centuries, a number of thinkers devised plans for an international organisation that would promote peace and reduce or even eliminate the occurrence of war. These included the French politician the Duc de Sully, the philosophers Émeric Crucé and the Abbé de Saint-Pierre, and the English Quakers William Penn and John Bellers. Pacifist ideals emerged from two strands of thought that coalesced at the end of the 18th century. One, rooted in the secular Enlightenment, promoted peace as the rational antidote to the world's ills, while the other was part of the evangelical religious revival that had played an important part in the campaign for the abolition of slavery. Representatives of the former included Jean-Jacques Rousseau, in Extrait du Projet de Paix Perpétuelle de Monsieur l'Abbé Saint-Pierre (1756), Immanuel Kant, in his Thoughts on Perpetual Peace, and Jeremy Bentham, who proposed the formation of a peace association in 1789. 
Representative of the latter was William Wilberforce, who thought that strict limits should be imposed on British involvement in the French Revolutionary Wars based on Christian ideals of peace and brotherhood. The Bohemian Bernard Bolzano taught about the social waste of militarism and the needlessness of war. He urged a total reform of the educational, social, and economic systems that would direct the nation's interests toward peace rather than toward armed conflict between nations. During the late nineteenth and early twentieth centuries, pacifism was not entirely frowned upon throughout Europe. It was considered a political stance against costly capitalist-imperialist wars, a notion particularly popular in the British Liberal Party of the twentieth century. However, during the eras of World War One and especially World War Two, public opinion on the ideology split. Those opposed to the Second World War, some argued, were not resisting unnecessary wars of imperialism but instead acquiescing to the fascists of Germany, Italy and Japan. Peace movements During the period of the Napoleonic Wars, although no formal peace movement was established until the end of hostilities, a significant peace movement animated by universalist ideals did emerge, due to the perception of Britain fighting in a reactionary role and the increasingly visible impact of the war on the welfare of the nation in the form of higher taxation and high casualty rates. Sixteen peace petitions to Parliament were signed by members of the public, anti-war and anti-Pitt demonstrations were convened, and peace literature was widely published and disseminated. The first peace movements appeared in 1815–16. In the United States the first such movement was the New York Peace Society, founded in 1815 by the theologian David Low Dodge, followed by the Massachusetts Peace Society. 
It became an active organization, holding regular weekly meetings and producing literature, spread as far as Gibraltar and Malta, that described the horrors of war and advocated pacifism on Christian grounds. The London Peace Society (also known as the Society for the Promotion of Permanent and Universal Peace) was formed in 1816 by the philanthropist William Allen to promote permanent and universal peace. In the 1840s, British women formed "Olive Leaf Circles", groups of around 15 to 20 women, to discuss and promote pacifist ideas. The peace movement began to grow in influence by the mid-nineteenth century. The London Peace Society, under the initiative of American consul Elihu Burritt and the Reverend Henry Richard, convened the first International Peace Congress in London in 1843. The congress decided on two aims: the ideal of peaceable arbitration in the affairs of nations and the creation of an international institution to achieve that. Richard became the full-time secretary of the Peace Society in 1850, a position he would keep for the next 40 years, earning himself a reputation as the 'Apostle of Peace'. He helped secure one of the earliest victories for the peace movement by obtaining a commitment from the Great Powers, in the Treaty of Paris (1856) at the end of the Crimean War, in favour of arbitration. On the European continent, wracked by social upheaval, the first peace congress was held in Brussels in 1848, followed by Paris a year later. After experiencing a recession in support due to the resurgence of militarism during the American Civil War and the Crimean War, the movement began to spread across Europe and to infiltrate the new socialist movements. In 1870, Randal Cremer formed the Workman's Peace Association in London. Cremer, alongside the French economist Frédéric Passy, was also a founding father of the first international organisation for the arbitration of conflicts, the Inter-Parliamentary Union, in 1889. 
The National Peace Council was founded after the 17th Universal Peace Congress in London (July–August 1908). An important thinker who contributed to pacifist ideology was the Russian writer Leo Tolstoy. In one of his later works, The Kingdom of God Is Within You, Tolstoy provides a detailed history, account and defense of pacifism. Tolstoy's work inspired a movement named after him, advocating pacifism, to arise in Russia and elsewhere. The book was a major early influence on Mahatma Gandhi, and the two engaged in regular correspondence while Gandhi was active in South Africa. Bertha von Suttner, the first woman to be a Nobel Peace Prize laureate, became a leading figure in the peace movement with the publication of her novel Die Waffen nieder! ("Lay Down Your Arms!") in 1889, and founded an Austrian pacifist organization in 1891. Non-violent resistance In colonial New Zealand, during the latter half of the 19th century, Pākehā settlers used numerous tactics to confiscate land from the indigenous Māori, including warfare. In the 1870s and 1880s, Parihaka, then reported to be the largest Māori settlement in New Zealand, became the centre of a major campaign of non-violent resistance to land confiscations. The Māori leader Te Whiti-o-Rongomai became the leading figure in the movement, stating in a speech: "Though some, in darkness of heart, seeing their land ravished, might wish to take arms and kill the aggressors, I say it must not be. Let not the Pakehas think to succeed by reason of their guns... I want not war". Te Whiti-o-Rongomai achieved renown among the Māori for his non-violent tactics, which proved more successful in preventing land confiscations than acts of violent resistance. Mahatma Gandhi was a major political and spiritual leader of India, instrumental in the Indian independence movement. The Nobel Prize-winning Indian poet Rabindranath Tagore gave him the honorific "Mahatma", usually translated as "Great Soul". 
He was the pioneer of a brand of nonviolence (or ahimsa) which he called satyagraha—translated literally as "truth force". This was resistance to tyranny through civil disobedience that was not only nonviolent but also sought to change the heart of the opponent. He contrasted this with duragraha, "resistant force", which sought only to change behaviour through stubborn protest. During his 30 years of work (1917–1947) for the independence of his country from British colonial rule, Gandhi led dozens of nonviolent campaigns, spent over seven years in prison, and fasted nearly to death on several occasions to obtain British compliance with a demand or to stop inter-communal violence. His efforts helped lead India to independence in 1947 and inspired movements for civil rights and freedom worldwide. World War I Peace movements became active in the Western world after 1900, often focusing on treaties that would settle disputes through arbitration and on efforts to support the Hague conventions. The sudden outbreak of the First World War in July 1914 dismayed the peace movement. Socialist parties in every industrial nation had committed themselves to antiwar policies, but when the war came, all of them, except in Russia and the United States, supported their own governments. There were highly publicized dissidents, some of whom were imprisoned for opposing draft laws, such as Eugene Debs in the U.S. In Britain, the prominent activist Stephen Henry Hobhouse was jailed for refusing military service, citing his convictions as a "socialist and a Christian". Many socialist groups and movements were antimilitarist, arguing that war by its nature was a type of governmental coercion of the working class for the benefit of capitalist elites. The French socialist pacifist leader Jean Jaurès was assassinated by a nationalist fanatic on July 31, 1914. 
The national parties in the Second International increasingly supported their respective nations in war, and the International was dissolved in 1916. In 1915, the League of Nations Society was formed by British liberal leaders to promote a strong international organisation that could enforce the peaceful resolution of conflict. Later that year, the League to Enforce Peace was established in the U.S. to promote similar goals. Hamilton Holt published a September 28, 1914, editorial in his magazine the Independent called "The Way to Disarm: A Practical Proposal" that called for an international organization to agree upon the arbitration of disputes and to guarantee the territorial integrity of its members by maintaining military forces sufficient to defeat those of any non-member. The ensuing debate among prominent internationalists modified Holt's plan to align it more closely with proposals offered in Great Britain by Viscount James Bryce, a former British ambassador to the United States. These and other initiatives were pivotal in the change in attitudes that gave birth to the League of Nations after the war. In addition to the traditional peace churches, some of the many groups that protested against the war were the Woman's Peace Party (which was organized in 1915 and led by noted reformer Jane Addams), the International Committee of Women for Permanent Peace (ICWPP) (also organized in 1915), the American Union Against Militarism, the Fellowship of Reconciliation and the American Friends Service Committee. Jeannette Rankin, the first woman elected to Congress, was another fierce advocate of pacifism, the only person to vote against American entrance into both wars. Between the two World Wars After the immense loss of nearly ten million men to trench warfare, a sweeping change of attitude toward militarism crashed over Europe, particularly in nations such as Great Britain, where many questioned its involvement in the war. 
After World War I's official end in 1918, peace movements across the continent and the United States renewed themselves, gradually gaining popularity among young Europeans who grew up in the shadow of Europe's trauma over the Great War. Organizations formed in this period included the War Resisters' International, the Women's International League for Peace and Freedom, the No More War Movement, the Service Civil International and the Peace Pledge Union (PPU). The League of Nations also convened several disarmament conferences in the interbellum period, such as the Geneva Conference, though the support that pacifist policy and idealism received varied across European nations. These organizations and movements attracted tens of thousands of Europeans spanning most professions, including "scientists, artists, musicians, politicians, clerks, students, activists and thinkers." Great Britain Pacifism and revulsion with war were very popular sentiments in 1920s Britain. Novels and poems on the theme of the futility of war and the slaughter of the youth by old fools were published, including Death of a Hero by Richard Aldington, the translation of Erich Maria Remarque's All Quiet on the Western Front and Beverley Nichols's exposé Cry Havoc. A debate at the University of Oxford in 1933 on the motion 'one must fight for King and country' captured the changed mood when the motion was resoundingly defeated. Dick Sheppard established the Peace Pledge Union in 1934, totally renouncing war and aggression. The idea of collective security was also popular; instead of outright pacifism, the public generally exhibited a determination to stand up to aggression, but preferably with the use of economic sanctions and multilateral negotiations. Many members of the Peace Pledge Union later joined the Bruderhof during its period of residence in the Cotswolds, where Englishmen and Germans, many of whom were Jewish, lived side by side despite local persecution. 
The British Labour Party had a strong pacifist wing in the early 1930s, and between 1931 and 1935 it was led by George Lansbury, a Christian pacifist who later chaired the No More War Movement and was president of the PPU. The 1933 annual conference resolved unanimously to |
Dutch naval commander and folk hero
Piet Hein (scientist) (1905–1996), descendant of the above, Danish poet and scientist
Piet Hein Donner (born 1948), Dutch politician nicknamed "Piet Hein"
Piet-Hein Geeris (born 1972), Dutch field hockey player

Ships

Dutch ship Piet Hein (1810), an 80-gun ship of the line of the navy of the Batavian Republic, never launched
French ship
shorter than twenty amino acids found in hydrolysed protein, but this term is no longer commonly used.

History

In trying to uncover the intermediate stages of abiogenesis, the scientist Sidney W. Fox in the 1950s and 1960s studied the spontaneous formation of peptide structures under conditions that might plausibly have existed early in Earth's history. He demonstrated that amino acids could spontaneously form small chains called peptides. In one of his experiments, he allowed amino acids to dry out as if puddled in a warm, dry spot in prebiotic conditions. He found that, as they dried, the amino acids formed long, often cross-linked, thread-like microscopic polypeptide globules, which he named "proteinoid microspheres".

Polymerization

The abiotic polymerization of amino acids into proteins through the formation of peptide bonds was thought to occur only at temperatures over 140 °C. However, the biochemist Sidney Walter Fox and his co-workers discovered that phosphoric acid acted as a catalyst for this reaction. They were able to form protein-like chains from a mixture of 18 common amino acids at 70 °C in the presence of phosphoric acid, and dubbed these protein-like chains proteinoids. Fox later found naturally occurring proteinoids similar to those he had created in his laboratory in lava and cinders from Hawaiian volcanic vents and determined that the amino acids present had polymerized due to the heat of escaping gases and lava. Other catalysts have since been found; one of them, amidinium carbodiimide, is formed in primitive Earth experiments and is effective in dilute aqueous solutions. When present in certain concentrations in aqueous solutions, proteinoids form small microspheres. This is because some of the amino acids incorporated into proteinoid chains are more hydrophobic than others, and so proteinoids cluster together like droplets of oil in water.
These structures exhibit a few characteristics of living cells:
An outer wall.
Osmotic swelling and shrinking.
Budding.
Binary fission (dividing into two daughter microspheres).
Streaming movement of internal particles.

Fox thought that the microspheres may have provided a cell compartment within which organic molecules could have become concentrated and protected from the outside environment during the process of chemical evolution. Proteinoid microspheres are today being considered for use in pharmaceuticals, providing microscopic biodegradable capsules in which to package and deliver oral drugs. In another experiment using a similar method to set suitable conditions for life to form, Fox collected volcanic material from a cinder cone in Hawaii. He discovered that the temperature was over just beneath the surface of the cinder cone, and suggested that this might have been the environment in which life was created—molecules could have formed and then been washed through the loose volcanic ash and into the sea. He placed lumps of lava over amino acids derived from methane, ammonia and water, sterilized all materials, and baked the lava over the amino acids for a few hours in a glass oven. A brown, sticky substance formed over the surface and when the lava was drenched in sterilized water a thick, brown liquid leached out.
It turned out that the amino acids had combined to form proteinoids, and the proteinoids had combined to form small spheres. Fox called these "microspheres". His protobionts were not cells, although they formed clumps and chains reminiscent of bacteria. Based upon such experiments, |
well-received by states and academics alike, with many cases submitted to it in its first decade of operation. Between 1922 and 1940 the Court heard a total of 29 cases and delivered 27 separate advisory opinions. With the heightened international tension in the 1930s, the Court became less used. By a resolution from the League of Nations on 18 April 1946, both the Court and the League ceased to exist and were replaced by the International Court of Justice and the United Nations. The Court's mandatory jurisdiction came from three sources: the Optional Clause of the League of Nations, general international conventions and special bipartite international treaties. Cases could also be submitted directly by states, but they were not bound to submit material unless it fell into those three categories. The Court could issue either judgments or advisory opinions. Judgments were directly binding, but advisory opinions were not. In practice, member states of the League of Nations followed advisory opinions anyway, for fear of undermining the moral and legal authority of the Court and the League.

History

Founding and early years

An international court had long been proposed; Pierre Dubois suggested it in 1305 and Émeric Crucé in 1623. The idea of an international court of justice arose in the political world at the First Hague Peace Conference in 1899, where it was declared that arbitration between states was the easiest solution to disputes, providing a temporary panel of judges to arbitrate in such cases, the Permanent Court of Arbitration. At the Second Hague Peace Conference in 1907, a draft convention for a permanent Court of Arbitral Justice was written, although disputes and other pressing business at the Conference meant that such a body was never established, owing to difficulties in agreeing on a procedure to select the judges.
The outbreak of the First World War, and, in particular, its conclusion made it clear to many academics that some kind of world court was needed, and it was widely expected that one would be established. Article 14 of the Covenant of the League of Nations, created after the Treaty of Versailles, allowed the League to investigate setting up an international court. In June 1920, an Advisory Committee of jurists appointed by the League of Nations finally established a working guideline for the appointment of judges, and the Committee was then authorised to draft a constitution for a permanent court not of arbitration but of justice. The Statute of the Permanent Court of International Justice was accepted in Geneva on 13 December 1920. The Court first sat on 30 January 1922, at the Peace Palace, The Hague, covering preliminary business during the first session (such as establishing procedure and appointing officers). Nine judges sat, along with three deputies, since Antonio Sánchez de Bustamante y Sirven, Ruy Barbosa and Wang Ch'ung-hui were unable to attend, the last being at the Washington Naval Conference. The Court elected Bernard Loder as President and Max Huber as Vice-President; Huber was replaced by André Weiss a month later. On 14 February the Court was officially opened, and rules of procedure were established on 24 March, when the court ended its first session. The court first sat to decide cases on 15 June. During its first year of business, the Court issued three advisory opinions, all related to the International Labour Organization created by the Treaty of Versailles. The initial reaction to the Court was good, from politicians, practising lawyers and academics alike. Ernest Pollock, the former Attorney General for England and Wales, said, "May we not as lawyers regard the establishment of an International Court of Justice as an advance in the science that we pursue?"
John Henry Wigmore said that the creation of the Court "should have given every lawyer a thrill of cosmic vibration", and James Brown Scott wrote that "the one dream of our ages has been realised in our time". Much praise was heaped upon the appointment of an American judge, despite the fact that the United States had not become a signatory to the Court's protocol, and it was thought that it would soon do so.

Increasing work

The Court faced increasing work as it went on, allaying the fears of those commentators who had believed the Court would become like the Supreme Court of the United States, which was not presented with a case for its first six terms. The Court was given nine cases during 1922 and 1923, with judgments called "cases" and advisory opinions called "questions". Three cases were disposed of during the Court's first session, one during an extraordinary sitting between 8 January and 7 February 1923 (the Tunis-Morocco Nationality Question), four during the second ordinary sitting between 15 June 1923 and 15 September 1923 (Eastern Carelia Question, S.S. "Wimbledon" case, German Settlers Question, Acquisition of Polish Nationality Question) and one during a second extraordinary session from 12 November to 6 December 1923 (Jaworzina Question). A replacement for Ruy Barbosa (who had died on 1 March 1923 without hearing any cases) was also found, with the election of Epitácio Pessoa on 10 September 1923. The workload the following year was reduced, containing two judgments and one advisory opinion: the Mavrommatis Palestine Concessions Case, the Interpretation of the Treaty of Neuilly Case (the first case of the Court's Chamber of Summary Procedure) and the Monastery of Saint-Naoum Question. During the same year, a new President and Vice-President were elected, since they were mandated to serve for a term of three years.
At the elections on 4 September 1924, André Weiss was again elected Vice-President and Max Huber became the second President of the Court. Judicial pensions were created at the same time, with a judge being given 1/30th of his annual pay for every year he had served once he had both retired and turned 65. 1925 was an exceedingly busy year for the court, which sat for 210 days, with four extraordinary sessions as well as the ordinary session, producing 3 judgments and 4 advisory opinions. The first judgment was given in the Exchange of Greek and Turkish Populations Case, the second (by the Court of Summary Procedure) was on the interpretation of the Interpretation of the Treaty of Neuilly Case, and the third in the Mavrommatis Palestine Concessions Case. The 4 advisory opinions issued by the Court were in the Polish Postal Service in Danzig Question, the Expulsion of the Ecumenical Patriarch Question, the Treaty of Lausanne Question and the German Interests in Polish Upper Silesia Question. 1926 saw reduced business, with only one ordinary session and one extraordinary session; it was, however, the first year that all 11 judges had been present to hear cases. The court heard two cases, providing one judgment and one advisory opinion: a second question on German Interests in Polish Upper Silesia, this time a judgment rather than an advisory opinion, and an advisory opinion on the International Labour Organization. Despite the reduction of work in 1926, 1927 was another busy year, the Court sitting continuously from 15 June to 16 December, handing down 4 orders, 4 judgments and 4 advisory opinions. The judgments were in the Belgium-China Case, the Case Concerning the Factory at Chorzow, the Lotus Case and a continuation of the Mavrommatis Jerusalem Concessions Case. 3 of the advisory opinions were on the Competence of the European Commission on the Danube, and the 4th was on the Jurisdiction of Danzig Courts.
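The 1924 pension rule quoted above is simple arithmetic: 1/30th of annual pay per year of service, payable only once the judge had both retired and turned 65. A minimal sketch of that reading (the function name is illustrative, and the cap at 30 years of service, i.e. full pay, is an assumption not stated in the text):

```python
def annual_pension(annual_pay: float, years_served: int, age: int, retired: bool) -> float:
    """Sketch of the 1924 PCIJ pension rule: 1/30th of annual pay per year
    served, payable only once the judge has both retired and turned 65.
    Capping service at 30 years (full pay) is an assumption for illustration."""
    if not (retired and age >= 65):
        return 0.0
    return annual_pay * min(years_served, 30) / 30
```

Under this reading, a judge retiring at 66 after nine years of service on an annual pay of 15,000 would receive 15,000 × 9/30 = 4,500 per year.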
The 4 orders were on the German Interests in Polish Upper Silesia. This year saw another set of elections on 6 December, with Dionisio Anzilotti elected President and André Weiss elected Vice-President. Weiss died the following year, and John Bassett Moore resigned; Max Huber was elected Vice-President on 12 September 1928 to succeed Weiss, while a second death (Lord Finlay) left the Court increasingly understaffed. Replacements for Moore and Finlay were elected on 19 September 1929: Henri Fromageot and Cecil Hurst respectively. After the second round of elections in September 1930, the Court was reorganised. On 16 January 1931 Mineichirō Adachi was appointed President, and Gustavo Guerrero Vice-President.

United States never joins

The United States never joined the World Court, primarily because enemies of the League of Nations in the Senate argued that the Court was too closely linked to the League of Nations. The leading opponent was Senator William Borah, Republican of Idaho. US engagement with the Court's jurisdiction was a long and drawn-out process. President Warren G. Harding had first suggested US involvement in 1923, and on 9 December 1929, three court protocols were signed. The U.S. demanded a veto over cases involving the U.S., but other nations rejected the idea. President Franklin Roosevelt did not risk his political capital and gave only passive support, even though a two-thirds vote of approval was needed in the Senate. A barrage of telegrams flooded Congress, inspired by attacks made by Charles Coughlin and others. The treaty failed by seven votes on January 29, 1935. The United States finally accepted the Court's jurisdiction on 28 December 1935, but the treaty was never ratified, and the U.S. never joined.
Francis Boyle attributes the failure to a strong isolationist element in the US Senate, arguing that the ineffectiveness shown by US nonparticipation in the Court and other international institutions could be linked to the start of the Second World War.

Growing international tension and dissolution of the court

1933 was a busy year for the court, which cleared its 20th case (and "greatest triumph"): the Eastern Greenland Case. This period was marked by growing international tension, however, with Japan and Germany announcing their withdrawal from the League of Nations, to come into effect in 1935. That did not directly affect the Court, since the protocol accepting Court jurisdiction was separately ratified, but it influenced whether a nation would be willing to bring a case before it, as evidenced by Germany's withdrawal from two pending cases. 1934, the Court's 13th year, "has been in keeping with the traditions associated with that number", with few cases since the world's governments were more concerned with the growing international tension. The Court's business continued to be small in 1935, 1936, 1937, 1938 and 1939, although 1937 was marked by Monaco's acceptance of the Court protocol. The Court's judicial output in 1940 consisted entirely of a set of orders, completed in a meeting between 19 and 26 February, caused by the international situation, which left the Court with "uncertain prospects for the future". Following the German invasion of the Netherlands, the Court was unable to meet, although the Registrar and President were afforded full diplomatic immunity. Informed that the situation would not be tolerated after diplomatic missions from other nations left The Hague on 16 July, the President and Registrar left the Netherlands and moved to Switzerland, accompanied by their staff. The Court was unable to meet between 1941 and 1944, but the framework remained intact, and it soon became apparent that the Court would be dissolved.
In 1943, an international panel met to consider "the question of the Permanent Court of International Justice", meeting from 20 March 1943 to 10 February 1944. The panel agreed that the name and functioning of the Court should be preserved, but for some future court rather than a continuation of the current one. Between 21 August and 7 October 1944, the Dumbarton Oaks Conference was held, which, among other things, created an international court attached to the United Nations to succeed the Permanent Court of International Justice. As a result of these conferences and others, the judges of the Permanent Court of International Justice officially resigned in October 1945 and, via a resolution by the League of Nations on 18 April 1946, the Court and the League both ceased to exist, being replaced by the International Court of Justice and the United Nations.

Organisation

Judges

The Court initially consisted of 11 judges and 4 deputy judges, recommended by member states of the League of Nations to the Secretary General of the League of Nations, who would put them before the Council and Assembly for election. The Council and Assembly were to bear in mind that the elected panel of judges was to represent every major legal tradition in the League, along with "every major civilisation". Each member state was allowed to recommend 4 potential judges, with a maximum of 2 from its own nation. Judges were elected by a straight majority vote, held independently in the Council and Assembly. The judges served for a period of nine years, with their terms all expiring at the same time, necessitating a completely new set of elections. The judges were independent and rid themselves of their nationality for the purposes of hearing cases, owing allegiance to no individual member state, but it was forbidden to have more than one judge from the same state. As a sign of their independence from national ties, judges were given full diplomatic immunity when engaged in Court business.
The only requirements for judges were "high moral character" and "the qualifications required in their respective countries [for] the highest judicial offices" or to be "jurisconsults of recognized competence in international law". The first panel was elected on 14 September 1921, with the 4 deputies being elected on the 16th. On the first vote, Rafael Altamira y Crevea of Spain, Dionisio Anzilotti of Italy, Bernard Loder of the Netherlands, Ruy Barbosa of Brazil, Yorozu Oda of Japan, André Weiss of France, Antonio Sánchez de Bustamante y Sirven of Cuba and Lord Finlay of
Conversely, extremely thin jewelry can cause the same tearing in what is commonly referred to as the "cheese cutter effect", either during sudden torsion or over a long period of wearing, especially if the thin jewelry bears any weight.

Jewelry

Prince Albert piercings are typically pierced at either 12 or 10g (2 or 2.5mm). They are often (gradually) stretched soon after, with jewelry within the 8g to 2g (3mm to 6.5mm) range being the most popular. One of the reasons not to perform the initial piercing at a small diameter (16g or 14g) or otherwise to immediately stretch it to 10g or 8g using a taper is to prevent the 'cheese-cutter effect', although personal preference and individual anatomy also play a role in these decisions. Further stretching to sizes 0 or 00g (8 or 9mm) and larger is not uncommon. If a sufficiently heavy barbell or ring is worn continuously, a mild form of 'auto-stretching' can be observed. This means that stretching to a larger gauge is easier and might not require a taper. While most wearers find that PAs are comfortable to wear and rarely remove them, even during sex, some individuals have found that extremely large or heavy jewelry is uncomfortable to wear for long periods or interferes with the sexual functioning of the penis. Jewelry suitably worn in a Prince Albert piercing includes the circular barbell, curved barbell, captive bead, segment ring, and the prince's wand. Curved barbells used for PA piercings are worn such that one ball sits on the lower side of the penis and the other ball sits at the urethral opening. This type of jewelry prevents discomfort that can come from larger jewelry moving around during daily wear.

History and culture

The origin of this piercing is unknown. Many theories suggest that the piercing was used to secure the penis in some manner, rather than having a sexual or cultural purpose. Genital piercings appeared in the Kama Sutra as a way of enhancing sexual pleasure. In modern times, the Prince Albert piercing was popularized by Jim Ward in the early 1970s.
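The gauge-to-millimetre pairs quoted above are round approximations. As a small illustrative sketch (the table and function names are hypothetical, and the values are only the figures given in this text, not a complete jewelry gauge chart), they can be kept as a simple lookup:

```python
# Approximate diameters quoted in the text for common PA jewelry gauge sizes.
GAUGE_MM = {
    "12g": 2.0,   # typical initial piercing size
    "10g": 2.5,   # typical initial piercing size
    "8g": 3.0,    # lower end of the most popular range
    "2g": 6.5,    # upper end of the most popular range
    "0g": 8.0,    # common further-stretching size
    "00g": 9.0,   # common further-stretching size
}

def gauge_to_mm(gauge: str) -> float:
    """Return the approximate diameter in millimetres for a gauge size,
    raising ValueError for sizes the text gives no figure for."""
    if gauge not in GAUGE_MM:
        raise ValueError(f"no diameter quoted for gauge {gauge!r}")
    return GAUGE_MM[gauge]
```

Note that the gauge scale runs backwards: a numerically smaller gauge (or more zeros) means a thicker piece of jewelry, which is why "stretching" moves from 12g or 10g down toward 0g and 00g.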
In West Hollywood, Ward met Richard Simonton (aka Doug Malloy) and Fakir Musafar. Together, these men further developed the Prince | the top of the glans. While some piercers may choose to avoid the nerve bundle that runs along the center of the frenulum altogether, others may choose otherwise. The piercing can be centred if the bearer is circumcised. Otherwise, the piercing must be done off-centre so that the surrounding skin can reposition itself. Procedure The piercer usually starts by pushing a metal or glass tube down the urethra, or using their fingers to hold the urethra open. The piercer then slides the needle into the frenulum and goes up the tube, using pliers to bend the ring into shape. Healing and potential side effects The Prince Albert healing time can take from 4 weeks to 6 months. A fresh PA piercing may cause bleeding, swelling and inflammation. In rare cases, it can lead to local infections. Some men find that the dribble caused by the PA when urinating necessitates sitting down to urinate. With practice, some men can control the stream while standing. Some PA wearers report it enhances sexual pleasure for both partners. However, others penetrated by males with this piercing report discomfort. PA rings can cause additional discomfort to female partners in cases when the penis comes in contact with the cervix. Sexual partners of those with piercings may experience complications during oral sex such as chipped teeth, choking, foreign bodies getting stuck between the partner's teeth, and mucosal injury to receptive partners. As with many piercings, there is risk of the jewelry becoming caught on clothing and being pulled or torn out. Very large gauge or heavy jewelry can cause thinning of the tissue between the urethral opening and the healed fistula resulting in an accidental tearing or other complications with sexual experiences. 
Conversely, extremely thin jewelry can cause the same tearing in what is commonly referred to as the "cheese cutter effect", either during sudden torsion or over a long period of wearing, especially if the thin jewelry bears any weight. Jewelry Prince Albert piercings are typically pierced at either 12 or 10g (2 or 2.5mm). They are often (gradually) stretched soon after, with jewelry within the 8g to 2g (3mm to 6.5mm) range being the most popular. One of the reasons not to perform the initial piercing at a small diameter (16g or 14g) or otherwise to immediately stretch it to 10g or 8g using a taper is to prevent the 'cheese-cutter effect', although personal preference |
by Frederick Loewe. The story centers on a miner and his daughter and follows the lives and loves of the people in a mining camp in Gold Rush-era California. Popular songs from the show included "Wand'rin' Star", "I Talk to the Trees" and "They Call the Wind Maria". The musical ran on Broadway in 1951 and in the West End in 1953. In 1969 the film version, also titled Paint Your Wagon, was released; it had a highly revised plot and some new songs composed by Lerner and André Previn.

Synopsis
Act I
In the California wilderness in May 1853, a crusty old miner, Ben Rumson, is conducting a makeshift funeral for a friend when his 16-year-old daughter Jennifer discovers gold dust. Ben claims the land, and prospectors start flocking to the brand-new town of Rumson ("I'm On My Way"). Two months later Rumson has a population of 400, all of whom are men except for Jennifer. Prospector Jake Whippany is waiting to save enough money to send for Cherry and her Fandango girls ("Rumson"), while Jennifer senses the tension building in town ("What's Goin' On Here?"). Julio Valveras, a handsome young miner forced to live and work outside of town because he is Mexican, comes to town with dirty laundry and runs into Jennifer, who volunteers to do his laundry, and the two talk ("I Talk to the Trees"). Steve Bulmarck and the other men ponder the lonely nomadic life they lead ("They Call the Wind Maria"). Two months later the men want Ben to send Jennifer away, and he wishes her mother were still alive to help him ("I Still See Elisa"). Jennifer is in love with Julio ("How Can I Wait?"), and when Ben sees Jennifer dancing with Julio's clothes, he decides to send her East on the next stage. Jacob Woodling, a Mormon with two wives, Sarah and Elizabeth, arrives in Rumson, where the men demand Jacob sell one of his wives. To his surprise, Ben finds himself wooing Elizabeth ("In Between") and wins her for $800 ("Whoop-Ti-Ay"). Jennifer is disgusted by her father's actions and runs away, telling Julio that she will be reunited with him in a year's time ("Carino Mio"). Cherry and her Fandango girls arrive ("There's a Coach Comin' In"). Julio learns his claim is running dry, which means he must move on to make a living and will not be there to greet Jennifer when she returns.

Act II
A year later, in October, the miners celebrate the high times in Rumson now that the Fandango girls are around ("Hand Me Down That Can o' Beans"). Edgar Crocker, a miner who has saved his money, falls for Elizabeth and she responds, although Ben does not notice since he thinks Raymond Janney is in love with her (he is). Another miner, Mike Mooney, tells Julio about a lake that has gold dust on the bottom, and Julio considers looking for it ("Another Autumn"). Jennifer returns in December, having learned civilized ways back East ("All for Him"). Ben tells his daughter that he will soon be moving on, since he was not meant to stay in one place for long ("Wand'rin' Star"). The next day, as Cherry and the girls are packing to leave, they tell Jennifer about Julio leaving to find the lake with the golden bottom. Raymond Janney offers to buy Elizabeth from Ben for $3,000, but she runs off with Edgar Crocker. Word comes of another strike 40 miles south of Rumson, and the rest of the town packs up to leave except for Jennifer, who is waiting for Julio to return, and Ben, who suddenly realizes that Rumson is indeed his town. Late in April, Julio appears, a broken man. Ben welcomes him, and Julio is amazed to see that Jennifer is there. As they move toward each other, the wagons filled with people move on.

Songs
Act 1
I'm On My Way - Steve Bullnack, Jake Whippany, Mike Mooney, Lee Zen, Sing Yuy, Sandy Twist, Edgar Crocker, Reuben Sloane and Miners
Rumson - Jake
What's Goin' On Here? - Jennifer Rumson
I Talk to the Trees - Julio Valveras and Jennifer
They Call the Wind Maria - Steve, Miners and Dancer
I Still See Elisa - Ben Rumson
How Can I Wait? - Jennifer
Trio - Elizabeth Woodling, Sarah Woodling and Jacob Woodling
Rumson (Reprise) - Jake
In Between - Ben
Whoop-Ti-Ay! - Ben, Elizabeth and Miners
How Can I Wait? (Reprise) - Jennifer and Julio

Act 2
Hand Me Down That Can O'Beans - Jake and Miners
Rope Dance - Fandangos, Pete Billings and Singer
Can-can - Suzanne Duval, Rocky, Fandangos and Miners
Another Autumn - Julio Valveras, Dancer and Pete Billings
Movin' - Miners
I'm On My Way (Reprise) - Miners
All For Him - Jennifer
Wand'rin' Star - Ben
I Talk to the Trees (Reprise) - Jennifer
Strike! - Steve, Jasper and Jake
(I Was Born Under a) Wand'rin' Star (Reprise) - Jake, Steve, Sandy and Miners

Productions
The musical had a pre-Broadway try-out at the Shubert Theater in Philadelphia, opening on September 17, 1951. It opened on Broadway at the Shubert Theatre on November 12, 1951, and closed on July 19, 1952, after 289 performances. The production was directed by Daniel Mann, with set design by Oliver Smith, costume design by Motley, lighting design by Peggy Clark, music for dances arranged by Trude Rittmann, and dances and musical ensembles by Agnes de Mille set to the orchestrations of Ted Royal. It starred James Barton (Ben Rumson), Olga San Juan (Jennifer Rumson), Tony Bavaar (Julio Valveras), Gemze de Lappe (Yvonne Sorel), James Mitchell (Pete Billings), Kay Medford (Cherry), and Marijane Maricle (Elizabeth Woodling). Burl Ives and Eddie Dowling later took over the role of Ben Rumson. De Mille later restaged the dances as a stand-alone ballet, Gold Rush. The West End production opened on February 11, 1953.
President's letter, and which will be further explained by the undersigned on the first fitting occasion." In addition to playing on the musical term "overture" and the geographical reference to the Pacific Ocean, there is also the irony, revealed as the story unfolds, that these "pacific overtures" to initiate commercial exploitation of the Pacific nation were backed by a none-too-subtle threat of force. Productions Pacific Overtures previewed in Boston and ran at The Kennedy Center for a month before opening on Broadway at the Winter Garden Theatre on January 11, 1976. It closed after 193 performances on June 27, 1976. The production was directed by Harold Prince, with choreography by Patricia Birch, scenic design by Boris Aronson, costume design by Florence Klotz, and lighting design by Tharon Musser. The original cast recording was released by RCA Records and later reissued on CD. The production was nominated for 10 Tony Awards, winning Best Scenic Design (Boris Aronson) and Best Costume Design (Florence Klotz). The original Broadway production was filmed and broadcast on Japanese television in 1976. An off-Broadway production ran at the Promenade Theatre from October 25, 1984 for 109 performances, transferring from an earlier production at the York Theatre Company. Directed by Fran Soeder with choreography by Janet Watson, the cast featured Ernest Abuba and Kevin Gray. The European premiere was directed by Howard Lloyd-Lewis (Library Theatre, Manchester) at Wythenshawe Forum in 1986, with choreography by Paul Kerryson, who subsequently directed the show in 1993 at Leicester Haymarket Theatre. Both productions featured Mitch Sebastian in the role of Commodore Perry. A production was mounted in London by the English National Opera in 1987. The production was recorded in its entirety on CD, preserving nearly the entire libretto as well as the score. Unlike previous productions, this production featured a cast consisting primarily of Caucasian actors and opera singers.
A critically acclaimed 2001 Chicago Shakespeare Theater production, directed by Gary Griffin, transferred to the West End Donmar Warehouse, where it ran from June 30, 2003 until September 6, 2003 and received the 2004 Olivier Award for Outstanding Musical Production. In 2002 the New National Theatre of Tokyo presented two limited engagements of their production, which was performed in Japanese with English supertitles. The production ran at Avery Fisher Hall, Lincoln Center from July 9, 2002 through July 13, and then at the Eisenhower Theater, Kennedy Center, from September 3, 2002, through September 8. A Broadway revival by the Roundabout Theatre Company (an English-language mounting of the previous New National Theatre of Tokyo production) ran at Studio 54 from December 2, 2004, to January 30, 2005, directed by Amon Miyamoto and starring BD Wong as the Narrator and several members of the original cast. A new Broadway recording, with new (reduced) orchestrations by Jonathan Tunick was released by PS Classics, with additional material not included on the original cast album. The production was nominated for four Tony Awards, including Best Revival of a Musical. The orchestrations were "scaled back" for a 7-piece orchestra. Variety noted that "the heavy use of traditional lutes and percussion instruments like wood blocks, chimes and drums showcases the craftsmanship behind this distinctly Japanese-flavored score." In 2017, Classic Stage Company revived Pacific Overtures for a limited run Off-Broadway, with a new abridged book by John Weidman and new orchestrations by Jonathan Tunick. This production was directed by current Artistic Director John Doyle and starred George Takei as the Reciter. It began previews on April 6, 2017 and opened on May 4, 2017. Originally scheduled to close on May 27, it was extended twice, and closed on June 18, 2017. 
This production was a New York Times Critic's Pick, one of Variety's 2017 Top 5 NY Theater Productions, and one of the Hollywood Reporter's 2017 Top 10 NY Theater Productions. It also received numerous nominations from the Drama Desk, Drama League, Outer Critics Circle, and Lucille Lortel Awards. This version ran as a 90-minute one-act with a 10-member cast in modern dress; it included all the songs from the original production except "Chrysanthemum Tea", and eliminated the instrumental/dance number "Lion Dance". Plot summary Act I Conceived as a Japanese playwright's version of an American musical about American influences on Japan, Pacific Overtures opens in July 1853. Since the foreigners were expelled from the island empire, explains the Reciter, elsewhere wars are fought and machines are rumbling, but in Nippon they plant rice, exchange bows and enjoy peace and serenity, and there has been nothing to threaten the changeless cycle of their days ("The Advantages of Floating in the Middle of the Sea"). But President Millard Fillmore, determined to open up trade with Japan, has sent Commodore Matthew C. Perry across the Pacific. To the consternation of Lord Abe and the Shogun's other Councillors, the stirrings of trouble begin with the appearance of Manjiro, a fisherman who had been lost at sea and rescued by Americans. He has returned to Japan and now attempts to warn the authorities of the approaching warships, but is instead arrested for consorting with foreigners. A minor samurai, Kayama Yezaemon, is appointed Prefect of Police at Uraga to drive the Americans away - news which leaves his wife Tamate grief-stricken, since Kayama will certainly fail and both will then have to commit seppuku. As he leaves, she expresses her feelings in dance as two Observers describe the scene and sing her thoughts and words ("There Is No Other Way").
As a Fisherman, a Thief, and other locals relate the sight of the "Four Black Dragons" roaring through the sea, an extravagant Oriental caricature of the USS Powhatan pulls into harbor. Kayama is sent to meet with the Americans but is laughed at and rejected as not being important enough. He enlists the aid of Manjiro, the only man in Japan who has dealt with Americans, and, disguised as a great lord, Manjiro is able to get an answer from them: Commodore Perry must meet the Shogun within six days or else he will shell the city. Facing this ultimatum, the Shogun refuses to commit himself to an answer and takes to his bed. Exasperated by his indecision and procrastination, his Mother, with elaborate courtesy, poisons him ("Chrysanthemum Tea"). Kayama devises a plan by which the Americans can be received without technically setting foot on Japanese soil, thanks to a covering of tatami mats and a raised Treaty House, for which he is made Governor of Uraga. He and Manjiro set off for Uraga, forging a bond of friendship through the exchange of "Poems". Kayama has saved Japan, but it is too late to save Tamate: when Kayama arrives at his home, he finds that she is dead, having committed seppuku after receiving no news of Kayama for many days. Already events are moving beyond the control of the old order: the two men pass a Madam instructing her inexperienced Oiran girls in the art of seduction as they prepare for the arrival of the foreign devils ("Welcome to Kanagawa"). Commodore Perry and his men disembark and, on their "March to the Treaty House", demonstrate their goodwill by offering such gifts as two bags of Irish potatoes and a copy of Owen's "Geology of Minnesota".
The negotiations themselves are observed through the memories of three who were there: a warrior hidden beneath the floor of the Treaty House who could hear the debates, a young boy who could see the action from his perch in the tree outside, and the boy as an old man recalling that without "Someone In a Tree", a silent watcher, history is incomplete. Initially, it seems as if Kayama has won; the Americans depart in peace. But the barbarian figure of Commodore Perry leaps out to perform a traditional Kabuki "Lion Dance", which ends as a strutting, triumphalist, all-American cakewalk. Act II The child emperor (portrayed by a puppet manipulated by his advisors) reacts with pleasure to the departure of the Americans, promoting Lord Abe to Shogun, confirming Kayama as Governor of Uraga and raising Manjiro to the rank of Samurai. The crisis appears to have passed, but to the displeasure of Lord Abe, the Americans return to request formal trading arrangements. To the tune of a Sousa march, an American ambassador bids "Please Hello" to Japan and is followed by a Gilbertian British ambassador, a clog-dancing Dutchman, a gloomy Russian and a dandified Frenchman, all vying for access to Japan's markets. With the appearance of this new group of Westerners, the faction of the Lords of the South grows restless. They send a politically charged gift to the Emperor: a storyteller who tells a vivid, allegorical tale of a brave young emperor who frees himself from his cowardly Shogun. Fifteen years pass as Kayama and Manjiro dress themselves for tea. As Manjiro continues to dress in traditional robes for the tea ceremony, Kayama gradually adopts the manners, culture and dress of the newcomers, proudly displaying a new pocket watch, cutaway coat and "A Bowler Hat". Although Kayama, as stated in his reports to the Shogun, manages to reach an "understanding" with the Western merchants and diplomats, tensions abound between the Japanese and the "barbarians".
Three British sailors on shore leave mistake the daughter of a samurai for a geisha ("Pretty Lady"). Though their approach is initially gentle, they grow more persistent to the point where they offer her money; the girl cries for help and her father kills one of the confused sailors. Kayama and Abe travel to the Emperor's court discussing the situation. While on the road, their party is attacked by cloaked assassins sent by the Lords of the South and Abe is assassinated. Kayama is horrified to discover that one of the assassins is his former friend, Manjiro; they fight and Kayama is killed. In the ensuing turmoil, the puppet Emperor seizes real power and vows that Japan will modernize itself. As the country moves from one innovation to the "Next!", the Imperial robes are removed layer by layer to show the Reciter in modern dress. Contemporary Japan - the country of Toyota, Seiko, air and water pollution and market domination - assembles itself around him and its accomplishments are extolled. "Nippon. The Floating Kingdom. There was a time when foreigners were not welcome here. But that was long ago..." says the Reciter. "Welcome to Japan."
Original Broadway cast — characters Mako — Reciter, Shogun, Jonathan Goble, Emperor Meiji Soon-Tek Oh — Tamate, Samurai, Storyteller, Swordsman Isao Sato — Kayama Yuki Shimoda — Lord Abe Sab Shimono — Manjiro Ernest Abuba — Samurai, Adams, Noble James Dybas — Councillor, Old Man, French Admiral Timm Fujii — Son, Priest, Kanagawa Girl, Noble, British Sailor Haruki Fujimoto — Servant, Commodore Matthew Calbraith Perry Larry Hama — Williams, Lord of the South, Gangster Ernest Harada — Physician, Madam, British Admiral Alvin Ing — Shogun's Mother, Observer, Merchant, American Admiral Patrick Kinser-Lau — Shogun's Companion, Kanagawa Girl, Dutch Admiral, British Sailor Jae Woo Lee — Fisherman, Sumo Wrestler, Lord of the South Freddy Mao — Councillor, Samurai's Daughter Tom Matsusaka — Imperial Priest Freda Foh Shen — Shogun's Wife Mark Hsu Syers — Samurai, Thief, Soothsayer, Warrior, Russian Admiral, British Sailor Ricardo Tobia — Observer Gedde Watanabe — Priest, Kanagawa Girl, The Boy Conrad Yama — Grandmother, Sumo Wrestler, Japanese Merchant Fusako Yoshida — Shamisen accompaniment Proscenium Servants, Sailors and Townspeople: Kenneth S. Eiland, Timm Fujii, Joey Ginza, Patrick Kinser-Lau, Diane Lam, Tony Marinyo, Kevin Maung, Kim Miyori, Dingo Secretario, Freda Foh Shen, Mark Hsu Syers, Gedde Watanabe, Leslie Watanabe, Ricardo Tobia 2004 Broadway revival cast — characters BD Wong - Reciter Evan D'Angeles - Observer, Warrior, Officer, British Admiral Joseph Anthony Foronda - Thief, Soothsayer, Samurai, Storyteller Yoko Fumoto - Tamate Alvin Ing - Shogun's Mother, Old Man Fred Isozaki - Noble Francis Jue - Madam, Dutch Admiral Darren Lee - American Admiral, Sailor, Officer Hoon Lee - Sailor, Merchant, Commodore Matthew Calbraith Perry, Lord of the South Michael K.
Lee - Kayama Ming Lee - Councilor, Priest, Emperor Priest Telly Leung - Boy, Observer, Sailor, Shogun's Companion, Noble Paolo Montalban - Manjiro Alan Muraoka - Councilor, Grandmother (Muraoka also understudied the Dutch Admiral and performs the role in the 2004 cast recording) Mayumi Omagari - Kanagawa Girl, Daughter Daniel Jay Park - Priest, Kanagawa Girl, French Admiral Hazel Anne Raymundo - Shogun's Wife, Kanagawa Girl Sab Shimono - Lord Abe Yuka Takara - Son, Shogun's Wife's Servant, Kanagawa Girl Scott Watanabe - Fisherman, Russian Admiral, Older Swordsman, Physician, Samurai Bodyguard 2017 Off-Broadway revival cast — characters George Takei - Reciter Karl Josef Co - Fisherman, American Admiral, First Sailor Steven Eng - Kayama Megan Masako Haley - Tamate Ann Harada - Madam, French Admiral Austin Ku - Boy, British Admiral, Third Sailor Kelvin Moon Loh - Warrior, Russian Admiral, Second Sailor Orville Mendoza - Manjiro Marc Oka - Thief, Dutch Admiral Thom Sesma - Lord Abe, Old Man Musical numbers Act One Prologue — Orchestra The Advantages of Floating in the Middle of the Sea — Reciter and Company There Is No Other Way — Tamate, Observers Four Black Dragons — Fisherman, Thief, Reciter, Townspeople Chrysanthemum Tea — Shogun, Shogun's Mother, Shogun's Wife, Soothsayer, Priests, Shogun's Companion, Physician, Sumo Wrestlers Poems — Kayama, Manjiro Welcome to Kanagawa — Madam and Girls March to the Treaty House — Orchestra Someone in a Tree — Old Man, Reciter, Boy, Warrior Lion Dance — Commodore Perry Act Two Please Hello — Abe, Reciter, American, British, Dutch, Russian and French Admirals A Bowler Hat — Kayama Pretty Lady — Three British Sailors Next — Reciter and Company Critical response and analysis "Someone in a Tree", where two witnesses describe negotiations between the Japanese and Americans, is Sondheim's favorite song out of everything he has written. 
"A Bowler Hat" presents the show's theme, as a samurai gradually becomes more Westernized, progressively adopting the habits and affectations of the foreigners he is meant to supervise. "Pretty Lady" is a contrapuntal trio for three British sailors who have mistaken a young girl for a geisha and are attempting to woo her. This is perhaps the musical-fusion highlight of the show: the orchestra plays descending parallel 4ths and the singers use a counterpoint form established during the Western Renaissance; again the chord progression is often IV to I, again eschewing pentatonics. The New York Times review of the original 1976 production said "The lyrics are totally Western and—as is the custom with Mr. Sondheim—devilish, wittily and delightfully clever. Mr. Sondheim is the most remarkable man in the Broadway musical today—and here he shows it victoriously...Mr. Prince's staging uses all the familiar Kabuki tricks—often with voices screeching in the air like lonely sea birds—and stylizations with screens and things, and stagehands all masked in black to make them invisible to the audience. Like choreography, the direction is designed to meld Kabuki with Western forms...the attempt is so bold and the achievement so fascinating, that its obvious faults demand to be overlooked. It tries to soar—sometimes it only floats, sometimes it actually sinks—but it tries to soar. And the music and lyrics are as pretty and as well-formed as a bonsai tree. "Pacific Overtures" is very, very different." Walter Kerr's article in The New York Times on the original 1976 production said "But no amount of performing, or of incidental charm, can salvage 'Pacific Overtures.' The occasion is essentially dull and immobile because we are never properly placed in it, drawn neither East nor West, given no specific emotional or cultural bearings." Ruth Mitchell, assistant to Mr.
Prince, said in an interview with WPIX that a sense of not belonging was intentional as that was the very point of the show. Frank Rich, reviewing the 1984 revival for The New York Times, stated that "the show attempts an ironic marriage of Broadway and Oriental idioms in its staging, its storytelling techniques and, most of all, in its haunting Stephen Sondheim songs. It's a shotgun marriage, to be sure - with results that are variously sophisticated and simplistic, beautiful and vulgar. But if Pacific Overtures is never going to be anyone's favorite Sondheim musical, it is a far more forceful and enjoyable evening at the Promenade than it was eight years ago at the Winter Garden...Many of the songs are brilliant, self-contained playlets. In Four Black Dragons various peasants describe the arrival of the American ships with escalating panic, until finally the nightmarish event does seem to be, as claimed, the end of the world....Someone in a Tree, is a compact Rashomon - and as fine as anything Mr. Sondheim has written...The single Act II triumph, Bowler Hat, could well be a V. S. Naipaul tale set to music and illustrated with spare Japanese brushstrokes...Bowler Hat delivers the point of Pacific Overtures so artfully that the rest of Act II seems superfluous." The 2004 production was not as well received. It was based on a critically praised Japanese language production by director Amon Miyamoto. Ben Brantley, reviewing for The New York Times wrote: "Now Mr. Miyamoto and
pathogenic fear with a paradoxical wish. Furthermore, by learning to appreciate the humour in their exaggerated responses, individuals observe the non-catastrophic consequences of their fear-inducing stimuli first-hand, accepting the unlikelihood of the feared anxiety-producing outcome occurring. Paradoxical intention is mainly employed to combat discomfort associated with internal causes, while fear of external stimuli can still be treated through conventional treatments such as systematic desensitisation, cognitive behavioural therapy, etc. For example, if a patient has a fear of public speaking, paradoxical intention would be employed only if the feelings of apprehension stem from an internal source, e.g. a fear that an increased heart rate will lead to a heart attack, and not from external factors such as the size of the crowd or its judgement. In this case, the therapist would instruct the individual to speak in public while focusing on the most salient aspect of their fear – in this case, trying to increase their heart rate.

For phobic and obsessive compulsions

For insomnia
Paradoxical intention has been shown to be an effective therapy in the treatment of chronic insomnia. It attempts to eradicate the anxiety associated with the inability to sleep by instructing patients to do the opposite and attempt to stay awake. By asking patients to keep their eyes open while lying comfortably in a dark room without sleeping, they are taught to understand the non-disastrous implications of staying awake, and thus the anxiety associated with it diminishes. In this manner, by eliminating voluntary sleep effort, paradoxical intention minimises sleep performance anxiety, promoting rapid sleep onset. Similarly, it is also suggested that diverting attention from sleep performance allows for cognitive de-arousal, leading to relaxation and sleep.
A study investigating the effects of paradoxical intention on sleep effort, sleep anxiety, and objective and subjective sleep showed that, relative to control conditions, participants allocated to PI displayed noteworthy reductions in sleep effort and sleep performance anxiety. It has also been found that subjectively measured sleep onset latency (SOL, the time taken to fall asleep) is significantly lower in PI conditions, with SOL change amongst PI participants being strongly associated with sleep effort change. This shows that sleep effort and sleep anxiety are integral mechanisms overridden by PI to achieve normal sleep functions. A 1984 study analysing cases of paradoxical intention as a treatment showed that PI rapidly reduced SOLs and was also successful at maintaining sleep onset and maximising total sleep time. A 2021 meta-analysis conducted a systematic review of randomised control trials and experimental studies comparing PI for insomnia to passive and active comparators.
Results showed that, relative to passive comparators, PI produced substantial improvements in several key insomnia symptoms, with moderate improvements relative to active comparators. It also promoted decreased sleep-related performance anxiety. Additionally, a 2018 meta-analysis contrasted cognitive and behavioural interventions with passive comparators; compared with those results, the effects of PI on SOL are larger.

Recursive anxiety
Research has also established links between the effectiveness of paradoxical intention as a treatment and recursive anxiety. Patients whose phobias originate from recursive anxiety have shown greater improvement with PI-related treatments. This occurs because paradoxical intention overcomes performance anxiety and facilitates natural sleep, unlike situations where external factors (e.g. noise, temperature) affect sleeping ability. Recursive anxiety is also a result of the anticipatory fear that anxiety itself will cause a loss of self-control, leading to public embarrassment and judgement. Recursive anxiety therefore leads individuals to attempt to control their cognitive environment, adjusting thoughts and behaviour to minimize stimuli that inhibit calmness.

Dereflection
Dereflection refers to diverting the client's attention away from their symptoms. It has been developed for people suffering from sexual disorders, in which the patient's desire for sexual pleasure becomes an obstruction to achieving it. The therapist discourages intercourse
present-day New York City.
New Netherland
Stuyvesant had to wait for his appointment to be confirmed by the Dutch States-General. During that time he married Judith Bayard, the daughter of a Huguenot minister, who hailed from Breda. Together, they left Amsterdam in December 1646 and, after stopping at Curaçao, arrived in New Amsterdam by May 1647. Kieft's administration had left the colony in terrible condition. Only a small number of villages remained after Kieft's wars, and many of the inhabitants had been driven away or had returned home, leaving only 250 to 300 men able to carry arms. Kieft himself had accumulated over 4,000 guilders during his term in office, and had become an alcoholic. Certain that putting New Netherland to rights was the work for which God had saved him, Stuyvesant began the task of rebuilding the physical and moral state of the colony, returning it to being the kind of well-run place that the Dutch preferred. He told the people "I shall govern you as a father his children." In September 1647, Stuyvesant appointed an advisory council of nine men as representatives of the colonists. In 1648, a conflict started between him and Brant Aertzsz van Slechtenhorst, the commissary of the patroonship Rensselaerwijck, which surrounded Fort Orange (present-day Albany). Stuyvesant claimed he had power over Rensselaerwijck, despite special privileges granted to Kiliaen van Rensselaer in the patroonship regulations of 1629. When Van Slechtenhorst refused, Stuyvesant sent a group of soldiers to enforce his orders. The controversy that followed resulted in the founding of the new settlement of Beverwijck.
External threats
The colony of New Netherland had severe external problems. The population was too small and contentious, and the Company provided little military support; in these conflicts, Stuyvesant was usually the loser. The most serious threat was the economic rivalry with England over trade.
Secondarily, there were small-scale military conflicts with neighboring Indian tribes, involving fights between mobile bands on the one hand and scattered small Dutch outposts on the other. With a large area and a limited population, defense was a major challenge. Stuyvesant's greatest success came in dealing with the nearby Swedish colonies, which he defeated and annexed in 1655. Relations with the English colony of Connecticut were strained, with disputes over ownership of land in the Connecticut Valley and in eastern Long Island. The Treaty of Hartford of 1650 was advantageous to the English, as Stuyvesant gave up claims to the Connecticut Valley while gaining only a small portion of Long Island. In any case, Connecticut settlers ignored the treaty and steadily poured into the Hudson Valley, where they agitated against Stuyvesant. In 1664, England moved to take over New Netherland. The Dutch colonists refused to fight, forcing Stuyvesant's surrender and demonstrating the combined dilemma of domestic dissatisfaction, small size, and overwhelming external pressure, with inadequate military support from a Company fixated on profits.
Expansion of the colony
Stuyvesant became involved in a dispute with Theophilus Eaton, the governor of the English New Haven Colony, over the border of the two colonies. In September 1650, a meeting of the commissioners on boundaries took place in Hartford, Connecticut, resulting in the Treaty of Hartford, which settled the border between New Amsterdam and the English colonies to the north and east. The border was arranged to the dissatisfaction of the Nine Men, who declared that "the governor had ceded away enough territory to found fifty colonies each fifty miles square." Stuyvesant then threatened to dissolve the council. A new plan of municipal government was arranged in the Netherlands, and the name "New Amsterdam" was officially declared on 2 February 1653. Stuyvesant made a speech for the occasion, saying that his authority would remain undiminished.
Stuyvesant was then ordered to the Netherlands, but the order was soon revoked under pressure from the States of Holland and the city of Amsterdam. Stuyvesant prepared against an attack by ordering the citizens to dig a ditch from the North River to the East River and to erect a fortification. In 1653, a convention of two deputies from each village in New Netherland demanded reforms, and Stuyvesant commanded that assembly to disperse, saying: "We derive our authority from God and the company, not from a few ignorant subjects." In the summer of 1655, he sailed down the Delaware River with a fleet of seven vessels and about 700 men and took possession of the colony of New Sweden, which was renamed "New Amstel". In his absence, Pavonia was attacked by Native Americans during the "Peach War" on 15 September 1655. In 1657, the directors of the Dutch West India Company wrote to Stuyvesant to tell him that they could not send him all the tradesmen he had requested, and that he would have to purchase slaves in addition to the tradesmen he would receive. During the colonial era, New York City became both a site from which fugitives fled bondage and a destination for runaways. The colonies closest to New Netherland, Connecticut and Maryland, encouraged Dutch slaves to escape and refused to return them. In 1650, Governor Petrus Stuyvesant threatened to offer freedom to Maryland slaves unless that colony stopped sheltering runaways from the Dutch outpost. However, he is also noted as having trafficked minority settlers at auction. In 1660, Stuyvesant was quoted as saying that "Nothing is of greater importance than the early instruction of youth." In 1661, New Amsterdam had one grammar school, two free elementary schools, and had licensed 28 schoolmasters.
Religious freedom
Stuyvesant did not tolerate full religious freedom in the colony, and was strongly committed to the supremacy of the Dutch Reformed Church.
In 1657 he refused to allow Lutherans the right to organize a church. When he also issued an ordinance forbidding them from worshiping in their own homes, the directors of the Dutch West India Company, three of whom were Lutherans, told him to rescind the order and allow private gatherings of Lutherans. The Company's position was that more tolerance led to more trade and more profit. Freedom of religion was further tested when Stuyvesant refused to allow Jewish refugees from Dutch Brazil, who lacked passports, to settle permanently in New Amsterdam and join the handful of existing Jewish traders, who held passports from Amsterdam. Stuyvesant attempted to have the Jews "in a friendly way to depart" the colony. As he wrote to the Amsterdam Chamber of the Dutch West India Company in 1654, he hoped that "the deceitful race, — such hateful enemies and blasphemers of the name of Christ, — be not allowed to further infect and trouble this new colony." He referred to Jews as a "repugnant race" and "usurers", and was concerned that "Jewish settlers should not be granted the same liberties enjoyed by Jews in Holland, lest members of other persecuted minority groups, such as Roman Catholics, be attracted to the colony." Stuyvesant's decision was again rescinded after pressure from the directors of the company. As a result, Jewish immigrants were allowed to stay in the colony as long as their community was self-supporting; however, Stuyvesant and the company would not allow them to build a synagogue, forcing them to worship instead in a private house. In 1657, the Quakers, newly arrived in the colony, drew his attention. He ordered the public torture of Robert Hodgson, a 23-year-old Quaker convert who had become an influential preacher. Stuyvesant then made an ordinance, punishable by fine and imprisonment, against anyone found guilty of harboring Quakers.
That action led to a protest from the citizens of Flushing, which came to be known as the Flushing Remonstrance and is considered by some historians a precursor to later American guarantees of religious freedom. The English were Anglicans, holding to the 39 Articles, a Protestant confession, with bishops. In 1665, Stuyvesant went to the Netherlands to report on his term as governor. On his return to the colony, he spent the remainder of his life on his farm, Stuyvesant Farm, of sixty-two acres outside the city, called the Great Bouwerie, beyond which stretched the woods and swamps of the village of Nieuw Haarlem. A pear tree that he reputedly brought from the Netherlands in 1647 remained at the corner of Thirteenth Street and Third Avenue until 1867, when it was destroyed by a storm, bearing fruit almost to the last. The house was destroyed by fire in 1777. He also built an executive mansion of stone called Whitehall.
Personal life
In 1645, Stuyvesant married Judith Bayard (–1687) of the Bayard family. Her brother, Samuel Bayard, was the husband of Stuyvesant's sister, Anna Stuyvesant. Petrus and Judith had two sons together: Balthasar Lazarus Stuyvesant (1647–1678), who settled in the West Indies and married Maria Lucas Raapzaat; and Nicolaes Willem Stuyvesant (1648–1698), who first married Maria Beekman (1650–1679), daughter of Wilhelmus Beekman, and after her death, Elisabeth Slechtenhorst. Stuyvesant died in August 1672, and his body was entombed in the east wall of St. Mark's Church in-the-Bowery, which sits on the site of Stuyvesant's family chapel.
Descendants
The last acknowledged direct descendant of Peter Stuyvesant to bear his surname was Augustus van Horne Stuyvesant, Jr., who died a bachelor in 1953 at the age of 83 in his mansion at 2 East 79th Street.
Rutherfurd Stuyvesant, the 19th-century New York developer, and his descendants are also descended from Peter Stuyvesant; however, Rutherfurd Stuyvesant's name was changed from Stuyvesant Rutherfurd in 1863 to satisfy the terms of the 1847 will of Peter Gerard Stuyvesant. His descendants include:
Hamilton Fish (1808–1893), the 16th Governor of New York, a United States Senator, and United States Secretary of State
John Winthrop Chanler (1826–1877), a lawyer and a U.S. Representative from New York
Stuyvesant Fish Morris (1843–1928), a prominent physician
Stuyvesant Fish (1851–1923), a president of the Illinois Central Railroad who was prominent in the U.S. Gilded Age
Lewis Stuyvesant Chanler (1869–1942), a Lieutenant Governor of New York
Edith Stuyvesant Gerry (1873–1958), an American philanthropist who was married to George Washington Vanderbilt II and Peter Goelet Gerry
Loudon Wainwright Jr. (1924–1988), an American writer
John Smith (1931–1995), the American actor who starred in two NBC western television series, including Cimarron City
Loudon Wainwright III (b. 1946), the American singer-songwriter, a direct descendant through his great-great-grandfather John Howard Wainwright, who married Margaret Stuyvesant
Peter Robinson (Robin) Fish (b. 1969), Deputy Head at Robert Gordon's College
Chase Coleman III (b. 1975), hedge fund manager, Tiger Global Management
Legacy
According to historian Eleanor Bruchey: Peter Stuyvesant was essentially a difficult man thrust into a difficult position. Quick tempered, self-confident, and authoritarian, he was determined...to rule firmly and to repair the fortunes of the company. The company, however, had run the colony solely for trade profits, with scant attention to encouraging immigration and developing local government. Stuyvesant's predecessors...had been dishonest or, at best, inept, so there was no tradition of respect and support for the governorship on which he could build.
Furthermore, the colonists were vocal and quick to challenge authority....Throughout his administration there were constant complaints to the company of his tyrannical acts and pressure for more local self-government....His religious intolerance also exacerbated relations with the colonists, most of whom did not share his narrow outlook. Stuyvesant and his family were large landowners in the northeastern portion of New Amsterdam, and the Stuyvesant name is currently associated with four places in Manhattan's East Side, near present-day Gramercy Park: the Stuyvesant Town housing complex; the site of the original Stuyvesant High School, still marked Stuyvesant on its front face, on East 15th Street near First Avenue; Stuyvesant Square, a park in the area; and the Stuyvesant Apartments on East 18th Street. The new Stuyvesant High, a premier public high school, is on Chambers Street near the World Trade Center. His farm, called the "Bouwerij" – the seventeenth-century Dutch word for "farm" – was the source for the name of the Manhattan street and surrounding neighborhood named "The Bowery". The contemporary neighborhood of Bedford–Stuyvesant, Brooklyn includes Stuyvesant Heights and retains its name. Also named after him are the hamlets of Stuyvesant and Stuyvesant Falls in Columbia County, New York, where descendants of the early Dutch settlers still live and where the Dutch Reformed Church remains an important part of the community, as well as shopping centers, yacht clubs, and other buildings and facilities throughout the area where the Dutch colony once was. A statue of Stuyvesant by J. Massey Rhind, situated at Bergen Square in Jersey City, was dedicated in 1915 to mark the 250th anniversary of the Dutch settlement there. A World War II Liberty ship was named in his honor.
In popular culture
1809 – A heavily exaggerated Stuyvesant features as the protagonist of the latter three books of Washington Irving's satirical History of New York.
1819 – Stuyvesant is mentioned in Irving's short story "Rip Van Winkle" in the following passage: "...just about the beginning of the government of the good Peter Stuyvesant (may he rest in peace!)..." and a bit later: "...who figured so gallantly in the chivalrous days of Peter Stuyvesant..."
1927–1962 – The passenger ferry Peter Stuyvesant operated on the Hudson River between New York City and New Jersey. In 1963, it was purchased and placed on permanent mooring next to Anthony's Pier 4 in Boston, Massachusetts; it broke free, listed, and then sank during the Blizzard of 1978.
1938 – Stuyvesant is the major antagonist in the Kurt Weill-Maxwell Anderson musical Knickerbocker Holiday, in which he sings "September Song". In the original stage production he was portrayed by Walter Huston; in the much-altered 1944 film version he was portrayed by Charles Coburn in his only singing role.
c. 1945 – The old-time radio show Duffy's Tavern had an episode which used a newly discovered diary of Stuyvesant as a plot device.
1954–present – A cigarette brand produced by Philip Morris International, Imperial Tobacco, and British American Tobacco is named Peter Stuyvesant. These cigarettes are popular in Germany, Australia, Greece, New Zealand, Zambia, Malaysia, and South Africa.
1955 – In the television production of the Rodgers and Hart musical Dearest Enemy, General Howe (Cyril Ritchard) and Captain Copeland (Robert Sterling) sing "Sweet Peter", a less-than-complimentary song about Stuyvesant.
1966 – In the last episode of season 3 of My Favorite Martian, Tim and Martin travel back in time and meet Peter Stuyvesant. They almost prevent the sale of Manhattan to the Dutch.
1978 – In Charles Bukowski's novel Women, the main character, Henry Chinaski, vomits on Peter Stuyvesant's burial vault cover before a poetry reading at St. Mark's Church.
1986 – The German singer-songwriter Rio Reiser used Peter Stuyvesant's founding of New York as an example of a real event in his song "Alles Lüge" ("All Lies"), which contrasts real and false events. The song also plays on the namesake cigarette brand.
2001 – Stuyvesant was a key figure in episode 269 of the Belgian comic strip Suske en Wiske ("Spike and Suzy"), "De Stugge Stuyvesant".
2005 – In the computer game Civilization IV, Peter Stuyvesant is one of the leaders of the Dutch colonies; Adriaen van der Donck is the other possible Dutch leader. In Sid Meier's Colonization, Stuyvesant can be elected to the Continental Congress, allowing the player to build custom houses which automate trade with the mother country.
2013 – Stuyvesant appears in Jean Zimmerman's novel The Orphanmaster, in which he is portrayed as somewhat tyrannical and not well-liked by the settlers of New Amsterdam.
2016 – In the American animated TV series The Venture Bros., Dean Venture attends Stuyvesant University starting in the
In a 1996 interview, Fishman denied that the band was named after him, and said the onomatopoeic inspiration behind the name was the sound of an airplane taking off. The band would collaborate with percussionist Marc Daubert, a friend of Anastasio's, in the fall of 1984. Daubert ceased performing with the band in early 1985. Keyboardist Page McConnell met Phish in early 1985, when he arranged for them to play a spring concert at Goddard College, the small university he attended in Plainfield, Vermont. He began performing with the band as a guest shortly thereafter, and made his live debut during the third set of their May 3, 1985 concert at UVM's Redstone Campus. In the summer of 1985, Phish went on a short hiatus while Anastasio and Fishman vacationed in Europe; during this time, McConnell offered to join the band permanently, and moved to Burlington to learn their repertoire from Gordon. McConnell officially joined Phish as a full-time band member in September 1985. Phish performed with a five-piece lineup for about six months after McConnell joined, a period which ended when Holdsworth quit the group in March 1986 following a religious conversion. Anastasio and Fishman relocated in mid-1986 to Goddard College after a recommendation from McConnell. Phish distributed at least six experimental self-titled cassettes during this era, including The White Tape. While based at Goddard College, Phish began to collaborate with fellow students Richard "Nancy" Wright and Jim Pollock. Pollock and Wright were musical collaborators who made experimental recordings on multi-track cassettes, and had been introduced to Phish through McConnell, who co-hosted a radio program on WGDR with Pollock. Phish adopted a number of Nancy's songs into their own set, including "Halley's Comet", "I Didn't Know", and "Dear Mrs. Reagan", the latter song being written by Nancy and Pollock. 
In his book Heads: A Biography of Psychedelic America, music journalist Jesse Jarnow observed that Wright and his music were highly influential on Phish's early style and experimental sound. Wright amicably ended his association with Phish in 1989, but Pollock has continued to collaborate with the band over the years, designing some of their album covers and concert posters. By 1985, the group had encountered Burlington luthier Paul Languedoc, who would eventually design custom instruments for Anastasio and Gordon. In October 1986, he began working as their sound engineer. Since then, Languedoc has built instruments exclusively for the two, and his designs and traditional wood choices have given Phish a unique instrumental identity. During the late 1980s, Phish began to play regularly at Nectar's bar and restaurant in downtown Burlington, performing dozens of concerts across multiple residencies through March 1989. The band's 1992 album A Picture of Nectar was named in honor of the bar's owner, Nectar Rorris, and its cover features his face superimposed onto an orange. As his senior project for Goddard College, Anastasio penned The Man Who Stepped into Yesterday, a nine-song progressive rock concept album that would become Phish's second studio experiment. Recorded between 1987 and 1988, it was submitted in July 1988, accompanied by a written thesis. The song cycle that developed from the project – known as Gamehendge – grew to include an additional eight songs. The band performed the suite in concert on five occasions: in 1988, 1991, 1993, and twice in 1994, without ever replicating the song list. The Man Who Stepped Into Yesterday has never received an official release, but a bootleg tape has circulated for decades, and songs such as "Wilson" and "The Lizards" remain concert staples for the band. Beginning in the spring of 1988, members of the band began practicing in earnest, sometimes locking themselves in a room and jamming for hours on end.
One such jam took place at Anastasio's apartment, with a second at Paul Languedoc's house in August 1989. They called these jam sessions "Oh Kee Pa Ceremonies", a reference to the film A Man Called Horse. In July 1988, the band performed their first concerts outside of the northeastern United States, when they embarked on a seven-date tour in Colorado. These shows are excerpted on their 2006 live compilation Colorado '88.
Junta, Lawn Boy, and A Picture of Nectar: 1989–1992
On January 26, 1989, Phish played the Paradise Rock Club in Boston. The owners of the club had never heard of Phish and refused to book them, so the band rented the club for the night. The show sold out due to the caravan of fans that had traveled to see the band. The concert was Phish's breakthrough on the northeastern regional music circuit, and the band began to book concerts at other large rock clubs, theaters, and small auditoriums throughout the area, such as the Somerville Theatre, Worcester Memorial Auditorium, and Wetlands Preserve. That spring, the band self-released their debut full-length studio album, Junta, and sold copies on cassette tape at their concerts. The album includes a studio recording of the epic "You Enjoy Myself", which is considered to be the band's signature song. Later in 1989, the band hired Chris Kuroda as their lighting director. Kuroda subsequently became well known for his artistic light shows at the group's concerts. A profile of Phish appeared in the October 1989 issue of the Deadhead magazine Relix, which marked the first time the band had been covered in a major national music periodical. By late 1990, Phish's concerts were becoming more and more intricate, often making a consistent effort to involve the audience in the performance. Using a special "secret language", the audience would react in a certain manner based on a particular musical cue from the band.
For instance, if Anastasio "teased" a motif from The Simpsons theme song, the audience would yell "D'oh!" in imitation of Homer Simpson. In 1992, Phish introduced a collaboration between audience and band called the "Big Ball Jam", in which each band member would throw a large beach ball into the audience and play a note each time his ball was hit. In so doing, the audience was helping to create an original composition. On occasion, performances of "You Enjoy Myself" and "Mike's Song" involved Gordon and Anastasio performing synchronized maneuvers and jumping on mini-trampolines while simultaneously playing their instruments. Fishman would also regularly step out from behind his drum kit during concerts to sing cover songs, which were often punctuated by him playing an Electrolux vacuum cleaner like an instrument. The band released their second album, Lawn Boy, in September 1990 on Absolute A Go Go, a small independent label that had a distribution deal with the larger Rough Trade Records. The album had been recorded the previous year, after the band won studio time at engineer Dan Archer's Archer Studios by coming in first place at an April 1989 battle of the bands competition in Burlington. Phish, along with Bob Dylan, the Grateful Dead, and the Beatles, was one of the first bands to have a Usenet newsgroup, rec.music.phish, which launched in 1991. Aware of the band's growing popularity, Elektra Records signed them that year after they were recommended to the label by A&R representative Sue Drew. In the summer of 1991, the band embarked on a 14-date tour of the eastern United States accompanied by a three-piece horn section dubbed the Giant Country Horns. In August of that year, Phish played an outdoor concert at their friend Amy Skelton's horse farm in Auburn, Maine, that acted as a prototype for their later all-day festival events. In 1992, the band released their third studio album, A Picture of Nectar, their first release for the major label Elektra.
Subsequently, the label also reissued the band's first two albums. Later in 1992, Phish participated in the first annual H.O.R.D.E. festival, which provided them with their first national tour of major amphitheaters. The lineup, among others, included Phish, Blues Traveler, the Spin Doctors, and Widespread Panic. That summer, the band toured Europe with the Violent Femmes and later toured Europe and the U.S. with Santana. Throughout the latter tour, Carlos Santana regularly invited some or all of the members of Phish to jam with his band during their headlining performances. The band ended 1992 with a New Year's Eve performance at the Matthews Arena in Boston, Massachusetts, a performance that was simulcast throughout the Boston area by radio station WBCN. The concert was filled with several new "secret language" cues they had taught their audience in order to deliberately confuse radio listeners.
Rift, Hoist, and A Live One: 1993–1995
Phish began headlining major amphitheaters in the summer of 1993. That year, the group released their fourth album, Rift, a concept album which featured a cover painted by David Welker that referenced almost all of the songs on the record. The album was the band's first to appear on the Billboard 200 album chart, debuting at #51 in February 1993. In March 1994, the band released their fifth studio album, Hoist. The album featured an array of guest performers, including country singer Alison Krauss, banjoist Béla Fleck, former Sly & The Family Stone member Rose Stone, actor and trombonist Jonathan Frakes, and the horn section of R&B group Tower of Power. To promote the album, Gordon directed the band's only official music video, for its first single "Down with Disease". The clip gained some MTV airplay starting in June of that year.
"Down with Disease" became a minor hit on rock radio in the United States, and was the band's first song to appear on a Billboard music chart when it peaked at #33 on the magazine's Hot Mainstream Rock Tracks chart that summer. To further promote Hoist, the band released an experimental short-subject documentary called Tracking, also directed by Gordon, which depicted the recording sessions for the album. Foreshadowing their future tradition of festivals, Phish coupled camping with their 1994 summer tour finale at Sugarbush North in Warren, Vermont, that show eventually being released as Live Phish Volume 2. On Halloween of that year, the group promised to don a fan-selected "musical costume" by playing an entire album from another band. After an extensive mail-based poll, Phish performed the Beatles' White Album as the second of their three sets at the Glens Falls Civic Center in upstate New York. The "musical costume" concept subsequently became a recurring part of Phish's fall tours, with the band playing a different album whenever they had a concert scheduled for Halloween night. In October 1994, Crimes of the Mind, the debut album by Anastasio's friend and collaborator Steve "The Dude of Life" Pollak, was released by Elektra Records; The album, which had been recorded in 1991, was billed to "The Dude of Life and Phish" and features all four members of Phish acting as Pollak's backing band. On December 30, 1994, the band made their first appearance on national network television when they performed "Chalk Dust Torture" on Late Night with David Letterman. The band would go on to appear on the program seven more times before David Letterman's retirement as host in 2015. For their 1994 New Years Run, Phish played the Civic Centers in Philadelphia and Providence as well as sold-out shows at Madison Square Garden and Boston Garden, which marked their debut performances at both venues. 
For the December 31 show at the Boston Garden, the band rode around the arena in a float shaped like a hot dog. The stunt was reprised at their 1999 New Year's Eve concert before the hot dog was donated to the Rock and Roll Hall of Fame. At the end of 1994, Phish appeared on Pollstar's list of the highest-grossing concert tours in the United States for the first time, as the 32nd highest-grossing act, with $10.3 million in ticket sales. Following the death of Grateful Dead guitarist Jerry Garcia in the summer of 1995 and the appearance of "Down with Disease" on Beavis and Butt-Head, the band experienced a surge in the growth of their fan base and an increased awareness in popular culture. In their tradition of playing a well-known album by another band for Halloween, Phish contracted a full horn section for their performance of The Who's Quadrophenia in 1995. Phish's first live album, A Live One, was released during the summer of 1995 and featured selections from various concerts of their 1994 winter tour. The album charted at number 18 on the Billboard 200 album chart, and was reported to have sold around 50,000 copies in its first week on sale. A Live One became Phish's first RIAA-certified gold album in November 1995. In 1997, it became the band's first platinum album, certified for sales of 1 million copies in the United States, and it remains their best-selling album to date.
Billy Breathes, The Story of the Ghost, and The Siket Disc: 1996–1999
Following an appearance at the New Orleans Jazz & Heritage Festival in April 1996, the band spent the summer of that year opening for Santana on their European tour. In August 1996, the band held their first festival, The Clifford Ball, at the decommissioned Plattsburgh Air Force Base on the New York side of Lake Champlain. The festival attracted 70,000 attendees, making it both Phish's biggest concert crowd to that point and the largest single concert by attendance in the United States in 1996.
Phish recorded their sixth album, Billy Breathes, in the winter and spring of 1996, and the album was issued in October of that year. Alongside traditional rock-based crescendos, the album features more acoustic guitar than their previous records, and was regarded by the band and some fans as their crowning studio achievement. The album's first single, "Free", peaked at No. 24 on the Billboard Hot Modern Rock Tracks chart and No. 11 on the Mainstream Rock Tracks chart, making it their most successful song on both charts. By 1997, Phish's concert improvisations were developing into a new funk-inspired long-form jamming style. Vermont-based ice cream maker Ben & Jerry's launched "Phish Food" that year. The band officially licensed their name for use with the product, the only time they have ever allowed a third-party company to do so, and were directly involved with the creation of the flavor. Proceeds from the flavor are donated to the band's non-profit charity, The WaterWheel Foundation, which raises funds for the preservation of Vermont's Lake Champlain. On August 8, 1997, Phish webcast one of their concerts live over the internet for the first time. On August 16 and 17, 1997, Phish held their second festival, The Great Went, over two days at the Loring Air Force Base in Limestone, Maine, near the Canada–United States border. In October 1997, the band released their second live album, Slip Stitch and Pass, which featured selections from their March 1997 concert at the Markthalle Hamburg in Hamburg, Germany. Following the Great Went, the band embarked on a fall tour that was dubbed by fans the "Phish Destroys America" tour, after a 1970s kung fu-inspired poster for the opening date in Las Vegas. The 21-date tour is considered one of the group's most popular and acclaimed, and several of its concerts were later officially released on live album sets such as Live Phish Volume 11 in 2002.
Phish ended 1997 as one of the ten highest-grossing concert acts in the United States that year. In April 1998, the band embarked on the Island Tour, a four-night tour with two shows at the Nassau Coliseum in Uniondale, New York, on Long Island, and another two at the Providence Civic Center in Providence, Rhode Island. The four concerts are highly regarded by fans due to the band's exploration of a jazz-funk musical style they had been playing for the previous year, which Anastasio dubbed "cowfunk". The band performed the tour in the middle of studio sessions for their seventh album, and were inspired by the quality of their performances to further incorporate the cowfunk style into subsequent sessions. The resulting album, The Story of the Ghost, was released in October 1998. The album's first single, "Birds of a Feather", which had been premiered on the Island Tour, became a #14 hit on Billboard's Adult Alternative Songs chart. To promote The Story of the Ghost, Phish performed several songs from the album on the public television music show Sessions at West 54th in October 1998, and were interviewed for the program by its host, David Byrne of Talking Heads. In the summer of 1998, the band held Lemonwheel, their second festival at Loring Air Force Base in Maine. The two-day event attracted 60,000 attendees. The band played another summer festival in 1999, called Camp Oswego, held at the Oswego County Airport in Volney, New York. Unlike other Phish festivals, Camp Oswego featured a prominent second stage of additional performers aside from Phish, including Del McCoury, The Slip, and Ozomatli. In July 1999, the band released an album of improvisational instrumentals titled The Siket Disc. The band followed that release with Hampton Comes Alive, a six-disc box set released in November 1999, which contained the entirety of their performances on November 20 and 21, 1998, at the Hampton Coliseum in Hampton, Virginia.
The set marked the first time that complete recordings of Phish concerts were officially released by Elektra Records. To celebrate the new millennium, Phish hosted a two-day outdoor festival at the Big Cypress Seminole Indian Reservation in Florida in December 1999. The festival's climactic New Year's Eve concert, referred to by fans as simply "The Show," started at 11:35 p.m. on December 31, 1999, and continued through to sunrise on January 1, 2000, approximately eight hours later. The band's performance of the song "Heavy Things" at the festival was broadcast live as part of ABC's 2000 Today millennium coverage, giving the band their biggest television audience up to that point. 75,000 people attended the sold-out two-day festival. In 2017, Rolling Stone named the Big Cypress festival one of the "50 Greatest Concerts of the Last 50 Years."

Farmhouse and hiatus: 2000–2002

Following the Big Cypress festival, the band issued their ninth studio album Farmhouse in May 2000. "Heavy Things", which was released as the album's first single, became the band's only song to appear on a mainstream pop radio format, reaching #29 on Billboard's Adult Top 40 chart that July. The song also became the band's biggest hit to date on the Adult Alternative Songs chart, reaching #2 there. In June 2000, the band embarked on a seven-date headlining tour of Japan. In July, they taped an appearance on the PBS music show Austin City Limits, which was aired in October. In the summer of 2000, the band announced that they would take their first "extended time-out" following their upcoming fall tour. Anastasio officially announced the impending hiatus to the band's fans during their September 30 concert at the Thomas & Mack Center in Paradise, Nevada. 
During the tour's last concert on October 7, at the Shoreline Amphitheatre in Mountain View, California, the band made no reference to the hiatus, and left the stage without saying a word following their encore performance of "You Enjoy Myself", as The Beatles' "Let It Be" played over the venue's sound system. Bittersweet Motel, a documentary film about the band directed by Todd Phillips, was released in August 2000, shortly before the hiatus began. The documentary captures the band's 1997 and 1998 tours, the Great Went festival and the recording of The Story of the Ghost. Phish were nominated in two categories at the 43rd Annual Grammy Awards in 2001: Best Boxed Recording Package for Hampton Comes Alive and Best Instrumental Rock Performance for "First Tube" from Farmhouse. During Phish's hiatus, Elektra Records continued to issue archival releases of the band's concerts on compact disc. Between September 2001 and May 2003, the label released 20 entries in the Live Phish Series. These multi-disc sets featured complete soundboard recordings of concerts that were particularly popular with the band and their fanbase, similar to the Grateful Dead's Dick's Picks archival series. In November 2002, the label released the band's first concert DVD, Phish: Live in Vegas, which featured the entirety of the September 2000 concert at which Anastasio announced the hiatus. In April 2002, Phish guest starred on the episode "Weekend at Burnsie's" of the animated series The Simpsons. The episode marked the band's first appearance together, albeit as animated characters, since the hiatus began. Phish provided their own voices for the episode and performed a snippet of "Run Like an Antelope".

Return, Round Room, Undermind, and disbandment: 2002–2004

In August 2002, Phish's manager John Paluska announced the band planned to end their hiatus that December with a New Year's Eve concert at Madison Square Garden. 
They also recorded Round Room in only four days and released it on December 10. The band had initially planned to record the new album live at the Madison Square Garden concert, but instead felt that demos they had recorded of the material were strong enough to merit release as a studio album. Four days after the release of Round Room, the band made their only appearance as a musical guest on Saturday Night Live, where they debuted the song "46 Days" and appeared in two comedy sketches. During their return concert on December 31, McConnell's brother was introduced as actor Tom Hanks. The impostor sang a line of the song "Wilson", prompting some media outlets to report that the actor had appeared at the concert. Phish's 2003 winter tour commenced in February in Inglewood, California, and included the song "The Cover of 'Rolling Stone'", foreshadowing their actual appearance on the cover of Rolling Stone's March 3, 2003 issue. At the end of the 2003 summer tour, Phish returned to Limestone, Maine for It, their first festival since Big Cypress. The event drew crowds of over 60,000 fans, and was the band's final festival to be held at Loring Air Force Base. Highlights from the festival were released on a DVD set, also called It, in October 2004. In November and December 2003, the band celebrated its 20th anniversary with a four-show mini-tour in New York, Pennsylvania, and Massachusetts. The December 1 show at Pepsi Arena featured a guest appearance by former member Jeff Holdsworth, who sat in with the band on five songs, including his compositions "Possum" and "Camel Walk". On May 25, 2004, Anastasio announced on the Phish website that the band would disband at the end of their 2004 summer tour. He wrote that he had met with the other members earlier that month to discuss the "strong feelings I've been having that Phish has run its course, and that we should end it now while it's still on a high note." 
By the end of the meeting, he said, "We realized that after almost twenty-one years together, we were faced with the opportunity to graciously step away in unison, as a group, united in our friendship and our feelings of gratitude." The band's eleventh – and at the time final – studio album Undermind was released in June 2004. The band's summer 2004 tour began with two concerts at Keyspan Park in Brooklyn, New York. The first concert was recorded for the live album and concert documentary Phish: Live in Brooklyn, while the second featured a guest appearance by rapper Jay-Z, who performed two songs with the band. Later that summer, the band appeared on the Late Show with David Letterman and performed a seven-song set from atop the marquee of the Ed Sullivan Theater for fans who had gathered on the street. The 2004 tour ended with the band's seventh summer festival on August 14 and 15, which were billed as their final performances. The Coventry festival was named for the town in Vermont that hosted the event, which was held at the nearby Newport State Airport. After Coventry, the members of the band admitted they were disappointed with their performance at the festival. In the official book Phish: The Biography, Anastasio expressed that "Coventry itself was a nightmare. It was emotional, but it was not like we were at our finest. I certainly wasn't".

Post-disbandment and interim: 2004–2008

Following the break-up, the band's members remained in amicable communication with one another. The members also occasionally appeared on each other's solo albums and collaborated on side-projects. In December 2006, Anastasio was arrested in Whitehall, New York for drug possession and driving while intoxicated, and was sentenced to 14 months in a drug court program. In 2007, while Anastasio was undergoing rehabilitation, the other members of Phish surprised him on his birthday with an instrumental recording they had made for him to play along with on guitar. 
During his rehabilitation, Anastasio said he "spent 24 hours a day thinking about nothing but Phish" and began discussing a reunion with the other members of the band. In 2005, Phish formed their own record label, JEMP Records, to release archival CD and DVD sets. The label's first release was Phish: New Year's Eve 1995 – Live at Madison Square Garden, which was released in conjunction with Rhino Records in December 2005. The album was named the 42nd greatest live album of all time by Rolling Stone in April 2015. The label subsequently released several other archival live box sets, including Colorado '88 (2006), Vegas 96 (2007), At the Roxy (2008) and The Clifford Ball (2009). Phish received the Jammys Lifetime Achievement Award on May 7, 2008, at The Theater at Madison Square Garden. All four members attended the ceremony and gave a speech, and both McConnell and Anastasio performed, although not together. In response to a June 2008 rumor that Phish had reunited to record a new album, McConnell wrote a letter on the band's website updating fans on the current relations between the band's members. McConnell wrote that while the members remained friends, they were currently busy with other projects and the reunion rumors were premature. He added, "Later this year we hope to spend some time together and take a look at what possible futures we might enjoy." That September, the band played three songs at the wedding of their former tour manager Brad Sands. Later in 2008, the band reconvened at The Barn, Anastasio's farmhouse studio in Burlington, Vermont, for jamming sessions and rehearsals.

Reunion and Joy: 2008–2011

On October 1, 2008, the band announced on their website that they had officially reunited, and would play their first shows in five years in March 2009 at the Hampton Coliseum in Hampton, Virginia. The three reunion concerts were held on March 6, 7, and 8, 2009, with "Fluffhead" being the first song the band played onstage at the first show. 
Approximately 14,000 people attended the concerts over the course of three days, and the band made the shows available for free download on their LivePhish website for a limited time, in order to accommodate fans who were unable to attend. When the band decided to reunite, the members agreed to limit their touring schedule, and they have typically performed about 50 concerts a year since. Following the reunion weekend, Phish embarked on a summer tour which began in May with a concert at Fenway Park in Boston. The Fenway show was followed by a 25-date tour which included performances at the 2009 edition of the Bonnaroo Music Festival in Tennessee and a four-date stand at Red Rocks Amphitheatre in Colorado. At Bonnaroo, Phish was joined by Bruce Springsteen on guitar for three songs. Phish's twelfth studio album, Joy, produced by Steve Lillywhite, was released September 8, 2009. In October, the band held Festival 8, their first multi-day festival event since Coventry in 2004, at the Empire Polo Club in Indio, California. In March 2010, Anastasio inducted Genesis, one of his favorite bands, into the Rock and Roll Hall of Fame at the museum's annual ceremony in New York City. In addition to Anastasio's speech, Phish performed the Genesis songs "Watcher of the Skies" and "No Reply at All" at the event. Phish toured in the summer and fall of 2010, and their August 10 concert at the Utica Memorial Auditorium was released on the DVD/CD box-set Live in Utica the following May.

Fuego and Big Boat: 2011–2016

Phish's ninth festival, Super Ball IX, took place at the Watkins Glen International race track in Watkins Glen, New York on July 1–3, 2011. It was the first concert to take place at Watkins Glen International since Summer Jam at Watkins Glen in 1973. In September, the band played a benefit concert in Essex Junction, Vermont which raised $1.2 million for Vermont flood victim relief in the aftermath of Hurricane Irene. 
In June 2012, Phish headlined Bonnaroo 2012 with the Red Hot Chili Peppers and Radiohead. During their 2013 Halloween concert at Boardwalk Hall in Atlantic City, New Jersey, the band played twelve new songs from their upcoming album, which at the time had the working title Wingsuit and would later be renamed Fuego. Phish ended 2013 with a New Year's Eve concert that also celebrated their 30th anniversary, as they had played their first concert in December 1983. The concert featured a nine-minute montage film celebrating the band's career, and the band performed an entire set in the middle of the arena from atop an equipment truck. Phish released Fuego, their first studio album in five years, on June 24, 2014. The album peaked at number 7 on the Billboard 200 album chart, and became their highest charting album since Billy Breathes reached the same position in 1996. During their Halloween 2014 concert at MGM Grand Las Vegas, the band performed a set consisting of ten original songs inspired by the 1964 Walt Disney Records sound effects album Chilling, Thrilling Sounds of the Haunted House. In 2015, Phish performed a summer tour and held their tenth multi-day festival event, Magnaball, at Watkins Glen International in New York in August. Phish's fourteenth studio album, Big Boat, was released on October 7, 2016.

The Baker's Dozen and Kasvot Växt: 2015–2019

Phish played a 13-night concert residency at New York City's Madison Square Garden from July 21 to August 6, 2017, dubbed "The Baker's Dozen". Each concert featured a loose theme with performances of unique cover songs and a special doughnut served each night to the audience by Federal Donuts of Philadelphia. No songs were repeated during the Baker's Dozen run, with a total of 237 individual songs performed across the 13 concerts. The complete Baker's Dozen residency was released as a limited edition 36-disc box set in November 2018. 
A scaled-down triple CD set featuring 13 song performances, titled The Baker's Dozen: Live at Madison Square Garden, was issued simultaneously with the box set. Phish planned to hold an eleventh summer festival, Curveball, in Watkins Glen, New York in 2018, but the festival was canceled by New York Department of Health officials one day before it was scheduled to begin, due to water quality issues from flooding in the area. At their Halloween concert that October at the MGM Grand in Las Vegas, the band performed a set of all-new original material that they promoted as a "cover" of í rokk by Kasvot Växt, a fictional 1980s Scandinavian progressive rock band they had created. The Kasvot Växt set was released as a standalone live album on Spotify on November 10, 2018. All four concerts in the 2018 Halloween run were livestreamed in 4K resolution, which marked the first time that a major musical act had ever offered a 4K livestreaming option. Between Me and My Mind, a documentary film directed by Steven Cantor about Anastasio's life, his Ghosts of the Forest side-project and Phish's 2017 New Year's Eve concert, was screened at the Tribeca Film Festival in April 2019. In June 2019, SiriusXM launched Phish Radio, a satellite radio station dedicated to the band's music.

Sigma Oasis and recent activity: 2019–present

Due to the COVID-19 pandemic, Phish postponed their 2020 summer tour until 2021. Before 2020, Phish had embarked on a summer tour every year since their 2009 reunion. During the COVID-19 pandemic, Phish hosted free weekly "Dinner and a Movie" webcasts of archival performances on Tuesday evenings until Labor Day weekend, after which they were hosted monthly. Phish released their fifteenth studio album Sigma Oasis on April 2, 2020. The album was premiered through a listening party on their LivePhish app, SiriusXM radio station and Facebook page. 
The album consists entirely of material the band had been performing in concert over the course of the previous decade, but that had yet to appear on a studio release. In January 2021, Anastasio told Pollstar that the band was unable to perform or rehearse together due to COVID-19 restrictions and quarantine rules then in place in the New England states, but said "As soon as it's feasible, we'll be back." Phish performed their first concert since the start of the pandemic on July 28, 2021, having not performed since February 23, 2020. Beginning with their concerts at The Gorge Amphitheatre in late August, the band began requiring attendees to show proof of vaccination or a negative test for COVID-19. During their 2021 Halloween concert, Phish debuted a set of new original science fiction-themed material under the guise of the fictional band Sci-Fi Soldier. According to Pollstar, Phish were the ninth highest-grossing concert act in the world in 2021, with a $44.4 million gross from 35 concerts. Phish also had the fifth highest concert ticket sales in the world in 2021, with 572,626 tickets sold. Due to an increase in cases of the Omicron variant of COVID-19 in New York City, Phish postponed their 2021 New Year's Eve concerts at Madison Square Garden from December 2021 to April 2022. On December 31, 2021, Phish performed a three-set New Year's Eve concert without an audience from "The Ninth Cube."

Reception and legacy

Phish's popularity grew in the 1990s due to fans sharing concert recordings that had been taped by audience members and distributed online for free. Phish were among the first musical acts to utilize the internet to grow their fanbase, with fans using file-sharing websites such as etree and BitTorrent to share concerts. In 1998, Rolling Stone described Phish as "the most important band of the '90s." 
Phish have been named as an influence by other acts in the jam band scene, including Umphrey's McGee and the Disco Biscuits. Other musicians have also counted Phish as an influence, including Adam Levine and James Valentine of Maroon 5, Ed O'Brien of Radiohead, Brandon Boyd of Incubus, and reggae musician Matisyahu. Phish's festival events in the 1990s inspired the foundation of the Bonnaroo Music Festival in Tennessee, which was first held in 2002. Co-founder Rick Farman, a Phish fan, consulted Phish managers Richard Glasgow and John Paluska about festival infrastructure during the early stages of planning. The festivals also inspired other jam band-oriented concert events, such as the Disco Biscuits' Camp Bisco, Electric Forest Festival, and the Big Ears Festival. While Phish have had eight of their singles appear on Billboard's Adult Alternative Songs chart since its inception in 1996, even the band's most successful songs would not be recognizable to the average music listener. Phish are well known to their loyal fans, called Phishheads, but the group's music and fan culture are otherwise polarizing to general audiences. The tribal nature of Phish supporters has encouraged comparisons of Phishheads to the Juggalos, followers of the hip-hop duo Insane Clown Posse. Phish contribute heavily to music-based tourism with their "traveling communities" of fans, and they have been simultaneously hailed and criticized for their near-constant tour dates, which bring with them the capital value of tourism and necessitate the increased security and community planning that come with any music festival. Jordan Hoffman of Thrillist explains "the solace many find in attending religious services is somewhat mirrored for me in seeing Phish," and even though Phish fans are generally considered welcoming and friendly, the reception of the group from the outside is often one of unease and confusion. 
The BBC listed Phish as one of "Eight smash US acts that Britain never understood" along with fellow jam bands Dave Matthews Band and Blues Traveler. In describing the band to a British audience, BBC journalist Stephen Dowling wrote "Attending a Phish gig has become a rite of summer passage for American teens in the same way that attending Glastonbury has for British teenagers." Phish has performed 64 concerts at Madison Square Garden since their debut performance there in 1994. The band have performed the third-most concerts at the venue of any musical act, behind only Billy Joel and Elton John. In 2019, Billboard ranked Phish as the 33rd highest-grossing concert touring act of the 2010s.

Musical style and influences

According to The New Rolling Stone Album Guide, the music of Phish is "oriented around group improvisation and super-extended grooves". Their songs draw on a range of rock-oriented influences, including funk, jazz fusion, progressive rock, bluegrass, and psychedelic rock. Some Phish songs use different vocal approaches, such as a cappella (unaccompanied) sections of barbershop quartet-style vocal harmonies. The band began to include barbershop segments in their concerts in 1993, when the four members began taking lessons from McConnell's landlord, who was a judge at barbershop competitions. In the 1997 official biography, The Phish Book, Anastasio coined the term "cow-funk" to describe the band's late 1990s funk and jazz-funk-influenced playing style, observing that "What we're doing now is really more about groove than funk. Good funk, real funk, is not played by four white guys from Vermont." Phish were often compared to the Grateful Dead during the 1990s, a comparison that the band members often resisted or distanced themselves from. The two bands were compared due to their emphasis on live performances, improvisational jamming style, musical similarities, and traveling fanbase. 
In November 1995, Anastasio told the Baltimore Sun "When we first came into the awareness of the media, it would always be the Dead or Zappa they'd compare us to. All of these bands I love, you know? But I got very sensitive about it." Early in their career, Phish would occasionally cover Grateful Dead songs in concert, but the band stopped doing so by the late 1980s. In Phish: The Biography, Parke Puterbaugh observed "The bottom line is while it's impossible to imagine Phish without the Grateful Dead as forebears, many other musicians figured as influences upon them. Some of them - such as Carlos Santana and Frank Zappa - were arguably |
product prototypes in 1986. The first processors were introduced in products during 1986. It has thirty-two 32-bit integer registers and sixteen 64-bit floating-point registers. The number of floating-point registers was doubled in the 1.1 version to 32 once it became apparent that 16 were inadequate and restricted performance. The architects included Allen Baum, Hans Jeans, Michael J. Mahon, Ruby Bei-Loh Lee, Russel Kao, Steve Muchnick, Terrence C. Miller, David Fotland, and William S. Worley. The first implementation was the TS1, a central processing unit built from discrete transistor–transistor logic (74F TTL) devices. Later implementations were multi-chip VLSI designs fabricated in NMOS processes (NS1 and NS2) and CMOS (CS1 and PCX). They were first used in a new series of HP 3000 machines in the late 1980s – the 930 and 950, commonly known at the time as Spectrum systems, the name given to them in the development labs. These machines ran MPE-XL. The HP 9000 machines were soon upgraded with the PA-RISC processor as well, running the HP-UX version of UNIX. Other operating systems ported to the PA-RISC architecture include Linux, OpenBSD, NetBSD and NeXTSTEP. An interesting aspect of the PA-RISC line is that most of its generations have no Level 2 cache. Instead large Level 1 caches are used, formerly as separate chips connected by a bus, and now integrated on-chip. Only the PA-7100LC and PA-7300LC have L2 caches. Another innovation of the PA-RISC is the addition of vector instructions (SIMD) in the form of MAX, which were first introduced on the PA-7100LC. Precision RISC Organization, an industry group led by HP, was founded in 1992, to promote the PA-RISC architecture. Members included Convex, Hitachi, Hughes Aircraft, Mitsubishi, NEC, OKI, Prime, Stratus, Yokogawa, Red Brick Software, and Allegro Consultants, Inc. The ISA was extended in 1996 to 64 bits, with this revision named PA-RISC 2.0. 
PA-RISC 2.0 also added fused multiply–add instructions, which help certain floating-point intensive algorithms, and the MAX-2 SIMD extension, which provides instructions for accelerating multimedia applications. The first PA-RISC 2.0 implementation was the PA-8000, which was introduced in January 1996. CPU specifications See | most of its generations have no Level 2 cache. Instead large Level 1 caches are used, formerly as separate chips connected by a bus, and now integrated on-chip. Only the PA-7100LC and PA-7300LC have L2 caches. Another innovation of the PA-RISC is the addition of vector instructions (SIMD) in the form of MAX, which were first introduced on the PA-7100LC. Precision RISC Organization, an industry group led by HP, was founded in 1992, to promote the PA-RISC architecture. Members included Convex, Hitachi, Hughes Aircraft, Mitsubishi, NEC, OKI, Prime, Stratus, Yokogawa, Red Brick Software, and Allegro Consultants, Inc. The ISA was extended in 1996 to 64 bits, with this revision named PA-RISC 2.0. PA-RISC 2.0 also added fused multiply–add instructions, which help certain floating-point intensive algorithms, and the MAX-2 SIMD extension, which provides instructions for accelerating multimedia applications. The first PA-RISC 2.0 implementation was the PA-8000, which was introduced in January 1996. CPU specifications See also Hombre chipset – A PA-7150-based chipset with a complete multimedia system for Commodore Amiga References External links LostCircuits Hewlett Packard PA8800 RISC Processor overview HP's documentation – page down for PA-RISC, architecture PDFs available OpenPA.net Comprehensive PA-RISC chip and computer information chipdb.org Images of different PA-RISC processors HP microprocessors Instruction set architectures Computer-related introductions |
of Death answering only to "He who sits on the throne"; a disfigured suicide attempt survivor turned rock star named Arseface; a serial killer called the 'Reaver-Cleaver'; The Grail, a secret organization controlling the governments of the world and protecting the bloodline of Jesus; Herr Starr, primary enforcer for The Grail, a megalomaniac with a penchant for prostitutes, who wishes to use Custer for his own ends; several fallen angels; and Jesse's own redneck family — particularly his nasty Cajun grandmother, her mighty bodyguard Jody, and the zoophilic T.C.

Characters

Collected editions

Additionally, the book Preacher: Dead or Alive collects Fabry's covers to the series.

Adaptation attempts

Garth Ennis, feeling Preacher would translate perfectly as a film, sold the film rights to Electric Entertainment. Rachel Talalay was hired to direct, with Ennis writing the script. Rupert Harvey and Tom Astor were set as producers. By May 1998, Ennis completed three drafts of the script, based largely on the Gone to Texas story arc. The filmmakers found it difficult to finance Preacher because investors found the idea religiously controversial. Ennis approached Kevin Smith and Scott Mosier to help finance the film under their View Askew Productions banner. Ennis, Smith and Mosier pitched Preacher to Bob Weinstein at Miramax Films. Weinstein was confused by the characterization of Jesse Custer. Miramax also did not want to share the box office gross with Electric Entertainment, ultimately dropping the pitch. By May 2000, Smith and Mosier were still attached to produce with Talalay directing, but Smith did not know the status of Preacher, feeling it would languish in development hell. By then, Storm Entertainment, a UK-based production company known for their work on independent films, joined the production with Electric Entertainment. 
In September 2001, the two companies announced Preacher had been greenlighted to commence pre-production, with filming to begin in November and Talalay still directing Ennis' script. The production and start dates were pushed back because of financial issues with the $25 million projected budget. James Marsden was cast in the lead role as Jesse Custer sometime in 2002. He explained, "It was something I never knew anything about, but once I got my hands on the comic books, I was blown away by it." In a March 2004 interview, Marsden said the filmmakers were hoping for filming to start the following August. With the full-length film adaptation eventually abandoned over budgetary concerns, HBO announced in November 2006 that they had commissioned Mark Steven Johnson and Howard Deutch to produce a television pilot. Johnson was to write with Deutch directing. Impressed with Johnson's pilot script, HBO had him write the series bible for the first season. Johnson originally planned "to turn each comic book issue into a single episode" on a shot-for-shot basis. "I gave [HBO] the comics, and I said, 'Every issue is an hour'. Garth Ennis said 'You don't have to be so beholden to the comic'. And I'm like, 'No, no, no. It's got to be like the comic'." Johnson also wanted to make sure that one-shots were included as well. Johnson later changed his position, citing new storylines conceived by Ennis. "Well, there would be nothing new to add if we did that, so Garth [Ennis] and I have been creating new stories for the series," he said. "I love the book so much and I was telling Garth that he has to make the stories we are coming up with as comics because I want to see them." By August 2008, new studio executives at HBO decided to abandon the idea, finding it too stylistically dark and religiously controversial. 
Columbia Pictures then purchased the film rights in October 2008 with | the supernatural creature named Genesis, the infant of the unauthorized, unnatural coupling of an angel and a demon. The incident flattens Custer's church and kills his entire congregation. Genesis has no sense of individual will, but since it is composed of both pure goodness and pure evil, its power might rival that of God Himself, making Jesse Custer, bonded to Genesis, potentially the most powerful being in the universe. Driven by a strong sense of right and wrong, Custer journeys across the United States attempting to literally find God, who abandoned Heaven the moment Genesis was born. He also begins to discover the truth about his new powers. They allow him, when he wills it, to command the obedience of those who hear and comprehend his words. He is joined by his old girlfriend Tulip O'Hare, as well as a hard-drinking Irish vampire named Cassidy. During the course of their journeys, the three encounter enemies and obstacles both sacred and profane, including The Saint of Killers, an invincible, quick-drawing, perfect-aiming, come-lately Angel of Death answering only to "He who sits on the throne"; a disfigured suicide attempt survivor turned rock-star named Arseface; a serial-killer called the 'Reaver-Cleaver'; The Grail, a secret organization controlling the governments of the world and protecting the bloodline of Jesus; Herr Starr, primary enforcer for The Grail, a megalomaniac with a penchant for prostitutes, who wishes to use Custer for his own ends; several fallen angels; and Jesse's own redneck family — particularly his nasty Cajun grandmother, her mighty bodyguard Jody, and the Zoophilic T.C. Characters Collected editions Additionally, the book Preacher: Dead or Alive () collects Fabry's covers to the series. Adaptation attempts Garth Ennis, feeling Preacher would translate perfectly as a film, sold the film rights to Electric Entertainment. 
Rachel Talalay was hired to direct, with Ennis writing the script. Rupert Harvey and Tom Astor were set as producers. By May 1998, Ennis completed three drafts of the script, based largely on the Gone to Texas story arc. The filmmakers found it difficult financing Preacher because investors found the idea religiously controversial. Ennis approached Kevin Smith and Scott Mosier to help finance the film under their View Askew Productions banner. Ennis, Smith and Mosier pitched Preacher to Bob Weinstein at Miramax Films. Weinstein was confused by the characterization of Jesse Custer. Miramax also did not want to share the box office gross with Electric Entertainment, ultimately dropping the pitch. By May 2000, Smith and Mosier were still attached to produce with Talalay directing, but Smith did not know the status of Preacher, feeling it would languish in development hell. By then, Storm Entertainment, a UK-based production company known for their work on independent films, joined the production with Electric Entertainment. In September 2001, the two companies announced Preacher had been greenlighted to commence pre-production, with filming to begin in November and Talalay still directing Ennis' script. The production and start dates were pushed back because of financial issues of the $25 million projected budget. James Marsden was cast in the lead role as Jesse Custer sometime in 2002. He explained, "It was something I never knew anything about, but once I got my hands on the comic books, I was blown away by it." In a March 2004 interview, Marsden said the filmmakers were |
late medieval Catholicism. In most denominations, modern preaching is kept below about 40 minutes, but historic preachers of all denominations could at times speak for well over an hour, sometimes for two or three hours, and use techniques of rhetoric and theatre that are today somewhat out of fashion in mainline churches. In many churches in the United States, the title "Preacher" is synonymous with "pastor" or "minister", and the church's minister is often referred to simply as "our/the preacher" or by name such as "Preacher Smith". However, among some Chinese churches, preacher (Chinese: 傳道) is different from pastor (Chinese: 牧師). A preacher in the Protestant church is one of the younger clergy, but they are not officially recognised as pastors until they can prove their capability of leading the church. Other uses Preacher is also the author of the Book of Ecclesiastes according to the King James Version. Preacher is one translation of the Hebrew word קהלת (Qoheleth). There is much debate about the identity of this
States), from 8:00 p.m. to 11:00 p.m. (Eastern and Pacific Time) or 7:00 p.m. to 10:00 p.m. (Central and Mountain Time). In India and some Middle Eastern countries, prime time consists of the programmes that are aired on TV between 8:00 p.m. and 11:00 p.m. local time. Asia Bangladesh In Bangladesh, the 19:00-to-22:00 time slot is known as prime time. Several national broadcasters such as Maasranga Television, Gazi TV, Channel 9 and Channel i broadcast their prime-time shows from 20:00 to 23:00, after their prime-time news at 19:00. During the Islamic holiday season, most of the TV stations broadcast specially produced shows and world television premieres from 15:00 to midnight. In Ramadan, the broadcasters also air special religious and cooking shows from 14:00 to 20:00, affecting the prime-time hours. Late-night talk shows are also aired from 01:00 to 04:00, with Ramadan being an exception. Religious shows are also broadcast simultaneously from 01:00, along with talk shows and news analysis. China In television in China, the 19:00-to-22:00 time slot is known as Golden Time, also known as "Party time" (Traditional Chinese: 黄金時間; Simplified Chinese: 黄金时间; Pinyin: Huángjīn shíjiān). The term also influenced the nickname of a strip of holidays in China known as Golden Week. Hong Kong and Macau Prime time usually takes place from 19:00 until 22:00. After that, programs classified as "PG" (Parental Guidance) are allowed to be broadcast. Frontline dramas appear during this time slot in Cantonese, as well as movies in English. India In India, prime time occurs between 19:00 and 23:30. Usually, programmes during prime time are domestic dramas, talent shows and reality shows.
Indonesia In Indonesia, prime time usually takes place from 16:00 to 0:00, and sinetrons (soap operas) dominate the majority of the programming grids. Before 2018, daily evening newscasts used to kick off prime time at 17:00–18:00, although some channels, notably SCTV, broadcast their daily evening newscasts earlier, usually at 16:00 or 16:30. The practice of airing news in prime time ended in 2018 in favour of adding more sinetrons to the schedule, except for TVRI and Trans 7, which have kept their newscasts Klik Indonesia Petang (at 18:00) and Redaksi Malam (at 23:45) in prime time respectively. After prime time, programs classified as adult, as well as cigarette commercials, are allowed to be broadcast. As in other Muslim-majority countries, there is also a 'midnight prime time' during suhur time in the month of Ramadan. It takes place from 02:00 (or 02:30 on some channels) and ends at the Fajr prayer call, which varies between 04:30 and 05:00. The time slot is usually filled with entertainment and religious programming. Iraq In Iraq, prime time runs from 20:00 to 23:00. The main news programs are broadcast at 20:00 and the highest-rated television program airs at 21:00. Japan In Japanese television, prime time runs from 19:00 to 23:00. In particular, the 19:00-to-22:00 time slot is also known as . The term also influenced the nickname of a strip of holidays in Japan known as Golden Week. Malaysia In Malaysia, prime time starts with the main news from 20:00 to 20:30 (now 20:00 to 21:00) and ends either at 23:00 or 1:00, or possibly later. Usually, programmes during prime time are domestic dramas, foreign drama series (mostly American), films and entertainment programmes. Programmes classified as 18 are not allowed to be broadcast before 10:00 p.m., but on Radio Televisyen Malaysia, most programmes in this slot are rated U (Umum in Malay, literally General Viewing or General Audiences in English) throughout the whole day.
However, programmes broadcast after 23:00 are still considered prime time. As of 2019, NTV7's prime time continues until 12:00 a.m. Programmes during prime time may have longer commercial breaks due to the number of viewers. Some domestic prime-time productions may be affected by certain major sporting events such as the FIFA World Cup. However, FIFA World Cups held in the Americas do not affect the domestic prime-time programmes, only daytime ones. Pakistan In Pakistan, prime time is from 20:00 to 22:00 Pakistan Standard Time. During this time the majority of the local channels broadcast news and drama serials. State channels have broadcast Khabarnama (news bulletin) for the past many decades. Philippines In the Philippines, prime-time blocks begin at 18:00 (now 16:30 or 17:00) and run until about 23:00 (or 23:30) on weekdays, and 19:00 to 23:00 on weekends. The weekday prime-time blocks usually consist of local Philippine television dramas (soap operas) and foreign television series. The network's highest-rated programs are usually aired right after the evening newscast at 18:30 or 20:00, while a foreign series (usually a Korean drama) usually airs before the evening newscast or precedes the late-night newscast. On weekends, non-scripted programming such as comedy series, talent shows, reality shows and current affairs shows airs in prime time. For the minor networks, prime time consists of American television series on weekdays, with encores of those shows on weekends. Prime time originally started earlier, at around 19:00, but the evening newscasts were lengthened to 90 minutes and now start at 18:30, instead of the original one-hour newscast that started at 18:00. Singapore In Singapore, prime time begins at 18:00 on Channel 5, 18:30 on Channel 8 and 19:00 on Channel U, CNA, Suria and Vasantham, which are the main free-to-air television channels in Singapore.
On Channel 8, prime time ends at midnight or 0:15 on weekdays, at 0:30 on Saturday nights and at 23:30 on Sunday nights. On Channel 5, prime time ends at 0:00 on weekdays, at 1:30 (or later) on Saturday nights and at 0:30 on Sunday nights. On Suria, prime time ends at 22:30 on Monday to Thursday nights, 23:30 on Friday nights, 23:00 on weekends and at 00:30 or 01:00 on the eves and actual days of public holidays. On Vasantham, prime time ends at 23:00 on Mondays to Thursdays, midnight (or later) on Friday and Saturday nights and at 23:30 on Sunday nights. On Channel NewsAsia, prime time ends at 23:01, immediately after the news headlines, seven days a week, and on Channel U, prime time ends at 23:00 seven days a week. Generally, however, prime time is considered to be from 18:00 to 00:00. South Korea In South Korea, prime time usually runs from 19:30 to 23:00 during weekdays, while on Saturdays and Sundays it runs from 18:00 to 23:00. Family-oriented television shows are broadcast before 22:00, and adult-oriented television shows air after 22:00. Taiwan In Taiwan, prime time (called bādiǎn dàng—八點檔—in Mandarin Chinese, literally "eight o'clock slot") starts at 18:00 in the evening. Taiwanese drama series played then are called 8 o'clock series and are expected to have high viewer ratings. Also, the evening news usually starts at 18:00 or 19:00. Thailand In Thailand, prime-time dramas (ละคร; la-korn) air from 20:30 to 22:30. Most dramas are soap operas. Prime-time dramas are popular and influential in Thai society. Vietnam In Vietnam, prime time is also known as Golden Time (Vietnamese: Giờ vàng); it starts at 20:00 in the evening and ends at 23:00. Europe Austria In Austria, prime time usually starts at 20:15, after the news broadcast of the first channel ("Staatlich-rechtliches Fernsehen"), ORF1. Even though ORF2 has its news from 19:30 to 20:00, it also starts broadcasting prime-time content at 20:15.
The same applies to nearly all channels based in Austria or Germany that are broadcast in Austria. Bosnia and Herzegovina In Bosnia and Herzegovina, prime time starts at 20:00 and finishes at 22:00. It is preceded by a daily newscast (Dnevnik) at 19:00 and followed by a late-night newscast (Vijesti) at 22:00. Bulgaria In Bulgaria, prime time starts at 20:00 every day (including weekends). Usually, the programmes aired are Bulgarian or Turkish series and reality shows, followed by a late newscast. The Bulgarian National Television broadcasts Po Sveta i u Nas at 20:00 and shows cultural and political programmes from 21:00 to 22:00, followed by series and late-night news at 23:00. Croatia In Croatia, prime time starts between 20:00 and 20:15. The Croatian public broadcaster Hrvatska radiotelevizija broadcasts a daily newscast from 19:00 to 20:00. Also, many private broadcasters have daily newscasts either before or after the HRT newscast, at around 20:05, followed by the start of their own prime time. Many broadcasters without daily newscasts start their prime time at 20:00. Prime time generally ends between 22:00 and 23:00, followed by the late-night edition of the network newscast and adult-oriented programming. Denmark In Denmark, prime time starts at 20:00. Finland In Finland, prime time starts at 21:00. It is preceded by a daily newscast at 20:30. France In France, prime time starts at 21:10 (20:35 in the 1980s, 20:50 in the 1990s and 2000s, 21:05 in the 2010s). Georgia In Georgia, prime time starts between 18:45 and 20:00 and generally ends at midnight. However, on Friday night / Saturday morning prime time usually continues until 1:00. Germany At 20:00 each evening Das Erste (The First), Germany's oldest public television network, airs the country's most-watched news broadcast, the main edition of the Tagesschau—which is also simulcast on most of its other specialist and regional channels (The Third).
The conclusion of the bulletin 15 minutes later marks the beginning of prime time, as it has since the 1950s. In consequence, most channels also choose to start their prime time at 20:15. In the 1990s, the commercial channel Sat.1 suffered a significant loss of audience share when it tried moving the start of its prime time to 20:00. Greece In Greece, prime time runs from 21:00 (usually following the news) to midnight. Hungary In Hungary, prime time on weekdays on the two big commercial stations (RTL Klub and TV2) starts at 19:00 with game shows, tabloid and docu-reality programmes. Two popular soap operas air: Barátok közt at 21:00, followed by Jóban Rosszban at 21:30. American and other series, movies, talk shows and magazines run until 23:30. The prime-time lineup is preceded by daily news programmes at 18:30. At weekends prime time begins at 19:00, with blockbuster movies and television shows. Before 15 March 2015, the public television station M1 began its prime time with a game show at 18:30, which was followed by the daily news programme Híradó at 19:30. After the news, the channel broadcast American and other series, talk shows, magazines, and news programmes until 22:00, after which came the daily news magazine Este and the late edition of Híradó. From 15 March 2015, Duna began broadcasting all of the entertainment programming transferred to it from M1, meaning that prime time on Duna now begins at 18:00, starting with the simulcast of the 18:00 edition of Híradó from the newly re-launched news channel, M1. Iceland In Iceland, prime time starts at 19:30. It is preceded by a daily newscast at 19:00.
Ireland In Ireland, prime time starts at 18:30 and ends at 22:00. Italy In Italy, prime time (called "prima serata") starts between 21:00 and 21:45 (main channels) and ends between 23:30 and 00:30. On Friday and Saturday nights some shows last until 01:30–02:00. It usually follows the news and, on some networks (like Rai 1 and Canale 5), a slot called "access prime time". Shows, movies, and sport events are usually shown during prime time.
Netherlands Much like in Germany, prime time in the Netherlands usually begins at 20:30, in order not to compete with the Nederlandse Omroep Stichting's flagship 20:00 newscast. Norway In Norway, prime time starts at 19:45. On the NRK1 channel it is preceded by the daily newscast Dagsrevyen at 19:00. Locally, prime time is called (lit. "best time for broadcasting"). Poland In Poland, prime time starts around 20:00 (sometimes 20:30). On TVP1 it is preceded by a daily newscast at 19:30. On TVN the newscast is aired at 19:00, followed by the newsmagazine Uwaga at 19:50 (weekdays) or 19:45 (weekends), and then the soap opera Na Wspólnej at 20:05 (Monday to Thursday); from Friday to Sunday the 20:00 slot varies, with movies on Friday and serials, films or other programmes on Saturday and Sunday. On Polsat the news is aired at 18:50, followed by the sitcom Świat według Kiepskich at 19:30. Russia In Russia, television prime time is between 19:00 and 23:00 on working days and from 15:00 to 01:00 on holidays. On radio stations there are morning,
unlike gas turbines that operate with compressible fluid." Applications Pelton wheels are the preferred turbine for hydro-power where the available water source has relatively high hydraulic head at low flow rates. Pelton wheels are made in all sizes. There exist multi-ton Pelton wheels mounted on vertical oil pad bearings in hydroelectric plants. The largest units – the Bieudron Hydroelectric Power Station at the Grande Dixence Dam complex in Switzerland – are over 400 megawatts. The smallest Pelton wheels are only a few inches across, and can be used to tap power from mountain streams having flows of a few gallons per minute. Some of these systems use household plumbing fixtures for water delivery. These small units are recommended for use with or more of head, in order to generate significant power levels. Depending on water flow and design, Pelton wheels operate best with heads from , although there is no theoretical limit. Design rules The specific speed parameter is independent of a particular turbine's size. Compared to other turbine designs, the relatively low specific speed of the Pelton wheel implies that the geometry is inherently a "low gear" design. Thus it is most suited to being fed by a hydro source with a low ratio of flow to pressure (meaning relatively low flow and/or relatively high pressure). The specific speed is the main criterion for matching a specific hydro-electric site with the optimal turbine type. It also allows a new turbine design to be scaled from an existing design of known performance. The power-based specific speed is ns = n√P / (ρ^(1/2) (gH)^(5/4)) (a dimensionless parameter), where: n = frequency of rotation (rpm), P = power (W), H = water head (m), ρ = density (kg/m3). The formula implies that the Pelton turbine is geared most suitably for applications with relatively high hydraulic head H, due to the 5/4 exponent being greater than unity, and given the characteristically low specific speed of the Pelton.
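The site-matching rule above can be illustrated numerically. The sketch below is a minimal, hypothetical example (the rotation rate, power and head figures are invented): it uses the strictly dimensionless power-based form of the specific speed, converting the rotation rate from rpm to rad/s, whereas the formula above follows the common engineering convention of plugging in rpm directly.

```python
import math

def specific_speed(n_rpm: float, power_w: float, head_m: float,
                   density: float = 1000.0, g: float = 9.81) -> float:
    """Dimensionless power-based specific speed:
    omega * sqrt(P / rho) / (g * H)**(5/4), with omega in rad/s."""
    omega = n_rpm * 2.0 * math.pi / 60.0   # convert rpm -> rad/s
    return omega * math.sqrt(power_w / density) / (g * head_m) ** 1.25

# Two hypothetical sites at the same rotation rate and power: the
# high-head, low-flow site (Pelton territory) yields a far lower
# specific speed than the low-head site.
pelton_like = specific_speed(n_rpm=500, power_w=2.0e6, head_m=800.0)
low_head = specific_speed(n_rpm=500, power_w=2.0e6, head_m=20.0)
assert pelton_like < low_head
```

Because H enters with the 5/4 exponent, raising the head drives the specific speed down, which is why high-head sites match the Pelton wheel's characteristically low specific speed.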
Turbine physics and derivation Energy and initial jet velocity In the ideal (frictionless) case, all of the hydraulic potential energy (Ep = mgh) is converted into kinetic energy (Ek = mv^2/2) (see Bernoulli's principle). Equating these two expressions and solving for the initial jet velocity (Vi) indicates that the theoretical (maximum) jet velocity is Vi = √(2gh). For simplicity, assume that all of the velocity vectors are parallel to each other. Defining the velocity of the wheel runner as u, then as the jet approaches the runner, the initial jet velocity relative to the runner is (Vi − u). Final jet velocity Assuming that the jet velocity is higher than the runner velocity, if the water is not to become backed up in the runner, then due to conservation of mass, the mass entering the runner must equal the mass leaving the runner. The fluid is assumed to be incompressible (an accurate assumption for most liquids). It is also assumed that the cross-sectional area of the jet is constant. The jet speed remains constant relative to the runner.
So as the jet recedes from the runner, the jet velocity relative to the runner is: −(Vi − u) = −Vi + u. In the standard reference frame (relative to the earth), the final velocity is then: Vf = (−Vi + u) + u = −Vi + 2u. Optimal wheel speed The ideal runner speed will cause all of the kinetic energy in the jet to be transferred to the wheel. In this case the final jet velocity must be zero. If −Vi + 2u = 0, then the optimal runner speed will be u = Vi/2, or half the initial jet velocity. Torque By Newton's second and third laws, the force F imposed by the jet on the runner is equal but opposite to the rate of momentum change of the fluid, so F = −m(Vf − Vi)/t = −ρQ[(−Vi + 2u) − Vi] = −ρQ(−2Vi + 2u) = 2ρQ(Vi − u), where ρ is the density, and Q is the volume rate of flow of fluid. If D is the wheel diameter, the torque on the runner is T = F(D/2) = ρQD(Vi − u). The torque is maximal when the runner is stopped (i.e. when u = 0, T = ρQDVi). When the speed of the runner is equal to the initial jet velocity, the torque is zero (i.e. when u = Vi, then T = 0). On a plot of torque versus runner speed, the torque curve is straight between these two points: (0, ρQDVi) and (Vi, 0). Nozzle efficiency is the ratio of the jet power to the water power at the base of the nozzle. Power The power P = Fu = Tω, where ω is the angular velocity of the wheel. Substituting for F, we have P = 2ρQ(Vi − u)u. To find the runner speed at maximum power, take the derivative of P with respect to u and set it equal to zero: dP/du = 2ρQ(Vi − 2u). Maximum power occurs when u = Vi/2, giving Pmax = ρQVi2/2. Substituting the initial jet velocity Vi = √(2gh), this simplifies to Pmax = ρghQ. This quantity exactly equals the hydraulic power of the jet, so in this ideal case the runner extracts all of the power of the jet.
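The relations above can be checked numerically. This is a minimal sketch, assuming SI units and water density 1000 kg/m3; the site values (100 m head, 0.1 m3/s flow) are illustrative and not taken from the text:

```python
import math

RHO = 1000.0   # water density, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2

def jet_velocity(head_m):
    """Ideal jet velocity from Bernoulli: Vi = sqrt(2*g*h)."""
    return math.sqrt(2 * G * head_m)

def runner_power(q, vi, u):
    """P = F*u = 2*rho*Q*(Vi - u)*u, maximised at u = Vi/2."""
    return 2 * RHO * q * (vi - u) * u

head, q = 100.0, 0.1            # illustrative values only
vi = jet_velocity(head)         # ~44.3 m/s
u_opt = vi / 2                  # optimal runner speed, half the jet velocity
p_max = runner_power(q, vi, u_opt)

# p_max should equal rho*g*h*Q, the full hydraulic power of the jet.
print(round(vi, 1), round(p_max), round(RHO * G * head * q))
```

Note that `runner_power` is zero both at `u = 0` (stalled, maximum torque) and at `u = vi` (runaway, zero torque), matching the straight torque line described above.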
companies doing the development, mostly due to the wartime beginnings of the field, and in the interests of securing profitable patents. New materials were the first to be developed: quartz crystals were the first commercially exploited piezoelectric material, but scientists searched for higher-performance materials. Despite the advances in materials and the maturation of manufacturing processes, the United States market did not grow as quickly as Japan's did. Without many new applications, the growth of the United States' piezoelectric industry suffered. In contrast, Japanese manufacturers shared their information, quickly overcoming technical and manufacturing challenges and creating new markets. In Japan, a temperature-stable crystal cut was developed by Issac Koga. Japanese efforts in materials research created piezoceramic materials competitive with the United States materials but free of expensive patent restrictions. Major Japanese piezoelectric developments included new designs of piezoceramic filters for radios and televisions, piezo buzzers and audio transducers that can connect directly to electronic circuits, and the piezoelectric igniter, which generates sparks for small-engine ignition systems and gas-grill lighters by compressing a ceramic disc. Ultrasonic transducers that transmit sound waves through air had existed for quite some time but first saw major commercial use in early television remote controls. These transducers are now mounted on several car models as an echolocation device, helping the driver determine the distance from the car to any objects that may be in its path. Mechanism The nature of the piezoelectric effect is closely related to the occurrence of electric dipole moments in solids. The latter may either be induced for ions on crystal lattice sites with asymmetric charge surroundings (as in BaTiO3 and PZTs) or may directly be carried by molecular groups (as in cane sugar).
The dipole density or polarization (dimensionality [C·m/m3]) may easily be calculated for crystals by summing up the dipole moments per volume of the crystallographic unit cell. As every dipole is a vector, the dipole density P is a vector field. Dipoles near each other tend to be aligned in regions called Weiss domains. The domains are usually randomly oriented, but can be aligned using the process of poling (not the same as magnetic poling), a process by which a strong electric field is applied across the material, usually at elevated temperatures. Not all piezoelectric materials can be poled. Of decisive importance for the piezoelectric effect is the change of polarization P when applying a mechanical stress. This might either be caused by a reconfiguration of the dipole-inducing surroundings or by re-orientation of molecular dipole moments under the influence of the external stress. Piezoelectricity may then manifest in a variation of the polarization strength, its direction or both, with the details depending on: 1. the orientation of P within the crystal; 2. crystal symmetry; and 3. the applied mechanical stress. The change in P appears as a variation of surface charge density upon the crystal faces, i.e. as a variation of the electric field extending between the faces caused by a change in dipole density in the bulk. For example, a 1 cm3 cube of quartz with 2 kN (500 lbf) of correctly applied force can produce a voltage of 12,500 V. Piezoelectric materials also show the opposite effect, called the converse piezoelectric effect, where the application of an electric field creates mechanical deformation in the crystal. Mathematical description Linear piezoelectricity is the combined effect of: The linear electrical behavior of the material, D = εE, where D is the electric flux density (electric displacement), ε is the permittivity (free-body dielectric constant), and E is the electric field strength.
Hooke's law for linear elastic materials, S = sT, where S is the linearized strain (derived from the displacement vector u), s is the compliance under short-circuit conditions, and T is the stress. These may be combined into so-called coupled equations, of which the strain-charge form is: {S} = [sE]{T} + [dt]{E} and {D} = [d]{T} + [εT]{E}, where d is the piezoelectric tensor and the superscript t stands for its transpose. Due to the symmetry of d, (dt)ijk = dkij. In matrix form, [d] is the matrix for the direct piezoelectric effect and [dt] is the matrix for the converse piezoelectric effect. The superscript E indicates a zero, or constant, electric field; the superscript T indicates a zero, or constant, stress field; and the superscript t stands for transposition of a matrix. Notice that the third-order tensor maps vectors into symmetric matrices. There are no non-trivial rotation-invariant tensors that have this property, which is why there are no isotropic piezoelectric materials. The strain-charge equations for a material of the 4mm (C4v) crystal class (such as a poled piezoelectric ceramic like tetragonal PZT or BaTiO3) as well as the 6mm crystal class may also be written out in full matrix form (ANSI IEEE 176), where the first equation represents the relationship for the converse piezoelectric effect and the latter for the direct piezoelectric effect. Although the above equations are the most used form in literature, some comments about the notation are necessary. Generally, D and E are vectors, that is, Cartesian tensors of rank 1; and permittivity ε is a Cartesian tensor of rank 2. Strain and stress are, in principle, also rank-2 tensors. But conventionally, because strain and stress are all symmetric tensors, the subscripts of strain and stress can be relabeled in the following fashion: 11 → 1; 22 → 2; 33 → 3; 23 → 4; 13 → 5; 12 → 6. (Different conventions may be used by different authors in literature. For example, some use 12 → 4; 23 → 5; 31 → 6 instead.) That is why S and T appear to have the "vector form" of six components.
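The strain-charge bookkeeping in this six-component (Voigt) form can be sketched numerically. This is a minimal illustration assuming the standard 3×6 sparsity pattern of [d] for a poled 4mm ceramic; the coefficient values, permittivities, and the 1 MPa load are hypothetical, chosen only to show the index conventions:

```python
import numpy as np

# Voigt relabelling from the text: 11->1, 22->2, 33->3, 23->4, 13->5, 12->6,
# so stress T becomes a 6-vector and the piezoelectric tensor d a 3x6 matrix.

# Hypothetical coefficients (C/N), roughly PZT-like in magnitude:
d31, d33, d15 = -1.7e-10, 4.0e-10, 5.8e-10
d = np.array([
    [0.0, 0.0, 0.0, 0.0, d15, 0.0],
    [0.0, 0.0, 0.0, d15, 0.0, 0.0],
    [d31, d31, d33, 0.0, 0.0, 0.0],
])
eps_T = np.diag([1.5e-8, 1.5e-8, 1.7e-8])     # permittivity at constant stress, assumed

T = np.array([0.0, 0.0, 1e6, 0.0, 0.0, 0.0])  # 1 MPa along the poling axis (T3)
E = np.zeros(3)                               # short circuit: no applied field

# Direct effect in strain-charge form: {D} = [d]{T} + [eps_T]{E}
D = d @ T + eps_T @ E
print(D)   # only D3 is non-zero, equal to d33 * T3
```

A uniaxial stress along the poling axis only excites the third row of [d], which is why just D3 comes out non-zero here.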
Consequently, s appears to be a 6-by-6 matrix instead of a rank-4 tensor. Such a relabeled notation is often called Voigt notation. Whether the shear strain components S4, S5, S6 are tensor components or engineering strains is another question. In the equation above, they must be engineering strains for the 6,6 coefficient of the compliance matrix to be written as shown, i.e., 2(s11 − s12). Engineering shear strains are double the value of the corresponding tensor shear, such as S6 = 2S12 and so on. This also means that s66 = 1/G12, where G12 is the shear modulus. In total, there are four piezoelectric coefficients, dij, eij, gij, and hij, each defined as a pair of derivatives, of which the first corresponds to the direct piezoelectric effect and the second to the converse piezoelectric effect. The equality between the direct piezoelectric tensor and the transpose of the converse piezoelectric tensor originates from the Maxwell relations of thermodynamics. For those piezoelectric crystals for which the polarization is of the crystal-field-induced type, a formalism has been worked out that allows for the calculation of piezoelectric coefficients dij from electrostatic lattice constants or higher-order Madelung constants. Crystal classes Of the 32 crystal classes, 21 are non-centrosymmetric (not having a centre of symmetry), and of these, 20 exhibit direct piezoelectricity (the 21st is the cubic class 432). Ten of these represent the polar crystal classes, which show a spontaneous polarization without mechanical stress due to a non-vanishing electric dipole moment associated with their unit cell, and which exhibit pyroelectricity. If the dipole moment can be reversed by applying an external electric field, the material is said to be ferroelectric. The 10 polar (pyroelectric) crystal classes: 1, 2, m, mm2, 4, 4mm, 3, 3m, 6, 6mm. The other 10 piezoelectric crystal classes: 222, 4̄, 422, 4̄2m, 32, 6̄, 622, 6̄2m, 23, 4̄3m.
For polar crystals, for which P ≠ 0 holds without applying a mechanical load, the piezoelectric effect manifests itself by changing the magnitude or the direction of P or both. For the nonpolar but piezoelectric crystals, on the other hand, a polarization P different from zero is only elicited by applying a mechanical load. For them the stress can be imagined to transform the material from a nonpolar crystal class (P = 0) to a polar one, having P ≠ 0. Materials Many materials exhibit piezoelectricity. Crystalline materials Langasite (La3Ga5SiO14) – a quartz-analogous crystal Gallium orthophosphate (GaPO4) – a quartz-analogous crystal Lithium niobate (LiNbO3) Lithium tantalate (LiTaO3) Quartz Berlinite (AlPO4) – a rare phosphate mineral that is structurally identical to quartz Rochelle salt Topaz – Piezoelectricity in Topaz can probably be attributed to ordering of the (F,OH) in its lattice, which is otherwise centrosymmetric: orthorhombic bipyramidal (mmm). Topaz has anomalous optical properties which are attributed to such ordering. Tourmaline-group minerals Lead titanate (PbTiO3) – Although it occurs in nature as mineral macedonite, it is synthesized for research and applications. Ceramics Ceramics with randomly oriented grains must be ferroelectric to exhibit piezoelectricity. The occurrence of abnormal grain growth (AGG) in sintered polycrystalline piezoelectric ceramics has detrimental effects on the piezoelectric performance in such systems and should be avoided, as the microstructure in piezoceramics exhibiting AGG tends to consist of few abnormally large elongated grains in a matrix of randomly oriented finer grains. Macroscopic piezoelectricity is possible in textured polycrystalline non-ferroelectric piezoelectric materials, such as AlN and ZnO. 
The families of ceramics with perovskite, tungsten-bronze, and related structures exhibit piezoelectricity: Lead zirconate titanate (Pb[ZrxTi1−x]O3 with 0 ≤ x ≤ 1) – more commonly known as PZT, the most common piezoelectric ceramic in use today. Potassium niobate (KNbO3) Sodium tungstate (Na2WO3) Ba2NaNb5O15 Pb2KNb5O15 Zinc oxide (ZnO) – Wurtzite structure. While single crystals of ZnO are piezoelectric and pyroelectric, polycrystalline (ceramic) ZnO with randomly oriented grains exhibits neither piezoelectric nor pyroelectric effect. Not being ferroelectric, polycrystalline ZnO cannot be poled like barium titanate or PZT. Ceramics and polycrystalline thin films of ZnO may exhibit macroscopic piezoelectricity and pyroelectricity only if they are textured (grains are preferentially oriented), such that the piezoelectric and pyroelectric responses of all individual grains do not cancel. This is readily accomplished in polycrystalline thin films. Lead-free piezoceramics Sodium potassium niobate ((K,Na)NbO3). This material is also known as NKN or KNN. In 2004, a group of Japanese researchers led by Yasuyoshi Saito discovered a sodium potassium niobate composition with properties close to those of PZT, including a high TC. Certain compositions of this material have been shown to retain a high mechanical quality factor (Qm ≈ 900) with increasing vibration levels, whereas the mechanical quality factor of hard PZT degrades in such conditions.
This fact makes NKN a promising replacement for high-power resonance applications, such as piezoelectric transformers. Bismuth ferrite (BiFeO3) – a promising candidate for the replacement of lead-based ceramics. Sodium niobate (NaNbO3) Barium titanate (BaTiO3) – Barium titanate was the first piezoelectric ceramic discovered. Bismuth titanate (Bi4Ti3O12) Sodium bismuth titanate (NaBi(TiO3)2) The fabrication of lead-free piezoceramics poses multiple challenges, both from an environmental standpoint and in the ability to replicate the properties of their lead-based counterparts. By removing the lead component of the piezoceramic, the risk of toxicity to humans decreases, but the mining and extraction of the materials can be harmful to the environment. Analysis of the environmental profile of PZT versus sodium potassium niobate (NKN or KNN) shows that across the four indicators considered (primary energy consumption, toxicological footprint, eco-indicator 99, and input-output upstream greenhouse gas emissions), KNN is actually more harmful to the environment. Most of the concerns with KNN, specifically its Nb2O5 component, are in the early phase of its life cycle before it reaches manufacturers. Since the harmful impacts are focused on these early phases, some actions can be taken to minimize the effects. Returning the land as close as possible to its original form after Nb2O5 mining, via dam deconstruction or replacement of stockpiled utilizable soil, is a known aid for any extraction event. For minimizing air-quality effects, modeling and simulation still need to occur to fully understand what mitigation methods are required. The extraction of lead-free piezoceramic components has not grown to a significant scale at this time, but from early analysis, experts encourage caution when it comes to environmental effects. Fabricating lead-free piezoceramics faces the challenge of maintaining the performance and stability of their lead-based counterparts.
In general, the main fabrication challenge is creating the "morphotropic phase boundaries (MPBs)" that provide the materials with their stable piezoelectric properties without introducing the "polymorphic phase boundaries (PPBs)" that decrease the temperature stability of the material. New phase boundaries are created by varying additive concentrations so that the phase transition temperatures converge at room temperature. The introduction of an MPB improves piezoelectric properties, but if a PPB is introduced, the material becomes negatively affected by temperature. Research is ongoing to control the type of phase boundaries that are introduced, through phase engineering, diffusing phase transitions, domain engineering, and chemical modification. III–V and II–VI semiconductors A piezoelectric potential can be created in any bulk or nanostructured semiconductor crystal lacking central symmetry, such as the Group III–V and II–VI materials, due to polarization of ions under applied stress and strain. This property is common to both the zincblende and wurtzite crystal structures. To first order, there is only one independent piezoelectric coefficient in zincblende, called e14, coupled to shear components of the strain. In wurtzite, there are instead three independent piezoelectric coefficients: e31, e33 and e15. The semiconductors where the strongest piezoelectricity is observed are those commonly found in the wurtzite structure, i.e. GaN, InN, AlN and ZnO (see piezotronics). Since 2006, there have also been a number of reports of strong nonlinear piezoelectric effects in polar semiconductors. Such effects are generally recognized to be at least as important as, if not of the same order of magnitude as, the first-order approximation. Polymers The piezo-response of polymers is not as high as the response of ceramics; however, polymers hold properties that ceramics do not.
Over the last few decades, non-toxic piezoelectric polymers have been studied and applied due to their flexibility and smaller acoustic impedance. Other properties that make these materials significant include their biocompatibility, biodegradability, low cost, and low power consumption compared to other piezo-materials (ceramics, etc.). Piezoelectric polymers and non-toxic polymer composites can be used given their different physical properties. Piezoelectric polymers can be classified as bulk polymers, voided charged polymers ("piezoelectrets"), and polymer composites. The piezo-response observed in bulk polymers is mostly due to their molecular structure. There are two types of bulk polymers: amorphous and semi-crystalline. Examples of semi-crystalline polymers are polyvinylidene fluoride (PVDF) and its copolymers, polyamides, and Parylene-C. Non-crystalline polymers, such as polyimide and polyvinylidene chloride (PVDC), fall under amorphous bulk polymers. Voided charged polymers exhibit the piezoelectric effect due to charge induced by poling of a porous polymeric film. Under an electric field, charges form on the surfaces of the voids, forming dipoles. Electric responses can be caused by any deformation of these voids. The piezoelectric effect can also be observed in polymer composites by integrating piezoelectric ceramic particles into a polymer film. A polymer does not have to be piezo-active to be an effective material for a polymer composite. In this case, a material could be made up of an inert matrix with a separate piezo-active component. PVDF exhibits piezoelectricity several times greater than quartz. The piezo-response observed from PVDF is about 20–30 pC/N: some 5–50 times less than that of the piezoelectric ceramic lead zirconate titanate (PZT). The thermal stability of the piezoelectric effect of polymers in the PVDF family (i.e. vinylidene fluoride co-poly trifluoroethylene) extends up to 125 °C.
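The gap between PVDF and PZT can be made concrete through the charge each generates per unit force, Q = d33·F. A minimal sketch; the PVDF coefficient is the mid-range of the 20–30 pC/N quoted above, while the PZT coefficient is an assumed representative value, not taken from the text:

```python
# Charge generated by the direct effect: Q = d33 * F.
D33_PVDF = 25e-12   # C/N, mid-range of the 20-30 pC/N quoted above
D33_PZT = 500e-12   # C/N, assumed illustrative value for a soft PZT

force = 10.0        # N, arbitrary test load
q_pvdf = D33_PVDF * force
q_pzt = D33_PZT * force

# With these assumed numbers the ceramic yields ~20x the charge of the
# polymer, inside the 5-50x range quoted in the text.
print(q_pvdf, q_pzt, q_pzt / q_pvdf)
```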
Some applications of PVDF are pressure sensors, hydrophones, and shock wave sensors. Due to their flexibility, piezoelectric composites have been proposed as energy harvesters and nanogenerators. In 2018, it was reported by Zhu et al. that a piezoelectric response of about 17 pC/N could be obtained from a PDMS/PZT nanocomposite at 60% porosity. Another PDMS nanocomposite was reported in 2017, in which BaTiO3 was integrated into PDMS to make a stretchable, transparent nanogenerator for self-powered physiological monitoring. In 2016, polar molecules were introduced into a polyurethane foam, in which high responses of up to 244 pC/N were reported. Other materials Most materials exhibit at least weak piezoelectric responses. Trivial examples include sucrose (table sugar), DNA, and viral proteins, including those from bacteriophages. An actuator based on wood fibers, called cellulose fibers, has been reported. D33 responses for cellular polypropylene are around 200 pC/N. Some applications of cellular polypropylene are musical key pads, microphones, and ultrasound-based echolocation systems. Recently, single amino acids such as β-glycine have also displayed high piezoelectricity (178 pm V−1) compared to other biological materials. Application Currently, industrial and manufacturing applications form the largest market for piezoelectric devices, followed by the automotive industry. Strong demand also comes from medical instruments as well as information and telecommunications. The global demand for piezoelectric devices was valued at approximately US$21.6 billion in 2015. The largest material group for piezoelectric devices is piezoceramics, while piezopolymers are experiencing the fastest growth due to their low weight and small size. Piezoelectric crystals are now used in numerous ways: High voltage and power sources Direct piezoelectricity of some substances, like quartz, can generate potential differences of thousands of volts.
The best-known application is the electric cigarette lighter: pressing the button causes a spring-loaded hammer to hit a piezoelectric crystal, producing a voltage high enough to drive a current across a small spark gap, thus heating and igniting the gas. The portable sparkers used to ignite gas stoves work the same way, and many types of gas burners now have built-in piezo-based ignition systems. A similar idea is being researched by DARPA in the United States in a project called energy harvesting, which includes an attempt to power battlefield equipment by piezoelectric generators embedded in soldiers' boots. However, these energy-harvesting sources place a burden on the body that carries them. DARPA's effort to harness 1–2 watts from continuous shoe impact while walking was abandoned due to the impracticality and the discomfort from the additional energy expended by a person wearing the shoes. Other energy harvesting ideas include harvesting the energy
Two functions from the reals to itself can be multiplied in another way, called the convolution. If f and g are two integrable functions, then the integral (f ∗ g)(t) = ∫ f(τ)·g(t − τ) dτ is well defined and is called the convolution. Under the Fourier transform, convolution becomes point-wise function multiplication. Polynomial rings The product of two polynomials is given by the following: (Σ ai xi)(Σ bj xj) = Σ ck xk, with ck = Σi+j=k ai bj. Products in linear algebra There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections. Scalar multiplication By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map K × V → V. Scalar product A scalar product is a bi-linear map ⟨·, ·⟩ : V × V → R that is symmetric and satisfies ⟨x, x⟩ ≥ 0 for all x, with equality only for x = 0. From the scalar product, one can define a norm by letting ‖x‖ = √⟨x, x⟩. The scalar product also allows one to define an angle θ between two vectors, via cos θ = ⟨x, y⟩/(‖x‖·‖y‖). In n-dimensional Euclidean space, the standard scalar product (called the dot product) is given by x · y = Σ xi yi. Cross product in 3-dimensional space The cross product of two vectors in 3-dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. The cross product can also be expressed as the formal determinant of the matrix whose rows are the unit vectors i, j, k and the components of the two factors. Composition of linear mappings A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying f(v1 + v2) = f(v1) + f(v2) and f(c·v) = c·f(v). If one only considers finite-dimensional vector spaces, then f(v) = f(vi bVi) = vi f(bVi), in which bV and bW denote the bases of V and W, vi denotes the component of v on bVi, and the Einstein summation convention is applied. Now we consider the composition of two linear mappings between finite-dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U.
Then one can get (g ∘ f)(v) = g(f(v)), or in matrix form: H = GF, in which the i-row, j-column element of F, denoted by Fij, is fji, and Gij = gji. The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplications. Product of two matrices Given two matrices A = (ai,j) of size m × n and B = (bj,k) of size n × p, their product AB is the m × p matrix given by (AB)i,k = Σj ai,j bj,k. Composition of linear functions as matrix product There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let {u1, …, ur} be a basis of U, {v1, …, vs} be a basis of V and {w1, …, wt} be a basis of W. In terms of these bases, let A be the matrix representing f : U → V and B be the matrix representing g : V → W. Then BA is the matrix representing g ∘ f : U → W. In other words: the matrix product is the description in coordinates of the composition of linear functions. Tensor product of vector spaces Given two finite-dimensional vector spaces V and W, their tensor product can be defined as a (2,0)-tensor, i.e. a bilinear map on V* × W*, where V* and W* denote the dual spaces of V and W. For infinite-dimensional vector spaces, one also has the: Tensor product of Hilbert spaces Topological tensor product. The tensor product, outer product and Kronecker product all convey essentially the same idea. Product of two real numbers For every real number a there is a set A of rational numbers such that a is the least upper bound of the elements of A: a = sup A. If b is another real number that is the least upper bound of B, the product ab is defined (for positive numbers; the general case follows by handling signs) as ab = sup{xy : x ∈ A, y ∈ B}. This definition does not depend on a particular choice of A and B. That is, if they are changed without changing their least upper bound, then the least upper bound defining the product is not changed. Product of two complex numbers Two complex numbers can be multiplied by the distributive law and the fact that i2 = −1, as follows: (a + bi)(c + di) = (ac − bd) + (ad + bc)i. Geometric meaning of complex multiplication Complex numbers can be written in polar coordinates: z = r(cos φ + i sin φ). Furthermore, z1z2 = r1r2(cos(φ1 + φ2) + i sin(φ1 + φ2)). The geometric meaning is that the magnitudes are multiplied and the arguments are added.
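Two of the products above can be checked numerically: the matrix product as the coordinate description of composing linear maps, and the polar form of complex multiplication (magnitudes multiply, arguments add). A minimal sketch with arbitrary illustrative values:

```python
import cmath
import numpy as np

# Composition of linear maps: applying f then g equals applying (G @ F) once.
F = np.array([[1, 2],
              [0, 1],
              [3, 0]])        # f : R^2 -> R^3
G = np.array([[1, 0, 1],
              [2, 1, 0]])     # g : R^3 -> R^2
v = np.array([1, 1])
assert np.array_equal(G @ (F @ v), (G @ F) @ v)

# Complex multiplication in polar form: |z1*z2| = |z1||z2|, arguments add.
z1, z2 = 3 + 4j, 1 + 1j
r, phi = cmath.polar(z1 * z2)
assert abs(r - abs(z1) * abs(z2)) < 1e-12
assert abs(phi - (cmath.phase(z1) + cmath.phase(z2))) < 1e-12
print("ok")
```

(The argument check works here because the summed arguments stay within (−π, π]; in general the sum must be reduced modulo 2π.)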
Product of two quaternions The product of two quaternions can be found in the article on quaternions. Note, in this case, that and are in general different. Product of a sequence The product operator for the product of a sequence is denoted by the capital Greek letter pi Π (in analogy to the use of the capital Sigma Σ as summation symbol). For example, the expression is another way of writing . The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1. Commutative rings Commutative rings have a product operation. Residue classes of integers Residue classes in the rings can be added: and multiplied: Convolution Two functions from the reals to itself can be multiplied in another way, called the convolution. If then the integral is well defined and is called the convolution. Under the Fourier transform, convolution becomes point-wise function multiplication. Polynomial rings The product of two polynomials is given by the following: with Products in linear algebra There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections. Scalar multiplication By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map . Scalar product A scalar product is a bi-linear map: with the following conditions, that for all . From the scalar product, one can define a norm by letting . 
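A few of the constructions described here (the sequence product with its empty-product convention, residue-class arithmetic, and the earlier point that multiplying polynomials convolves their coefficient lists) can be illustrated in a short sketch:

```python
import math
import numpy as np

# Product of a sequence: one factor is itself, the empty product is 1.
assert math.prod([4, 5, 6]) == 120
assert math.prod([7]) == 7
assert math.prod([]) == 1

# Residue classes mod N: the class of a product is the product of the classes.
N, a, b = 7, 12, 10
assert (a % N) * (b % N) % N == (a * b) % N

# Polynomial rings: multiplying polynomials convolves their coefficient
# lists (ascending powers): (1 + 2x)(1 + 3x) = 1 + 5x + 6x^2.
product = np.convolve([1, 2], [1, 3])
print(product)   # [1 5 6]
```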
The scalar product also allows one to define an angle between two vectors:
$\cos \angle(v, w) = \frac{\langle v, w \rangle}{\|v\| \cdot \|w\|}.$
In $n$-dimensional Euclidean space, the standard scalar product (called the dot product) is given by:
$\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.$

Cross product in 3-dimensional space

The cross product of two vectors in 3 dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors. The cross product can also be expressed as the formal determinant:
$\mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}.$

Composition of linear mappings

A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying
$f(t_1 x_1 + t_2 x_2) = t_1 f(x_1) + t_2 f(x_2), \quad \forall x_1, x_2 \in V,\ \forall t_1, t_2 \in F.$
If one only considers finite dimensional vector spaces, then
$f(\mathbf{v}) = f\left(v_i \mathbf{b}_V^i\right) = v_i f\left(\mathbf{b}_V^i\right),$
in which $\mathbf{b}_V$ and $\mathbf{b}_W$ denote the bases of V and W, $v_i$ denotes the component of $\mathbf{v}$ on $\mathbf{b}_V^i$, and the Einstein summation convention is applied.
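The two defining properties of the cross product, perpendicularity to both factors and length equal to the parallelogram area, can be verified directly from the determinant expansion (the helper names `dot` and `cross` are ours):

```python
# Sketch: cross product via the formal-determinant expansion.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    # Expanding the determinant along the first row gives:
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 0, 0), (0, 2, 0)
w = cross(u, v)
assert w == (0, 0, 2)                      # |w| = 2, the parallelogram's area
assert dot(w, u) == 0 and dot(w, v) == 0   # perpendicular to both factors
```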
Betti numbers. Similarly, the notion of orientability of a polyhedron is insufficient to characterise the surface twistings of toroidal 4-polytopes, and this led to the use of torsion coefficients.

Classification

Criteria

Like all polytopes, 4-polytopes may be classified based on properties like "convexity" and "symmetry".

A 4-polytope is convex if its boundary (including its cells, faces and edges) does not intersect itself and the line segment joining any two points of the 4-polytope is contained in the 4-polytope or its interior; otherwise, it is non-convex. Self-intersecting 4-polytopes are also known as star 4-polytopes, from analogy with the star-like shapes of the non-convex star polygons and Kepler–Poinsot polyhedra.

A 4-polytope is regular if it is transitive on its flags. This means that its cells are all congruent regular polyhedra, and similarly its vertex figures are congruent and of another kind of regular polyhedron.

A convex 4-polytope is semi-regular if it has a symmetry group under which all vertices are equivalent (vertex-transitive) and its cells are regular polyhedra. The cells may be of two or more kinds, provided that they have the same kind of face. There are only 3 cases, identified by Thorold Gosset in 1900: the rectified 5-cell, the rectified 600-cell, and the snub 24-cell.

A 4-polytope is uniform if it has a symmetry group under which all vertices are equivalent, and its cells are uniform polyhedra. The faces of a uniform 4-polytope must be regular.

A 4-polytope is scaliform if it is vertex-transitive and has all equal-length edges. This allows cells which are not uniform, such as the regular-faced convex Johnson solids.

A regular 4-polytope which is also convex is said to be a convex regular 4-polytope.

A 4-polytope is prismatic if it is the Cartesian product of two or more lower-dimensional polytopes. A prismatic 4-polytope is uniform if its factors are uniform. The hypercube is prismatic (the product of two squares, or of a cube and a line segment), but is considered separately because it has symmetries other than those inherited from its factors.

A tiling or honeycomb of 3-space is the division of three-dimensional Euclidean space into a repetitive grid of polyhedral cells. Such tilings or tessellations are infinite and do not bound a "4D" volume, and are examples of infinite 4-polytopes. A uniform tiling of 3-space is one whose vertices are congruent and related by a space group and whose cells are uniform polyhedra.

Classes

The following lists the various categories of 4-polytopes classified according to the criteria above:

Uniform 4-polytopes (vertex-transitive):
  Convex uniform 4-polytopes (64, plus two infinite families)
    47 non-prismatic convex uniform 4-polytopes, including:
      6 convex regular 4-polytopes
    Prismatic uniform 4-polytopes:
      {} × {p,q}: 18 polyhedral hyperprisms (including the cubic hyperprism, the regular hypercube)
      Prisms built on antiprisms (infinite family)
      {p} × {q}: duoprisms (infinite family)
  Non-convex uniform 4-polytopes (10 + unknown)
    10 (regular) Schläfli–Hess polytopes
    57 hyperprisms built on nonconvex uniform polyhedra
    Unknown total number of nonconvex uniform 4-polytopes: Norman Johnson and other collaborators have identified 2189 known cases (convex and star, excluding the infinite families), all constructed by vertex figures using the Stella4D software.

Other convex 4-polytopes:
  Polyhedral pyramids
  Polyhedral prisms

Infinite uniform 4-polytopes of Euclidean 3-space (uniform tessellations of convex uniform cells):
  28 convex uniform honeycombs: uniform convex polyhedral tessellations, including:
    1 regular tessellation, the cubic honeycomb: {4,3,4}

Infinite uniform 4-polytopes of hyperbolic 3-space (uniform tessellations of convex uniform cells):
  76 Wythoffian convex uniform honeycombs in hyperbolic space, including:
    4 regular tessellations of compact hyperbolic 3-space: {3,5,3}, {4,3,5}, {5,3,4}, {5,3,5}

Dual uniform 4-polytopes (cell-transitive):
  41 unique dual convex uniform 4-polytopes
  17 unique dual convex uniform polyhedral prisms
  an infinite family of dual convex uniform duoprisms (irregular tetrahedral cells)
  27 unique convex dual uniform honeycombs, including:
    the rhombic dodecahedral honeycomb
    the disphenoid tetrahedral honeycomb

Others:
  the Weaire–Phelan structure, a periodic space-filling honeycomb with irregular cells

Abstract regular 4-polytopes:
  the 11-cell
  the 57-cell

These categories include only the 4-polytopes that exhibit a high degree of symmetry. Many other 4-polytopes are possible, but they have not been studied as extensively as the ones included in these categories.

See also

Regular 4-polytope
3-sphere – the analogue of a sphere in 4-dimensional space. This is not a 4-polytope, since it is not bounded by polyhedral cells.
The duocylinder is a figure in 4-dimensional space related to the duoprisms. It is also not a 4-polytope because its bounding volumes are not polyhedral.

References

Notes

Bibliography

H.S.M. Coxeter:
  H.S.M. Coxeter, M.S. Longuet-Higgins and J.C.P. Miller: Uniform Polyhedra, Philosophical Transactions of the Royal Society of London, London, 1954
  Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995
    (Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10]
    (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559–591]
    (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3–45]
J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, pages 38 and 39, 1965
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
Marco Möller: Four-dimensional Archimedean Polytopes (German), PhD dissertation, 2004

External links

Uniform Polychora, Jonathan Bowers
Uniform polychoron Viewer – Java3D Applet with sources
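A quick numerical cross-check of the 4-polytope material above: every convex 4-polytope has Euler characteristic V − E + F − C = 0 (the four-dimensional analogue of V − E + F = 2 for polyhedra). The element counts below are the standard ones for three of the regular 4-polytopes mentioned in the text:

```python
# Sketch: V - E + F - C = 0 for convex 4-polytopes.
# Counts are (vertices, edges, faces, cells).
polytopes = {
    "5-cell":    (5, 10, 10, 5),
    "tesseract": (16, 32, 24, 8),
    "600-cell":  (120, 720, 1200, 600),
}
for name, (v, e, f, c) in polytopes.items():
    assert v - e + f - c == 0, name
```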
Hierarchical evolution

Punctuated equilibrium has also been cited as contributing to the hypothesis that species are Darwinian individuals, and not just classes, thereby providing a stronger framework for a hierarchical theory of evolution.

Common misconceptions

Much confusion has arisen over what proponents of punctuated equilibrium actually argued, what mechanisms they advocated, how fast the punctuations were, what taxonomic scale their theory applied to, how revolutionary their claims were intended to be, and how punctuated equilibrium related to other ideas like saltationism, quantum evolution, and mass extinction.

Saltationism

The punctuational nature of punctuated equilibrium has engendered perhaps the most confusion over Eldredge and Gould's theory. Gould's sympathetic treatment of Richard Goldschmidt, the controversial geneticist who advocated the idea of "hopeful monsters," led some biologists to conclude that Gould's punctuations were occurring in single-generation jumps. This interpretation has frequently been used by creationists to characterize the weakness of the paleontological record, and to portray contemporary evolutionary biology as advancing neo-saltationism. In an often-quoted remark, Gould stated, "Since we proposed punctuated equilibria to explain trends, it is infuriating to be quoted again and again by creationists—whether through design or stupidity, I do not know—as admitting that the fossil record includes no transitional forms. Transitional forms are generally lacking at the species level, but they are abundant between larger groups." Although there exists some debate over how long the punctuations last, supporters of punctuated equilibrium generally place the figure between 50,000 and 100,000 years.
Quantum evolution Quantum evolution was a controversial hypothesis advanced by Columbia University paleontologist George Gaylord Simpson, regarded by Gould as "the greatest and most biologically astute paleontologist of the twentieth century." Simpson's conjecture was that according to the geological record, on very rare occasions evolution would proceed very rapidly to form entirely new families, orders, and classes of organisms. This hypothesis differs from punctuated equilibrium in several respects. First, punctuated equilibrium was more modest in scope, in that it was addressing evolution specifically at the species level. Simpson's idea was principally concerned with evolution at higher taxonomic groups. Second, Eldredge and Gould relied upon a different mechanism. Where Simpson relied upon a synergistic interaction between genetic drift and a shift in the adaptive fitness landscape, Eldredge and Gould relied upon ordinary speciation, particularly Ernst Mayr's concept of allopatric speciation. Lastly, and perhaps most significantly, quantum evolution took no position on the issue of stasis. Although Simpson acknowledged the existence of stasis in what he called the bradytelic mode, he considered it (along with rapid evolution) to be unimportant in the larger scope of evolution. In his Major Features of Evolution Simpson stated, "Evolutionary change is so nearly the universal rule that a state of motion is, figuratively, normal in evolving populations. The state of rest, as in bradytely, is the exception and it seems that some restraint or force must be required to maintain it." Despite such differences between the two models, earlier critiques—from such eminent commentators as Sewall Wright as well as Simpson himself—have argued that punctuated equilibrium is little more than quantum evolution relabeled. Multiple meanings of gradualism Punctuated equilibrium is often portrayed to oppose the concept of gradualism, when it is actually a form of gradualism. 
This is because even though evolutionary change appears instantaneous between geological sedimentary layers, change is still occurring incrementally, with no great change from one generation to the next. To this end, Gould later commented that "Most of our paleontological colleagues missed this insight because they had not studied evolutionary theory and either did not know about allopatric speciation or had not considered its translation to geological time. Our evolutionary colleagues also failed to grasp the implication(s), primarily because they did not think at geological scales".

Richard Dawkins dedicates a chapter in The Blind Watchmaker to correcting, in his view, the wide confusion regarding rates of change. His first point is to argue that phyletic gradualism—understood in the sense that evolution proceeds at a single uniform speed, called "constant speedism" by Dawkins—is a "caricature of Darwinism" and "does not really exist". His second argument, which follows from the first, is that once the caricature of "constant speedism" is dismissed, we are left with one logical alternative, which Dawkins terms "variable speedism". Variable speedism may be further distinguished in one of two ways: "discrete variable speedism" and "continuously variable speedism". Eldredge and Gould, proposing that evolution jumps between stability and relative rapidity, are described as "discrete variable speedists", and "in this respect they are genuinely radical." They assert that evolution generally proceeds in bursts, or not at all. "Continuously variable speedists", on the other hand, advance that "evolutionary rates fluctuate continuously from very fast to very slow and stop, with all intermediates. They see no particular reason to emphasize certain speeds more than others. In particular, stasis, to them, is just an extreme case of ultra-slow evolution. To a punctuationist, there is something very special about stasis."
Criticism

Richard Dawkins regards the apparent gaps represented in the fossil record as documenting migratory events rather than evolutionary events. According to Dawkins, evolution certainly occurred but "probably gradually" elsewhere. However, the punctuated equilibrium model may still be inferred from both the observation of stasis and examples of rapid and episodic speciation events documented in the fossil record. Dawkins also emphasizes that punctuated equilibrium has been "oversold by some journalists", partly due to Eldredge and Gould's "later writings". Dawkins contends that the hypothesis "does not deserve a particularly large measure of publicity". It is a "minor gloss," an "interesting but minor wrinkle on the surface of neo-Darwinian theory," and "lies firmly within the neo-Darwinian synthesis".

In his book Darwin's Dangerous Idea, philosopher Daniel Dennett is especially critical of Gould's presentation of punctuated equilibrium. Dennett argues that Gould alternated between revolutionary and conservative claims, and that each time Gould made a revolutionary statement—or appeared to do so—he was criticized, and thus retreated to a traditional neo-Darwinian position. Gould responded to Dennett's claims in The New York Review of Books, and in his technical volume The Structure of Evolutionary Theory.

English professor Heidi Scott argues that Gould's talent for writing vivid prose, his use of metaphor, and his success in building a popular audience of nonspecialist readers altered the "climate of specialized scientific discourse" favorably in his promotion of punctuated equilibrium. While Gould is celebrated for the color and energy of his prose, as well as his interdisciplinary knowledge, critics such as Scott, Richard Dawkins, and Daniel Dennett have concerns that the theory has gained undeserved credence among non-scientists because of Gould's rhetorical skills.
Philosopher John Lyne and biologist Henry Howe believed punctuated equilibrium's success has much more to do with the nature of the geological record than the nature of Gould's rhetoric. They state that a "re-analysis of existing fossil data has shown, to the increasing satisfaction of the paleontological community, that Eldredge and Gould were correct in identifying periods of evolutionary stasis which are interrupted by much shorter periods of evolutionary change." Some critics jokingly referred to the theory of punctuated equilibrium as "evolution by jerks", which reportedly prompted punctuationists to describe phyletic gradualism as "evolution by creeps."

Darwin's theory

The sudden appearance of most species in the geologic record and the lack of evidence of substantial gradual change in most species—from their initial appearance until their extinction—has long been noted, including by Charles Darwin, who appealed to the imperfection of the record as the favored explanation. When presenting his ideas against the prevailing influences of catastrophism and progressive creationism, which envisaged species being supernaturally created at intervals, Darwin needed to forcefully stress the gradual nature of evolution in accordance with the gradualism promoted by his friend Charles Lyell. He privately expressed concern, noting in the margin of his 1844 Essay, "Better begin with this: If species really, after catastrophes, created in showers world over, my theory false." It is often incorrectly assumed that he insisted that the rate of change must be constant, or nearly so, but even the first edition of On the Origin of Species states that "Species of different genera and classes have not changed at the same rate, or in the same degree. In the oldest tertiary beds a few living shells may still be found in the midst of a multitude of extinct forms... The Silurian Lingula differs but little from the living species of this genus".
Lingula is among the few brachiopods surviving today but also known from fossils over 500 million years old. In the fourth edition (1866) of On the Origin of Species, Darwin wrote that "the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form." Thus punctuationism in general is consistent with Darwin's conception of evolution.

According to early versions of punctuated equilibrium, "peripheral isolates" are considered to be of critical importance for speciation. However, Darwin wrote, "I can by no means agree ... that immigration and isolation are necessary elements.... Although isolation is of great importance in the production of new species, on the whole I am inclined to believe that largeness of area is ..."

Phyletic gradualism, by contrast, is the hypothesis that evolution generally occurs uniformly by the steady and gradual transformation of whole lineages (anagenesis). In 1972, paleontologists Niles Eldredge and Stephen Jay Gould published a landmark paper developing their theory and called it punctuated equilibria. Their paper built upon Ernst Mayr's model of geographic speciation, I. Michael Lerner's theories of developmental and genetic homeostasis, and their own empirical research. Eldredge and Gould proposed that the degree of gradualism commonly attributed to Charles Darwin is virtually nonexistent in the fossil record, and that stasis dominates the history of most fossil species.

History

Punctuated equilibrium originated as a logical consequence of Ernst Mayr's concept of genetic revolutions by allopatric and especially peripatric speciation as applied to the fossil record. Although the sudden appearance of species and its relationship to speciation was proposed and identified by Mayr in 1954, historians of science generally recognize the 1972 Eldredge and Gould paper as the basis of the new paleobiological research program.
Punctuated equilibrium differs from Mayr's ideas mainly in that Eldredge and Gould placed considerably greater emphasis on stasis, whereas Mayr was concerned with explaining the morphological discontinuity (or "sudden jumps") found in the fossil record. Mayr later complimented Eldredge and Gould's paper, stating that evolutionary stasis had been "unexpected by most evolutionary biologists" and that punctuated equilibrium "had a major impact on paleontology and evolutionary biology." A year before their 1972 Eldredge and Gould paper, Niles Eldredge published a paper in the journal Evolution which suggested that gradual evolution was seldom seen in the fossil record and argued that Ernst Mayr's standard mechanism of allopatric speciation might suggest a possible resolution. The Eldredge and Gould paper was presented at the Annual Meeting of the Geological Society of America in 1971. The symposium focused its attention on how modern microevolutionary studies could revitalize various aspects of paleontology and macroevolution. Tom Schopf, who organized that year's meeting, assigned Gould the topic of speciation. Gould recalls that "Eldredge's 1971 publication [on Paleozoic trilobites] had presented the only new and interesting ideas on the paleontological implications of the subject—so I asked Schopf if we could present the paper jointly." According to Gould "the ideas came mostly from Niles, with yours truly acting as a sounding board and eventual scribe. I coined the term punctuated equilibrium and wrote most of our 1972 paper, but Niles is the proper first author in our pairing of Eldredge and Gould." In his book Time Frames Eldredge recalls that after much discussion the pair "each wrote roughly half. Some of the parts that would seem obviously the work of one of us were actually first penned by the other—I remember for example, writing the section on Gould's snails. Other parts are harder to reconstruct. Gould edited the entire manuscript for better consistency. 
We sent it in, and Schopf reacted strongly against it—thus signaling the tenor of the reaction it has engendered, though for shifting reasons, down to the present day." John Wilkins and Gareth Nelson have argued that French architect Pierre Trémaux proposed an "anticipation of the theory of punctuated equilibrium of Gould and Eldredge." Evidence from the fossil record The fossil record includes well documented examples of both phyletic gradualism and punctuational evolution. As such, much debate persists over the prominence of stasis in the fossil record. Before punctuated equilibrium, most evolutionists considered stasis to be rare or unimportant. The paleontologist George Gaylord Simpson, for example, believed that phyletic gradual evolution (called horotely in his terminology) comprised 90% of evolution. More modern studies, including a meta-analysis examining 58 published studies on speciation patterns in the fossil record showed that 71% of species exhibited stasis, and 63% were associated with punctuated patterns of evolutionary change. According to Michael Benton, "it seems clear then that stasis is common, and that had not been predicted from modern genetic studies." A paramount example of evolutionary stasis is the fern Osmunda claytoniana. Based on paleontological evidence it has remained unchanged, even at the level of fossilized nuclei and chromosomes, for at least 180 million years. Theoretical mechanisms Punctuational change When Eldredge and Gould published their 1972 paper, allopatric speciation was considered the "standard" model of speciation. This model was popularized by Ernst Mayr in his 1954 paper "Change of genetic environment and evolution," and his classic volume Animal Species and Evolution (1963). Allopatric speciation suggests that species with large central populations are stabilized by their large volume and the process of gene flow. 
New and even beneficial mutations are diluted by the population's large size and are unable to reach fixation, due to such factors as constantly changing environments. If this is the case, then the transformation of whole lineages should be rare, as the fossil record indicates. Smaller populations, on the other hand, which are isolated from the parental stock, are decoupled from the homogenizing effects of gene flow. In addition, pressure from natural selection is especially intense, as peripheral isolated populations exist at the outer edges of ecological tolerance. If most evolution happens in these rare instances of allopatric speciation, then evidence of gradual evolution in the fossil record should be rare. This hypothesis was alluded to by Mayr in the closing paragraph of his 1954 paper.

Although punctuated equilibrium generally applies to sexually reproducing organisms, some biologists have applied the model to non-sexual species like viruses, which cannot be stabilized by conventional gene flow. As time went on, biologists like Gould moved away from wedding punctuated equilibrium to allopatric speciation, particularly as evidence accumulated in support of other modes of speciation. Gould, for example, was particularly attracted to Douglas Futuyma's work on the importance of reproductive isolating mechanisms.

Stasis

Many hypotheses have been proposed to explain the putative causes of stasis. Gould was initially attracted to I. Michael Lerner's theories of developmental and genetic homeostasis. However this hypothesis was rejected over time, as evidence accumulated against it. Other plausible mechanisms which have been suggested include: habitat tracking, stabilizing selection, the Stenseth-Maynard Smith stability hypothesis, constraints imposed by the nature of subdivided populations, normalizing clade selection, and koinophilia.
Evidence for stasis has also been corroborated from the genetics of sibling species, species which are morphologically indistinguishable, but whose proteins have diverged sufficiently to suggest they have been separated for millions of years. Fossil evidence of reproductively isolated extant species of sympatric olive shells (Amalda sp.) also confirms morphological stasis in multiple lineages over three million years. According to Gould, "stasis may emerge as the theory's most important contribution to evolutionary science." Philosopher Kim Sterelny, in clarifying the meaning of stasis, adds, "In claiming that species typically undergo no further evolutionary change once speciation is complete, they are not claiming that there is no change at all between one generation and the next. Lineages do change. But the change between generations does not accumulate. Instead, over time, the species wobbles about its phenotypic mean. Jonathan Weiner's The Beak of the Finch describes this very process."
too cold for life. Hurtling underneath the ring plane, the probe sent back pictures of Saturn's rings. The rings, which normally seem bright when observed from Earth, appeared dark in the Pioneer pictures, and the dark gaps in the rings seen from Earth appeared as bright rings.

Interstellar mission

On February 25, 1990, Pioneer 11 became the fourth man-made object to pass beyond the orbits of the planets.

NASA ends operations

By 1995, Pioneer 11 could no longer power any of its detectors, so the decision was made to shut it down. On September 29, 1995, NASA's Ames Research Center, responsible for managing the project, issued a press release that began, "After nearly 22 years of exploration out to the farthest reaches of the Solar System, one of the most durable and productive space missions in history will come to a close." It indicated NASA would use its Deep Space Network antennas to listen "once or twice a month" for the spacecraft's signal, until "some time in late 1996" when "its transmitter will fall silent altogether." NASA Administrator Daniel Goldin characterized Pioneer 11 as "the little spacecraft that could, a venerable explorer that has taught us a great deal about the Solar System and, in the end, about our own innate drive to learn. Pioneer 11 is what NASA is all about – exploration beyond the frontier." Besides announcing the end of operations, the dispatch provided a historical list of Pioneer 11 mission achievements. NASA terminated routine contact with the spacecraft on September 30, 1995, but continued to make contact for about two hours every two to four weeks. Scientists received a few minutes of good engineering data on November 24, 1995, but then lost contact for good once Earth moved out of view of the spacecraft's antenna.

Current status

On January 30, 2019, Pioneer 11 was receding from both the Earth and the Sun, traveling outward at about 2.37 AU per year (relative to the Sun).
The spacecraft is heading in the direction of the constellation Scutum, near the position (as of August 2017) RA 18h 50m, dec −8° 39.5′ (J2000.0), close to Messier 26. In 928,000 years, it will pass within 0.25 pc of the K dwarf TYC 992-192-1. Pioneer 11 has now been overtaken by the two Voyager probes launched in 1977, and Voyager 1 is now the most distant object built by humans.

Pioneer anomaly

Analysis of the radio tracking data from the Pioneer 10 and 11 spacecraft at distances between 20 and 70 AU from the Sun has consistently indicated the presence of a small but anomalous Doppler frequency drift. The drift can be interpreted as due to a small constant acceleration directed towards the Sun. Although it was suspected that there was a systematic origin to the effect, none was initially found. As a result, there was sustained interest in the nature of this so-called "Pioneer anomaly". Extended analysis of mission data by Slava Turyshev and colleagues determined the source of the anomaly to be asymmetric thermal radiation and the resulting thermal recoil force acting on the face of the Pioneers away from the Sun, and in July 2012 the group published its results in Physical Review Letters.

Pioneer plaque

Pioneer 10 and 11 both carry a gold-anodized aluminum plaque in the event that either spacecraft is ever found by intelligent lifeforms from other planetary systems. The plaques feature the nude figures of a human male and female.

The spacecraft was built by TRW and managed as part of the Pioneer program by NASA Ames Research Center. A backup unit, Pioneer H, is currently on display in the "Milestones of Flight" exhibit at the National Air and Space Museum in Washington, D.C. Many elements of the mission proved to be critical in the planning of the Voyager program.

Spacecraft design

The Pioneer 11 bus has six panels forming a hexagonal structure. The bus houses propellant to control the orientation of the probe and eight of the twelve scientific instruments.
The spacecraft has a mass of 259 kilograms. Attitude control and propulsion Orientation of the spacecraft was maintained with six 4.5-N, hydrazine monopropellant thrusters: pair one maintains a constant spin rate of 4.8 rpm, pair two controls the forward thrust, and pair three controls attitude. Information for the orientation is provided by performing conical scanning maneuvers to track Earth in its orbit, a star sensor able to reference Canopus, and two Sun sensors. Communications The space probe includes a redundant system of transceivers, one attached to the high-gain antenna, the other to an omni-antenna and medium-gain antenna. Each transceiver transmits at 8 watts and sends data in the S-band, using 2110 MHz for the uplink from Earth and 2292 MHz for the downlink to Earth, with the Deep Space Network tracking the signal. Prior to transmitting data, the probe uses a convolutional encoder to allow correction of errors in the received data on Earth. Power Pioneer 11 uses four SNAP-19 radioisotope thermoelectric generators (RTGs) (see diagram). They are positioned on two three-rod trusses, each in length and 120 degrees apart. This was expected to be a safe distance from the sensitive scientific experiments carried on board. Combined, the RTGs provided 155 watts at launch, decaying to 140 W in transit to Jupiter. The spacecraft requires 100 W to power all systems. Computer Much of the computation for the mission was performed on Earth and transmitted to the probe, which can retain in memory up to five of the 222 possible commands prepared by ground controllers. The spacecraft includes two command decoders and a command distribution unit, a very limited form of processor, to direct operations on the spacecraft. This system requires that mission operators prepare commands long in advance of transmitting them to the probe. A data storage unit is included to record up to 6,144 bytes of information gathered by the instruments.
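The convolutional encoding used on the downlink can be sketched in miniature. The parameters below (rate 1/2, constraint length 3, generator polynomials 111 and 101) are the standard textbook example and are assumptions for illustration only; Pioneer's actual code used different, much longer parameters:

```python
# Minimal rate-1/2 convolutional encoder: every input bit produces two
# parity bits, giving the receiver redundancy for correcting errors.
# Generators 0b111 and 0b101 are illustrative, not Pioneer's values.
def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # 3-bit shift register
        out.append(bin(state & g1).count("1") % 2)   # parity under generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity under generator 2
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)   # twice as many bits out as in
```

Because each output bit depends on the current and previous input bits, a decoder on Earth can use that memory to correct isolated transmission errors.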
The digital telemetry unit is then used to prepare the collected data in one of the thirteen possible formats before transmitting it back to Earth. Scientific instruments Pioneer 11 carries one more instrument than Pioneer 10: a flux-gate magnetometer. Mission profile Launch and trajectory The Pioneer 11 probe was launched on April 6, 1973, at 02:11:00 UTC by the National Aeronautics and Space Administration from Space Launch Complex 36A at Cape Canaveral, Florida, aboard an Atlas-Centaur launch vehicle with a Star 37E propulsion module. Its twin probe, Pioneer 10, had been launched a year earlier, on March 3, 1972. Pioneer 11 was launched on a trajectory directly aimed at Jupiter without any prior gravitational assists. In May 1974, Pioneer 11 was retargeted to fly past Jupiter on a |
can measure when they can find the operations by which they may meet the necessary criteria; psychologists have but to do the same. They need not worry about the mysterious differences between the meaning of measurement in the two sciences (Reese, 1943, p. 49). These divergent responses are reflected in alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens's definition of measurement, which requires only that numbers are assigned according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the goal is to construct procedures or operations that provide data that meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether the relevant criteria have been met. Instruments and procedures The first psychometric instruments were designed to measure intelligence. One early approach to measuring intelligence was the test developed in France by Alfred Binet and Theodore Simon. That test was known as the . The French test was adapted for use in the U.S. by Lewis Terman of Stanford University, and named the Stanford-Binet IQ test. Another major focus in psychometrics has been on personality testing. There have been a range of theoretical approaches to conceptualizing and measuring personality, though there is no widely agreed upon theory.
Some of the better known instruments include the Minnesota Multiphasic Personality Inventory, the Five-Factor Model (or "Big 5") and tools such as Personality and Preference Inventory and the Myers–Briggs Type Indicator. Attitudes have also been studied extensively using psychometric approaches. An alternative method involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993). Theoretical approaches Psychometricians have developed a number of different measurement theories. These include classical test theory (CTT) and item response theory (IRT). An approach which seems mathematically to be similar to IRT but also quite distinctive, in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences. Psychometricians have also developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include: factor analysis, a method of determining the underlying dimensions of data. One of the main challenges faced by users of factor analysis is a lack of consensus on appropriate procedures for determining the number of latent factors. A common procedure is to stop factoring when eigenvalues drop below one, since a factor with an eigenvalue below one accounts for less variance than a single original variable. The lack of agreed cut-off points affects other multivariate methods as well. Multidimensional scaling is a method for finding a simple representation for data with a large number of latent dimensions. Cluster analysis is an approach to finding objects that are like each other. Factor analysis, multidimensional scaling, and cluster analysis are all multivariate descriptive methods used to distill simpler structures from large amounts of data.
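The "eigenvalues drop below one" stopping rule can be illustrated with a small simulation. The data below are invented for demonstration, and the factor structure and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate six observed variables driven by two latent factors plus noise
factors = rng.normal(size=(500, 2))
loadings = rng.normal(size=(2, 6))
data = factors @ loadings + 0.5 * rng.normal(size=(500, 6))

corr = np.corrcoef(data, rowvar=False)         # 6 x 6 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted in descending order
n_factors = int(np.sum(eigenvalues > 1.0))     # retain factors with eigenvalue > 1
print(n_factors)
```

In practice this rule is only a heuristic, which is precisely the lack of consensus the text refers to: other procedures (scree plots, parallel analysis) can retain a different number of factors from the same matrix.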
More recently, structural equation modeling and path analysis represent more sophisticated approaches to working with large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits. Because at a granular level psychometric research is concerned with the extent and nature of multidimensionality in each of the items of interest, a relatively new procedure known as bi-factor analysis can be helpful. Bi-factor analysis can decompose "an item's systematic variance in terms of, ideally, two sources, a general factor and one source of additional systematic variance." Key concepts Key concepts in classical test theory are reliability and validity. A reliable measure is one that measures a construct consistently across time, individuals, and situations. A valid measure is one that measures what it is intended to measure. Reliability is necessary, but not sufficient, for validity. Both reliability and validity can be assessed statistically. Consistency over repeated measures of the same test can be assessed with the Pearson correlation coefficient, and is often called test-retest reliability. Similarly, the equivalence of different versions of the same measure can be indexed by a Pearson correlation, and is called equivalent forms reliability or a similar term. Internal consistency, which addresses the homogeneity of a single test form, may be assessed by correlating performance on two halves of a test, which is termed split-half reliability; the value of this Pearson product-moment correlation coefficient for two half-tests is adjusted with the Spearman–Brown prediction formula to correspond to the correlation between two full-length tests. Perhaps the most commonly used index of reliability is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. 
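The reliability indices just described reduce to short formulas: a split-half correlation r is stepped up to full-test length with the Spearman–Brown formula r* = 2r / (1 + r), and Cronbach's alpha is k/(k-1) times (1 minus the ratio of summed item variances to total-score variance). A sketch on invented scores:

```python
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(size=200)
# Six items, each the examinee's true score plus independent error (invented data)
items = true_score[:, None] + rng.normal(scale=0.8, size=(200, 6))

# Split-half reliability: correlate odd-item and even-item totals
half1 = items[:, ::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half1, half2)[0, 1]
spearman_brown = 2 * r_half / (1 + r_half)   # estimated full-length reliability

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(round(spearman_brown, 3), round(alpha, 3))
```

Note that the Spearman–Brown estimate always exceeds the raw half-test correlation, reflecting the fact that a longer test is more reliable than either of its halves.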
Other approaches include the intra-class correlation, which is the ratio of variance of measurements of a given target to the variance of all targets. There are a number of different forms of validity. Criterion-related validity refers to the extent to which a test or scale predicts a sample of behavior, i.e., the criterion, that is "external to the measuring instrument itself." That external sample of behavior can be many things including another test; college grade point average as when the high school SAT is used to predict performance in college; and even behavior that occurred in the past, for example, when a test of current psychological symptoms is used to predict the occurrence of past victimization (which would accurately represent postdiction). When the criterion measure is collected at the same time as the measure being validated the goal is to establish concurrent validity; when the criterion is collected later the goal is to establish predictive validity. A measure has construct validity if it is related to measures of other constructs as required by theory. Content validity is a demonstration that the items of a test do an adequate job of covering the domain being measured. In a personnel selection example, test content is based on a defined statement or set of statements of knowledge, skill, ability, or other characteristics obtained from a job analysis. Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. Scores derived | psychometrics. In 1859, Darwin published his book On the Origin of Species. 
Darwin described the role of natural selection in the emergence, over time, of different populations of species of plants and animals. The book showed how individual members of a species differ among themselves and how they possess characteristics that are more or less adaptive to their environment. Those with more adaptive characteristics are more likely to survive to procreate and give rise to another generation. Those with less adaptive characteristics are less likely. These ideas stimulated Galton's interest in the study of human beings and how they differ one from another and, more importantly, how to measure those differences. Galton wrote a book entitled Hereditary Genius. The book described different characteristics that people possess and how those characteristics make some more "fit" than others. Today these differences, such as sensory and motor functioning (reaction time, visual acuity, and physical strength), are important domains of scientific psychology. Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. Galton, often referred to as "the father of psychometrics," devised and included mental tests among his anthropometric measures. James McKeen Cattell, a pioneer in the field of psychometrics, went on to extend Galton's work. Cattell coined the term mental test, and is responsible for research and knowledge that ultimately led to the development of modern tests. German stream The origin of psychometrics also has connections to the related field of psychophysics. Around the same time that Darwin, Galton, and Cattell were making their discoveries, Herbart was also interested in "unlocking the mysteries of human consciousness" through the scientific method. Herbart was responsible for creating mathematical models of the mind, which were influential in educational practices in years to come. E.H. 
Weber built upon Herbart's work and tried to prove the existence of a psychological threshold, saying that a minimum stimulus was necessary to activate a sensory system. After Weber, G.T. Fechner expanded upon the knowledge he gleaned from Herbart and Weber, to devise the law that the strength of a sensation grows as the logarithm of the stimulus intensity. A follower of Weber and Fechner, Wilhelm Wundt is credited with founding the science of psychology. It is Wundt's influence that paved the way for others to develop psychological testing. 20th century In 1936, the psychometrician L. L. Thurstone, founder and first president of the Psychometric Society, developed and applied a theoretical approach to measurement referred to as the law of comparative judgment, an approach that has close connections to the psychophysical theory of Ernst Heinrich Weber and Gustav Fechner. In addition, Spearman and Thurstone both made important contributions to the theory and application of factor analysis, a statistical method developed and used extensively in psychometrics. In the late 1950s, Leopold Szondi made an historical and epistemological assessment of the impact of statistical thinking on psychology during the previous few decades: "in the last decades, the specifically psychological thinking has been almost completely suppressed and removed, and replaced by a statistical thinking. Precisely here we see the cancer of testology and testomania of today." More recently, psychometric theory has been applied in the measurement of personality, attitudes, and beliefs, and academic achievement. These latent constructs cannot truly be measured, and much of the research and science in this discipline has been developed in an attempt to measure these constructs as close to the true score as possible. Figures who made significant contributions to psychometrics include Karl Pearson, Henry F. Kaiser, Carl Brigham, L. L. Thurstone, E. L.
Thorndike, Georg Rasch, Eugene Galanter, Johnson O'Connor, Frederic M. Lord, Ledyard R Tucker, Louis Guttman, and Jane Loevinger. Definition of measurement in the social sciences The definition of measurement in the social sciences has a long history. A current widespread definition, proposed by Stanley Smith Stevens, is that measurement is "the assignment of numerals to objects or events according to some rule." This definition was introduced in a 1946 Science article in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted in the physical sciences, namely that scientific measurement entails "the estimation or discovery of the ratio of some magnitude of a quantitative attribute to a unit of the same attribute" (p. 358). Indeed, Stevens's definition of measurement was put forward in response to the British Ferguson Committee, whose chair, A. Ferguson, was a physicist. The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also included several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens's response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. Another, notably different, response was to accept the classical definition, as reflected in the following statement: Measurement in psychology and physics are in no sense different. Physicists can measure when they can find the operations by which they may meet the necessary criteria; psychologists have but to do the same. They need not worry about the mysterious differences between the meaning of measurement in the two sciences (Reese, 1943, p. 49).
how to have a successful life by practicing an active and socially interactive lifestyle. John Locke Date: 1632–1704 In Some Thoughts Concerning Education and Of the Conduct of the Understanding Locke composed an outline on how to educate the mind in order to increase its powers and activity: "The business of education is not, as I think, to make them perfect in any one of the sciences, but so to open and dispose their minds as may best make them capable of any, when they shall apply themselves to it." "If men are for a long time accustomed only to one sort or method of thoughts, their minds grow stiff in it, and do not readily turn to another. It is therefore to give them this freedom, that I think they should be made to look into all sorts of knowledge, and exercise their understandings in so wide a variety and stock of knowledge. But I do not propose it as a variety and stock of knowledge, but a variety and freedom of thinking, as an increase of the powers and activity of the mind, not as an enlargement of its possessions." Locke expressed the belief that education maketh the man, or, more fundamentally, that the mind is an "empty cabinet", with the statement, "I think I may say that of all the men we meet with, nine parts of ten are what they are, good or evil, useful or not, by their education." Locke also wrote that "the little and almost insensible impressions on our tender infancies have very important and lasting consequences." He argued that the "associations of ideas" that one makes when young are more important than those made later because they are the foundation of the self: they are, put differently, what first mark the tabula rasa.
In his Essay, in which both of these concepts are introduced, Locke warns against, for example, letting "a foolish maid" convince a child that "goblins and sprites" are associated with the night for "darkness shall ever afterwards bring with it those frightful ideas, and they shall be so joined, that he can no more bear the one than the other." "Associationism", as this theory would come to be called, exerted a powerful influence over eighteenth-century thought, particularly educational theory, as nearly every educational writer warned parents not to allow their children to develop negative associations. It also led to the development of psychology and other new disciplines with David Hartley's attempt to discover a biological mechanism for associationism in his Observations on Man (1749). Jean-Jacques Rousseau Date: 1712–1778 Rousseau, though he paid his respects to Plato's philosophy, rejected it as impractical due to the decayed state of society. Rousseau also had a different theory of human development; where Plato held that people are born with skills appropriate to different castes (though he did not regard these skills as being inherited), Rousseau held that there was one developmental process common to all humans. This was an intrinsic, natural process, of which the primary behavioral manifestation was curiosity. This differed from Locke's 'tabula rasa' in that it was an active process deriving from the child's nature, which drove the child to learn and adapt to its surroundings. Rousseau wrote in his book Emile that all children are perfectly designed organisms, ready to learn from their surroundings so as to grow into virtuous adults, but due to the malign influence of corrupt society, they often fail to do so.
Rousseau advocated an educational method which consisted of removing the child from society—for example, to a country home—and alternately conditioning him through changes to his environment and setting traps and puzzles for him to solve or overcome. Rousseau was unusual in that he recognized and addressed the potential of a problem of legitimation for teaching. He advocated that adults always be truthful with children, and in particular that they never hide the fact that the basis for their authority in teaching was purely one of physical coercion: "I'm bigger than you." Once children reached the age of reason, at about 12, they would be engaged as free individuals in the ongoing process of their own education. He once said that a child should grow up without adult interference and that the child must be guided to suffer from the experience of the natural consequences of his own acts or behaviour. When he experiences the consequences of his own acts, he advises himself. "Rousseau divides development into five stages (a book is devoted to each). Education in the first two stages seeks to the senses: only when Émile is about 12 does the tutor begin to work to develop his mind. Later, in Book 5, Rousseau examines the education of Sophie (whom Émile is to marry). Here he sets out what he sees as the essential differences that flow from sex. 'The man should be strong and active; the woman should be weak and passive' (Everyman edn: 322). From this difference comes a contrasting education. They are not to be brought up in ignorance and kept to housework: Nature means them to think, to will, to love to cultivate their minds as well as their persons; she puts these weapons in their hands to make up for their lack of strength and to enable them to direct the strength of men. They should learn many things, but only such things as suitable' (Everyman edn.: 327)." Émile Mortimer Jerome Adler Date: 1902–2001 Mortimer Jerome Adler was an American philosopher, educator, and popular author.
As a philosopher he worked within the Aristotelian and Thomistic traditions. He lived for the longest stretches in New York City, Chicago, San Francisco, and San Mateo, California. He worked for Columbia University, the University of Chicago, Encyclopædia Britannica, and Adler's own Institute for Philosophical Research. Adler was married twice and had four children. Adler was a proponent of educational perennialism. Harry S. Broudy Date: 1905–1998 Broudy's philosophical views were based on the tradition of classical realism, dealing with truth, goodness, and beauty. However, he was also influenced by the modern philosophies of existentialism and instrumentalism. In his textbook Building a Philosophy of Education he sets out two major ideas that are the main points of his philosophical outlook: the first is truth, and the second is the universal structures to be found in humanity's struggle for education and the good life. Broudy also studied issues concerning society's demands on schools. He thought education could be a link to unify a diverse society, and he urged society to put more trust in, and commitment to, the schools and a good education. Scholasticism John Milton Date: 1608–1674 The objective of medieval education was an overtly religious one, primarily concerned with uncovering transcendental truths that would lead a person back to God through a life of moral and religious choice (Kreeft 15). The vehicle by which these truths were uncovered was dialectic: To the medieval mind, debate was a fine art, a serious science, and a fascinating entertainment, much more than it is to the modern mind, because the medievals believed, like Socrates, that dialectic could uncover truth. Thus a 'scholastic disputation' was not a personal contest in cleverness, nor was it 'sharing opinions'; it was a shared journey of discovery (Kreeft 14–15).
Pragmatism John Dewey Date: 1859–1952 In Democracy and Education: An Introduction to the Philosophy of Education, Dewey stated that education, in its broadest sense, is the means of the "social continuity of life" given the "primary ineluctable facts of the birth and death of each one of the constituent members in a social group". Education is therefore a necessity, for "the life of the group goes on." Dewey was a proponent of Educational Progressivism and was a relentless campaigner for reform of education, pointing out that the authoritarian, strict, pre-ordained knowledge approach of modern traditional education was too concerned with delivering knowledge, and not enough with understanding students' actual experiences. William Heard Kilpatrick Date: 1871–1965 William Heard Kilpatrick was an American philosopher of education and a colleague and successor of John Dewey. He was a major figure in the progressive education movement of the early 20th century. Kilpatrick developed the Project Method for early childhood education, a form of Progressive Education that organized curriculum and classroom activities around a subject's central theme. He believed that the role of a teacher should be that of a "guide" as opposed to an authoritarian figure. Kilpatrick believed that children should direct their own learning according to their interests and should be allowed to explore their environment, experiencing their learning through the natural senses. Proponents of Progressive Education and the Project Method reject traditional schooling that focuses on memorization, rote learning, strictly organized classrooms (desks in rows; students always seated), and typical forms of assessment. Nel Noddings Date: 1929– Noddings' first sole-authored book Caring: A Feminine Approach to Ethics and Moral Education (1984) followed closely on the 1982 publication of Carol Gilligan's ground-breaking work in the ethics of care, In a Different Voice.
While her work on ethics continued, with the publication of Women and Evil (1989) and later works on moral education, most of her later publications have been on the philosophy of education and educational theory. Her most significant works in these areas have been Educating for Intelligent Belief or Unbelief (1993) and Philosophy of Education (1995). Noddings' contribution to education philosophy centers around the ethic of care. Her belief was that a caring teacher-student relationship will result in the teacher designing a differentiated curriculum for each student, and that this curriculum would be based around the students' particular interests and needs. The teacher's claim to care must not be based on a one-time virtuous decision but on an ongoing interest in the students' welfare. Existentialist The existentialist sees the world as one's personal subjectivity, where goodness, truth, and reality are individually defined. Reality is a world of existing, truth subjectively chosen, and goodness a matter of freedom. The subject matter of existentialist classrooms should be a matter of personal choice. Teachers view the individual as an entity within a social context in which the learner must confront others' views to clarify his or her own. Character development emphasizes individual responsibility for decisions. Real answers come from within the individual, not from outside authority. Examining life through authentic thinking involves students in genuine learning experiences. Existentialists are opposed to thinking about students as objects to be measured, tracked, or standardized. Such educators want the educational experience to focus on creating opportunities for self-direction and self-actualization. They start with the student, rather than with curriculum content.
Critical theory Paulo Freire Date: 1921–1997 A Brazilian philosopher and educator committed to the cause of educating the impoverished peasants of his nation and collaborating with them in the pursuit of their liberation from what he regarded as "oppression," Freire is best known for his attack on what he called the "banking concept of education," in which the student was viewed as an empty account to be filled by the teacher. Freire also suggests that a deep reciprocity be inserted into our notions of teacher and student; he comes close to suggesting that the teacher-student dichotomy be completely abolished, instead promoting the roles of the participants in the classroom as the teacher-student (a teacher who learns) and the student-teacher (a learner who teaches). In its early, strong form this kind of classroom has sometimes been criticized on the grounds that it can mask rather than overcome the teacher's authority. Aspects of the Freirian philosophy have been highly influential in academic debates over "participatory development" and development more generally. Freire's emphasis on what he describes as "emancipation" through interactive participation has been used as a rationale for the participatory focus of development, as it is held that 'participation' in any form can lead to empowerment of poor or marginalised groups. Freire was a proponent of critical pedagogy. "He participated in the import of European doctrines and ideas into Brazil, assimilated them to the needs of a specific socio-economic situation, and thus expanded and refocused them in a thought-provoking way." Other Continental thinkers Martin Heidegger Date: 1889–1976 Heidegger's philosophizing about education was primarily related to higher education. He believed that teaching and research in the university should be unified and aim towards testing and interrogating the "ontological assumptions and presuppositions which implicitly guide research in each domain of knowledge."
Michel Foucault Date: 1926–1984 Michel Foucault understood education as an inherently political act involving power relationships. He called upon his readers to transform modern education in the direction of egalitarian relationships. Normative educational philosophies "Normative philosophies or theories of education may make use of the results of philosophical thought and of factual inquiries about human beings and the psychology of learning, but in any | the mortal to be that which is subject and servant? On this premise, Plato advocated removing children from their mothers' care and raising them as wards of the state, with great care being taken to differentiate children suitable to the various castes, the highest receiving the most education, so that they could act as guardians of the city and care for the less able. Education would be holistic, including facts, skills, physical discipline, and music and art, which he considered the highest form of endeavor. Plato believed that talent was distributed non-genetically and thus must be found in children born in any social class. He built on this by insisting that those suitably gifted were to be trained by the state so that they might be qualified to assume the role of a ruling class. What this established was essentially a system of selective public education premised on the assumption that an educated minority of the population were, by virtue of their education (and inborn educability), sufficient for healthy governance. Plato's writings contain some of the following ideas: Elementary education would be confined to the guardian class till the age of 18, followed by two years of compulsory military training and then by higher education for those who qualified. While elementary education made the soul responsive to the environment, higher education helped the soul to search for truth which illuminated it. Both boys and girls receive the same kind of education.
Elementary education consisted of music and gymnastics, designed to train and blend gentle and fierce qualities in the individual and create a harmonious person. At the age of 20, a selection was made. The best students would take an advanced course in mathematics, geometry, astronomy and harmonics. The first course in the scheme of higher education would last for ten years. It would be for those who had a flair for science. At the age of 30 there would be another selection; those who qualified would study dialectics and metaphysics, logic and philosophy for the next five years. After accepting junior positions in the army for 15 years, a man would have completed his theoretical and practical education by the age of 50. Immanuel Kant Date: 1724–1804 Immanuel Kant believed that education differs from training in that the former involves thinking whereas the latter does not. In addition to educating reason, of central importance to him was the development of character and teaching of moral maxims. Kant was a proponent of public education and of learning by doing. Realism Aristotle Date: 384 BC – 322 BC Only fragments of Aristotle's treatise On Education are still in existence. We thus know of his philosophy of education primarily through brief passages in other works. Aristotle considered human nature, habit and reason to be equally important forces to be cultivated in education. Thus, for example, he considered repetition to be a key tool to develop good habits. The teacher was to lead the student systematically; this differs, for example, from Socrates' emphasis on questioning his listeners to bring out their own ideas (though the comparison is perhaps incongruous since Socrates was dealing with adults). Aristotle placed great emphasis on balancing the theoretical and practical aspects of subjects taught. 
Subjects he explicitly mentions as being important included reading, writing and mathematics; music; physical education; literature and history; and a wide range of sciences. He also mentioned the importance of play. One of education's primary missions for Aristotle, perhaps its most important, was to produce good and virtuous citizens for the polis. All who have meditated on the art of governing mankind have been convinced that the fate of empires depends on the education of youth. Ibn Sina Date: 980 AD – 1037 AD In the medieval Islamic world, an elementary school was known as a maktab, which dates back to at least the 10th century. Like madrasahs (which referred to higher education), a maktab was often attached to a mosque. In the 11th century, Ibn Sina (known as Avicenna in the West), wrote a chapter dealing with the maktab entitled "The Role of the Teacher in the Training and Upbringing of Children", as a guide to teachers working at maktab schools. He wrote that children can learn better if taught in classes instead of individual tuition from private tutors, and he gave a number of reasons for why this is the case, citing the value of competition and emulation among pupils as well as the usefulness of group discussions and debates. Ibn Sina described the curriculum of a maktab school in some detail, describing the curricula for two stages of education in a maktab school. Ibn Sina wrote that children should be sent to a maktab school from the age of 6 and be taught primary education until they reach the age of 14. During which time, he wrote that they should be taught the Qur'an, Islamic metaphysics, language, literature, Islamic ethics, and manual skills (which could refer to a variety of practical skills). Ibn Sina refers to the secondary education stage of maktab schooling as the period of specialization, when pupils should begin to acquire manual skills, regardless of their social status. 
He writes that children after the age of 14 should be given a choice to choose and specialize in subjects they have an interest in, whether it was reading, manual skills, literature, preaching, medicine, geometry, trade and commerce, craftsmanship, or any other subject or profession they would be interested in pursuing for a future career. He wrote that this was a transitional stage and that there needs to be flexibility regarding the age in which pupils graduate, as the student's emotional development and chosen subjects need to be taken into account. The empiricist theory of 'tabula rasa' was also developed by Ibn Sina. He argued that the "human intellect at birth is rather like a tabula rasa, a pure potentiality that is actualized through education and comes to know" and that knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" which is developed through a "syllogistic method of reasoning; observations lead to prepositional statements, which when compounded lead to further abstract concepts." He further argued that the intellect itself "possesses levels of development from the material intellect (al-‘aql al-hayulani), that potentiality that can acquire knowledge to the active intellect (al-‘aql al-fa‘il), the state of the human intellect in conjunction with the perfect source of knowledge." Ibn Tufail Date: c. 1105 – 1185 In the 12th century, the Andalusian-Arabian philosopher and novelist Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) demonstrated the empiricist theory of 'tabula rasa' as a thought experiment through his Arabic philosophical novel, Hayy ibn Yaqzan, in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. 
Some scholars have argued that the Latin translation of his philosophical novel, Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in "An Essay Concerning Human Understanding". Montaigne Child education was among the psychological topics that Michel de Montaigne wrote about. His essays On the Education of Children, On Pedantry, and On Experience explain the views he had on child education. Some of his views on child education are still relevant today. Montaigne's views on the education of children were opposed to the common educational practices of his day. He found fault both with what was taught and how it was taught. Much of the education during Montaigne's time was focused on the reading of the classics and learning through books.Montaigne disagreed with learning strictly through books. He believed it was necessary to educate children in a variety of ways. He also disagreed with the way information was being presented to students. It was being presented in a way that encouraged students to take the information that was taught to them as absolute truth. Students were denied the chance to question the information. Therefore, students could not truly learn. Montaigne believed that, to learn truly, a student had to take the information and make it their own. At the foundation Montaigne believed that the selection of a good tutor was important for the student to become well educated. Education by a tutor was to be conducted at the pace of the student.He believed that a tutor should be in dialogue with the student, letting the student speak first. The tutor also should allow for discussions and debates to be had. Such a dialogue was intended to create an environment in which students would teach themselves. They would be able to realize their mistakes and make corrections to them as necessary. Individualized learning was integral to his theory of child education. 
He argued that the student combines information already known with what is learned and forms a unique perspective on the newly learned information. Montaigne also thought that tutors should encourage the natural curiosity of students and allow them to question things.He postulated that successful students were those who were encouraged to question new information and study it for themselves, rather than simply accepting what they had heard from the authorities on any given topic. Montaigne believed that a child's curiosity could serve as an important teaching tool when the child is allowed to explore the things that the child is curious about. Experience also was a key element to learning for Montaigne. Tutors needed to teach students through experience rather than through the mere memorization of information often practised in book learning.He argued that students would become passive adults, blindly obeying and lacking the ability to think on their own. Nothing of importance would be retained and no abilities would be learned. He believed that learning through experience was superior to learning through the use of books. For this reason he encouraged tutors to educate their students through practice, travel, and human interaction. In doing so, he argued that students would become active learners, who could claim knowledge for themselves. Montaigne's views on child education continue to have an influence in the present. Variations of Montaigne's ideas on education are incorporated into modern learning in some ways. He argued against the popular way of teaching in his day, encouraging individualized |
Attributional style has been assessed by the Attributional Style Questionnaire, the Expanded Attributional Style Questionnaire, the Attributions Questionnaire, the Real Events Attributional Style Questionnaire and the Attributional Style Assessment Test. Achievement style theory focuses on identifying an individual's Locus of Control tendency, for instance with Rotter's evaluations, and was found by Cassandra Bolyard Whyte to provide valuable information for improving students' academic performance. According to Whyte, individuals with internal control tendencies are likely to persist to better academic performance levels, presenting an achievement personality. Since the achievement research of the 1970s, the recognition that believing hard work and persistence often leads to the attainment of life and academic goals has influenced formal educational and counseling efforts with students of various ages and in various settings. Counseling that encourages individuals to set ambitious goals and work toward them, while acknowledging the external factors that may intervene, often results in students and employees adopting a more positive achievement style, whether in higher education, the workplace, or justice programming. Walter Mischel (1999) has also defended a cognitive approach to personality. His work refers to "Cognitive Affective Units", and considers factors such as encoding of stimuli, affect, goal-setting, and self-regulatory beliefs. The term "Cognitive Affective Units" shows how his approach considers affect as well as cognition. Cognitive-Experiential Self-Theory (CEST) is another cognitive personality theory. Developed by Seymour Epstein, CEST argues that humans operate by way of two independent information-processing systems: an experiential system and a rational system. The experiential system is fast and emotion-driven; the rational system is slow and logic-driven.
These two systems interact to determine our goals, thoughts, and behavior. Personal construct psychology (PCP) is a theory of personality developed by the American psychologist George Kelly in the 1950s. Kelly's fundamental view of personality was that people are like naive scientists who see the world through a particular lens, based on their uniquely organized systems of construction, which they use to anticipate events. But because people are naive scientists, they sometimes employ systems for construing the world that are distorted by idiosyncratic experiences not applicable to their current social situation. A system of construction that chronically fails to characterize and/or predict events, and is not appropriately revised to comprehend and predict one's changing social world, is considered to underlie psychopathology (or mental illness). From the theory, Kelly derived a psychotherapy approach and also a technique called the Repertory Grid Interview that helped his patients to uncover their own "constructs" with minimal intervention or interpretation by the therapist. The repertory grid was later adapted for various uses within organizations, including decision-making and interpretation of other people's world-views.

Humanistic theories

Humanistic psychology emphasizes that people have free will and that this plays an active role in determining how they behave. Accordingly, humanistic psychology focuses on the subjective experiences of persons, as opposed to forced, definitive factors that determine behavior. Abraham Maslow and Carl Rogers were proponents of this view, which is based on the "phenomenal field" theory of Combs and Snygg (1949). Rogers and Maslow were among a group of psychologists who worked together for a decade to produce the Journal of Humanistic Psychology. This journal was primarily focused on viewing individuals as a whole, rather than focusing solely on separate traits and processes within the individual. Robert W.
White wrote the book The Abnormal Personality, which became a standard text on abnormal psychology. He also investigated the human need to strive for positive goals like competence and influence, to counterbalance Freud's emphasis on the pathological elements of personality development. Maslow spent much of his time studying what he called "self-actualizing persons", those who are "fulfilling themselves and doing the best they are capable of doing". Maslow believed that all who are interested in growth move towards self-actualizing (growth, happiness, satisfaction) views. Many of these people demonstrate a trend in dimensions of their personalities. According to Maslow, self-actualizers share four key characteristics: Awareness – maintaining constant enjoyment and awe of life. These individuals often experienced a "peak experience", which Maslow defined as an "intensification of any experience to the degree there is a loss or transcendence of self". A peak experience is one in which an individual perceives an expansion of themselves, and detects a unity and meaningfulness in life; intense concentration on an activity one is involved in, such as running a marathon, may invoke one. Reality- and problem-centered – having a tendency to be concerned with "problems" in their surroundings. Acceptance/Spontaneity – accepting their surroundings and what cannot be changed. Unhostile sense of humor/democratic – not taking kindly to jokes at others' expense, which can be viewed as offensive; having friends of all backgrounds and religions and holding very close friendships. Maslow and Rogers emphasized a view of the person as an active, creative, experiencing human being who lives in the present and subjectively responds to current perceptions, relationships, and encounters.
They disagreed with the dark, pessimistic outlook of Freudian psychoanalysis, viewing humanistic theories instead as positive and optimistic proposals which stress the tendency of the human personality toward growth and self-actualization. This progressing self remains the center of its constantly changing world; a world that helps mold the self but does not necessarily confine it. Rather, the self has the opportunity to mature based on its encounters with this world. This understanding attempts to reduce the acceptance of hopeless redundancy. Humanistic therapy typically relies on the client for information about the past and its effect on the present; therefore, the client dictates the type of guidance the therapist may initiate. This allows for an individualized approach to therapy. Rogers found that patients differ in how they respond to other people. Rogers tried to model a particular approach to therapy – he stressed the reflective or empathetic response. This response type takes the client's viewpoint and reflects back their feeling and the context for it. An example of a reflective response would be, "It seems you are feeling anxious about your upcoming marriage". This response type seeks to clarify the therapist's understanding while also encouraging the client to think more deeply and to fully understand the feelings they have expressed.

Biopsychological theories

Biology plays a very important role in the development of personality. The study of the biological level in personality psychology focuses primarily on identifying the role of genetic determinants and how they mold individual personalities. Some of the earliest thinking about possible biological bases of personality grew out of the case of Phineas Gage. In an 1848 accident, a large iron rod was driven through Gage's head, and his personality apparently changed as a result, although descriptions of these psychological changes are usually exaggerated.
In general, patients with brain damage have been difficult to find and study. In the 1990s, researchers began to use electroencephalography (EEG), positron emission tomography (PET), and more recently functional magnetic resonance imaging (fMRI), which is now the most widely used imaging technique for localizing personality traits in the brain.

Genetic basis of personality

Ever since the Human Genome Project allowed for a much more in-depth comprehension of genetics, there has been an ongoing controversy involving heritability, personality traits, and environmental versus genetic influence on personality. The human genome is known to play a role in the development of personality. Previously, genetic personality studies focused on specific genes correlating with specific personality traits. Today's view of the gene-personality relationship focuses primarily on the activation and expression of genes related to personality, and forms part of what is referred to as behavioural genetics. Genes provide numerous options for how cells may be expressed; however, the environment determines which of these are activated. Many studies have noted this relationship in the varying ways our bodies can develop, but the interaction between genes and the shaping of our minds and personality is also relevant to this biological relationship. DNA-environment interactions are important in the development of personality because this relationship determines what part of the DNA code is actually made into proteins that will become part of an individual. While different choices are made available by the genome, in the end the environment is the ultimate determinant of what becomes activated. Small changes in DNA in individuals are what lead to the uniqueness of every person, as well as to differences in looks, abilities, brain functioning, and all the factors that culminate in a cohesive personality.
Cattell and Eysenck proposed that genetics have a powerful influence on personality. A large part of the evidence linking genetics and the environment to personality has come from twin studies. This "twin method" compares the similarity in personality of genetically identical twins with that of fraternal twins. One of the first of these twin studies measured 800 pairs of twins, studied numerous personality traits, and determined that identical twins are most similar in their general abilities. Personality similarities were found to be less pronounced for self-concepts, goals, and interests. Twin studies have also been important in the creation of the five-factor personality model: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Neuroticism and extraversion are the two most widely studied traits. Individuals scoring high in trait extraversion more often display characteristics such as impulsiveness, sociability, and activeness. Individuals scoring high in trait neuroticism are more likely to be moody, anxious, or irritable. Identical twins have higher correlations in personality traits than fraternal twins. One study measuring genetic influence on twins in five different countries found that the correlations for identical twins were about .50, while for fraternal twins they were about .20. These findings suggest that heredity and environment interact to determine one's personality.

Evolutionary theory

Charles Darwin is the founder of the theory of the evolution of species. The evolutionary approach to personality psychology is based on this theory. It examines how individual personality differences arise through natural selection. Through natural selection, organisms change over time through adaptation and selection. Traits develop, and certain genes come into expression, based on an organism's environment and on how these traits aid the organism's survival and reproduction.
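The twin correlations quoted in the genetics passage above (about .50 for identical twins versus about .20 for fraternal twins) can be turned into a rough heritability estimate using Falconer's formula, h² ≈ 2(r_MZ − r_DZ). This formula is a classical approximation from behavioural genetics, not something described in the text itself; the sketch below simply applies it to the quoted figures:

```python
def falconer_heritability(r_mz, r_dz):
    """Rough broad-sense heritability estimate from twin correlations,
    using Falconer's formula: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_mz - r_dz)

# Correlations quoted in the text: ~.50 (identical), ~.20 (fraternal)
h2 = falconer_heritability(0.50, 0.20)
print(round(h2, 2))  # -> 0.6, i.e. roughly 60% of trait variance attributed to genes
```

The remaining variance is attributed to shared and non-shared environment under the same (strong) assumptions, which is consistent with the text's conclusion that heredity and environment jointly determine personality.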
Polymorphisms, such as gender and blood type, are forms of diversity which evolve to benefit a species as a whole. The theory of evolution has wide-ranging implications for personality psychology. Personality viewed through the lens of evolutionary psychology places a great deal of emphasis on specific traits that are most likely to aid in survival and reproduction, such as conscientiousness, sociability, emotional stability, and dominance. The social aspects of personality can be seen through an evolutionary perspective. Specific character traits develop and are selected for because they play an important and complex role in the social hierarchy of organisms. Characteristics of this social hierarchy include the sharing of important resources, family and mating interactions, and the harm or help organisms can bestow upon one another.

Drive theories

In the 1930s, John Dollard and Neal Elgar Miller met at Yale University and began an attempt to integrate drives (see Drive theory) into a theory of personality, building on the work of Clark Hull. They began with the premise that personality could be equated with the habitual responses exhibited by an individual – their habits. From there, they determined that these habitual responses were built on secondary, or acquired, drives. Secondary drives are internal needs, resulting from learning, that direct the behaviour of an individual. Acquired drives are learned, by and large in the manner described by classical conditioning: when we are in a certain environment and experience a strong response to a stimulus, we internalize cues from that environment. When we later find ourselves in an environment with similar cues, we begin to act in anticipation of a similar stimulus. Thus, we are likely to experience anxiety in an environment with cues similar to one where we have experienced pain or fear – such as the dentist's office.
Secondary drives are built on primary drives, which are biologically driven and motivate us to act with no prior learning process – such as hunger, thirst or the need for sexual activity. However, secondary drives are thought to represent more specific elaborations of primary drives, behind which the functions of the original primary drive continue to exist. Thus, the primary drives of fear and pain exist behind the acquired drive of anxiety. Secondary drives can be based on multiple primary drives and even on other secondary drives; this is said to give them strength and persistence. Examples include the need for money, which was conceptualized as arising from multiple primary drives such as the drives for food and warmth, as well as from secondary drives such as imitativeness (the drive to do as others do) and anxiety. Secondary drives vary based on the social conditions under which they were learned – such as culture. Dollard and Miller used the example of food, stating that the primary drive of hunger manifested itself behind the learned secondary drive of an appetite for a specific type of food, which was dependent on the culture of the individual. Secondary drives are also explicitly social, representing a manner in which we convey our primary drives to others. Indeed, many primary drives are actively repressed by society (such as the sexual drive). Dollard and Miller believed that the acquisition of secondary drives was essential to childhood development. As children develop, they learn not to act on their primary drives, such as hunger, but acquire secondary drives through reinforcement.
Friedman and Schustack describe an example of such developmental changes: if an infant's active orientation towards others brings about the fulfillment of primary drives, such as being fed or having their diaper changed, the infant will develop a secondary drive to pursue similar interactions with others – perhaps leading to a more gregarious individual. Dollard and Miller's belief in the importance of acquired drives led them to reconceive Sigmund Freud's theory of psychosexual development. They agreed with the timing Freud used, but believed that these periods corresponded to the successful learning of certain secondary drives. Dollard and Miller gave many examples of how secondary drives impact our habitual responses – and by extension our personalities – including anger, social conformity, imitativeness and anxiety, to name a few. In the case of anxiety, Dollard and Miller note that people who generalize the situation in which they experience the anxiety drive will experience anxiety far more than they should. These people are often anxious all the time, and anxiety becomes part of their personality. This example shows how drive theory can have
Accordingly, humanistic psychology focuses on subjective experiences of persons as opposed to forced, definitive factors that determine behavior. Abraham Maslow and Carl Rogers were proponents of this view, which is based on the "phenomenal field" theory of Combs and Snygg (1949). Rogers and Maslow were among a group of psychologists that worked together for a decade to produce the Journal of Humanistic Psychology. This journal was primarily focused on viewing individuals as a whole, rather than focusing solely on separate traits and processes within the individual. Robert W. White wrote the book The Abnormal Personality that became a standard text on abnormal psychology. He also investigated the human need to strive for positive goals like competence and influence, to counterbalance the emphasis of Freud on the pathological elements of personality development. Maslow spent much of his time studying what he called "self-actualizing persons", those who are "fulfilling themselves and doing the best they are capable of doing". Maslow believes all who are interested in growth move towards self-actualizing (growth, happiness, satisfaction) views. Many of these people demonstrate a trend in dimensions of their personalities. Characteristics of self-actualizers according to Maslow include the four key dimensions: Awareness – maintaining constant enjoyment and awe of life. These individuals often experienced a "peak experience". He defined a peak experience as an "intensification of any experience to the degree there is a loss or transcendence of self". A peak experience is one in which an individual perceives an expansion of themselves, and detects a unity and meaningfulness in life. Intense concentration on an activity one is involved in, such as running a marathon, may invoke a peak experience. Reality and problem centered – having a tendency to be concerned with "problems" in surroundings. Acceptance/Spontaneity – accepting surroundings and what cannot be changed. 
Unhostile sense of humor/democratic – do not take kindly to joking about others, which can be viewed as offensive. They have friends of all backgrounds and religions and hold very close friendships. Maslow and Rogers emphasized a view of the person as an active, creative, experiencing human being who lives in the present and subjectively responds to current perceptions, relationships, and encounters. They disagree with the dark, pessimistic outlook of those in the Freudian psychoanalysis ranks, but rather view humanistic theories as positive and optimistic proposals which stress the tendency of the human personality toward growth and self-actualization. This progressing self will remain the center of its constantly changing world; a world that will help mold the self but not necessarily confine it. Rather, the self has opportunity for maturation based on its encounters with this world. This understanding attempts to reduce the acceptance of hopeless redundancy. Humanistic therapy typically relies on the client for information of the past and its effect on the present, therefore the client dictates the type of guidance the therapist may initiate. This allows for an individualized approach to therapy. Rogers found patients differ in how they respond to other people. Rogers tried to model a particular approach to therapy – he stressed the reflective or empathetic response. This response type takes the client's viewpoint and reflects back their feeling and the context for it. An example of a reflective response would be, "It seems you are feeling anxious about your upcoming marriage". This response type seeks to clarify the therapist's understanding while also encouraging the client to think more deeply and seek to fully understand the feelings they have expressed. Biopsychological theories Biology plays a very important role in the development of personality. 
The study of the biological level in personality psychology focuses primarily on identifying the role of genetic determinants and how they mold individual personalities. Some of the earliest thinking about possible biological bases of personality grew out of the case of Phineas Gage. In an 1848 accident, a large iron rod was driven through Gage's head, and his personality apparently changed as a result, although descriptions of these psychological changes are usually exaggerated. In general, patients with brain damage have been difficult to find and study. In the 1990s, researchers began to use electroencephalography (EEG), positron emission tomography (PET), and more recently functional magnetic resonance imaging (fMRI), which is now the most widely used imaging technique to help localize personality traits in the brain. Genetic basis of personality Ever since the Human Genome Project allowed for a much more in depth comprehension of genetics, there has been an ongoing controversy involving heritability, personality traits, and environmental vs. genetic influence on personality. The human genome is known to play a role in the development of personality. Previously, genetic personality studies focused on specific genes correlating to specific personality traits. Today's view of the gene-personality relationship focuses primarily on the activation and expression of genes related to personality and forms part of what is referred to as behavioural genetics. Genes provide numerous options for varying cells to be expressed; however, the environment determines which of these are activated. Many studies have noted this relationship in varying ways in which our bodies can develop, but the interaction between genes and the shaping of our minds and personality is also relevant to this biological relationship. 
DNA-environment interactions are important in the development of personality because this relationship determines what part of the DNA code is actually made into proteins that will become part of an individual. While different choices are made available by the genome, in the end, the environment is the ultimate determinant of what becomes activated. Small changes in DNA in individuals are what lead to the uniqueness of every person as well as differences in looks, abilities, brain functioning, and all the factors that culminate to develop a cohesive personality. Cattell and Eysenck have proposed that genetics have a powerful influence on personality. A large part of the evidence collected linking genetics and the environment to personality has come from twin studies. This "twin method" compares levels of similarity in personality between genetically identical and fraternal twins. One of the first of these twin studies measured 800 pairs of twins, studied numerous personality traits, and determined that identical twins are most similar in their general abilities. Personality similarities were found to be less related for self-concepts, goals, and interests. Twin studies have also been important in the creation of the five factor personality model: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Neuroticism and extraversion are the two most widely studied traits. Individuals scoring high in trait extraversion more often display characteristics such as impulsiveness, sociability, and activeness. Individuals scoring high in trait neuroticism are more likely to be moody, anxious, or irritable. Identical twins have higher correlations in personality traits than fraternal twins. One study measuring genetic influence on twins in five different countries found that the correlations for identical twins were .50, while for fraternal twins they were about .20. It is suggested that heredity and environment interact to determine one's personality. 
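The twin correlations above can be turned into a rough heritability estimate. One standard twin-study heuristic (Falconer's formula, which the text does not name; it is used here only as an illustration) doubles the difference between identical-twin (MZ) and fraternal-twin (DZ) correlations:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Rough broad-sense heritability estimate: twice the difference
    between identical-twin (MZ) and fraternal-twin (DZ) correlations."""
    return 2 * (r_mz - r_dz)

# Correlations reported in the five-country twin study cited above.
h2 = falconer_heritability(0.50, 0.20)
print(h2)  # 0.6: roughly 60% of trait variance attributed to genes
```

Note that this is only a first-order heuristic; it assumes MZ and DZ twins share environments to the same degree, which is exactly the kind of gene–environment interaction the surrounding text cautions about.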
Evolutionary theory Charles Darwin founded the theory of evolution by natural selection, and the evolutionary approach to personality psychology is based on this theory. This approach examines how individual personality differences are based on natural selection. Through natural selection, organisms change over time through adaptation and selection. Traits are developed and certain genes come into expression based on an organism's environment and how these traits aid in an organism's survival and reproduction. Polymorphisms, such as gender and blood type, are forms of diversity which evolve to benefit a species as a whole. The theory of evolution has wide-ranging implications for personality psychology. Personality viewed through the lens of evolutionary psychology places a great deal of emphasis on specific traits that are most likely to aid in survival and reproduction, such as conscientiousness, sociability, emotional stability, and dominance. The social aspects of personality can be seen through an evolutionary perspective. Specific character traits develop and are selected for because they play an important and complex role in the social hierarchy of organisms. Such characteristics of this social hierarchy include the sharing of important resources, family and mating interactions, and the harm or help organisms can bestow upon one another. Drive theories In the 1930s, John Dollard and Neal Elgar Miller met at Yale University, and began an attempt to integrate drives (see Drive theory) into a theory of personality, basing themselves on the work of Clark Hull. They began with the premise that personality could be equated with the habitual responses exhibited by an individual – their habits. From there, they determined that these habitual responses were built on secondary, or acquired drives. Secondary drives are internal needs directing the behaviour of an individual that result from learning. 
Acquired drives are learned, by and large in the manner described by classical conditioning. When we are in a certain environment and experience a strong response to a stimulus, we internalize cues from that environment. When we find ourselves in an environment with similar cues, we begin to act in anticipation of a similar stimulus. Thus, we are likely to experience anxiety in an environment with cues similar to one where we have experienced pain or fear – such as the dentist's office. Secondary drives are built on primary drives, which are biologically driven, and motivate us to act with no prior learning process – such as hunger, thirst or the need for sexual activity. However, secondary drives are thought to represent more specific elaborations of primary drives, behind which the functions of the original primary drive continue to exist. Thus, the primary drives of fear and pain exist behind the acquired drive of anxiety. Secondary drives can be based on multiple primary drives and even on other secondary drives. This is said to give them strength and persistence. Examples include the need for money, which was conceptualized as arising from multiple primary drives such as the drive for food and warmth, as well as from secondary drives such as imitativeness (the drive to do as others do) and anxiety. Secondary drives vary based on the social conditions under which they were learned – such as culture. Dollard and Miller used the example of food, stating that the primary drive of hunger manifested itself behind the learned secondary drive of an appetite for a specific type of food, which was dependent on the culture of the individual. Secondary drives are also explicitly social, representing a manner in which we convey our primary drives to others. Indeed, many primary drives are actively repressed by society (such as the sexual drive). Dollard and Miller believed that the acquisition of secondary drives was essential to childhood development. 
As children develop, they learn |
Cross-linguistically, it seems as though pronouns share three distinct categories: point of view, person, and number. The breadth of each subcategory, however, tends to differ among languages. Binding theory and antecedents The use of pronouns often involves anaphora, where the meaning of the pronoun is dependent on another referential element. The referent of the pronoun is often the same as that of a preceding (or sometimes following) noun phrase, called the antecedent of the pronoun. The grammatical behavior of certain types of pronouns, and in particular their possible relationship with their antecedents, has been the focus of studies in binding, notably in the Chomskyan government and binding theory. In this binding context, reflexive and reciprocal pronouns in English (such as himself and each other) are referred to as anaphors (in a specialized restricted sense) rather than as pronominal elements. Under binding theory, specific principles apply to different sets of pronouns. In English, reflexive and reciprocal pronouns must adhere to Principle A: an anaphor (reflexive or reciprocal, such as "each other") must be bound in its governing category (roughly, the clause). Therefore, it must be lower in the syntactic structure (it must have an antecedent) and have a direct relationship with its referent. This is called a C-command relationship. For instance, we see that John cut himself is grammatical, but Himself cut John is not, despite having identical arguments, since himself, the reflexive, must be lower in structure than John, its referent. Additionally, examples like John said that Mary cut himself are not grammatical because there is an intermediary noun, Mary, that disallows the two referents from having a direct relationship. On the other hand, personal pronouns (such as him or them) must adhere to Principle B: a pronoun must be free (i.e., not bound) within its governing category (roughly, the clause). 
This means that although the pronouns can have a referent, they cannot have a direct relationship with the referent where the referent selects the pronoun. For instance, John said Mary cut him is grammatical because the two co-referents, John and him are separated structurally by Mary. This is why a sentence like John cut him where him refers to John is ungrammatical. Binding cross-linguistically The type of binding that applies to subsets of pronouns varies cross-linguistically. For instance, in German linguistics, pronouns can be split into two distinct categories — personal pronouns and d-pronouns. Although personal pronouns act identically to English personal pronouns (i.e. follow Principle B), d-pronouns follow yet another principle, Principle C, and function similarly to nouns in that they cannot have a direct relationship to an antecedent. Antecedents The following sentences give examples of particular types of pronouns used with antecedents: Third-person personal pronouns: That poor man looks as if he needs a new coat. (the noun phrase that poor man is the antecedent of he) Julia arrived yesterday. I met her at the station. (Julia is the antecedent of her) When they saw us, the lions began roaring (the lions is the antecedent of they; as it comes after the pronoun it may be called a postcedent) Other personal pronouns in some circumstances: Terry and I were hoping no one would find us. (Terry and I is the antecedent of us) You and Alice can come if you like. (you and Alice is the antecedent of the second – plural – you) Reflexive and reciprocal pronouns: Jack hurt himself. (Jack is the antecedent of himself) We were teasing each other. (we is the antecedent of each other) Relative pronouns: The woman who looked at you is my sister. (the woman is the antecedent of who) Some other types, such as indefinite pronouns, are usually used without antecedents. Relative pronouns are used without antecedents in free relative clauses. 
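The c-command relationship invoked by Principles A and B can be made concrete on a toy parse tree. The sketch below is a deliberately simplified illustration (the `Node` class and the "first branching ancestor" shortcut are illustrative choices, not part of the article): a reflexive is licensed only if its antecedent c-commands it.

```python
class Node:
    """A node in a toy constituency tree."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

    def dominates(self, other):
        # A node dominates every node strictly beneath it in the tree.
        node = other.parent
        while node is not None:
            if node is self:
                return True
            node = node.parent
        return False

def c_commands(a, b):
    # Simplified c-command: neither node dominates the other, and the
    # first branching node above `a` dominates `b`.
    if a.dominates(b) or b.dominates(a):
        return False
    n = a.parent
    while n is not None and len(n.children) < 2:
        n = n.parent
    return n is not None and n.dominates(b)

# "John cut himself": [S [NP John] [VP [V cut] [NP himself]]]
john = Node("John")
himself = Node("himself")
tree = Node("S", [
    Node("NP", [john]),
    Node("VP", [Node("V", [Node("cut")]), Node("NP", [himself])]),
])

# The subject c-commands the object reflexive, so "John cut himself"
# satisfies Principle A...
print(c_commands(john, himself))   # True
# ...but the object position does not c-command the subject, which is
# why "Himself cut John" is ruled out.
print(c_commands(himself, john))   # False
```

A Principle B check would be the mirror image: a plain pronoun such as him must *not* be c-commanded by a co-referent within the same clause.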
Even third-person personal pronouns are sometimes used without antecedents ("unprecursed") – this applies to special uses such as dummy pronouns and generic they, as well as cases where the referent is implied by the context. English pronouns English personal pronouns have a number of different syntactic contexts (Subject, Object, Possessive, Reflexive) and many features: person (1st, 2nd, 3rd); number (singular, plural); gender (masculine, feminine, neuter or inanimate, epicene) English also has other pronoun types, including demonstrative, relative, indefinite, and interrogative pronouns: Personal and possessive Personal Personal pronouns may be classified by person, number, gender and case. English has three persons (first, second and third) and two numbers (singular and plural); in the third person singular there are also distinct pronoun forms for male, female and neuter gender. Principal forms are shown in the adjacent table. English personal pronouns have two cases, subject and object. Subject pronouns are used in subject position (I like to eat chips, but she does not). Object pronouns are used for the object of a verb or preposition (John likes me but not her). Other distinct forms found in some languages include: Second person informal and formal pronouns (the T–V distinction), like tu and vous in French. Formal second person pronouns can also signify plurality in many languages. There is no such distinction in standard modern English, though Elizabethan English marked the distinction with thou (singular informal) and you (plural or singular formal). Some dialects of English have developed informal plural second person pronouns, for instance, y'all (Southern American English) and you guys (American English). Inclusive and exclusive first person plural pronouns, which indicate whether or not the audience is included, that is, whether we means "you and I" or "they and I". There is no such distinction in English. 
Intensive (emphatic) pronouns, which re-emphasize a noun or pronoun that has already been mentioned. English uses the same forms as the reflexive pronouns; for example: I did it myself (contrast reflexive use, I did it to myself). Direct and indirect object pronouns, such as le and lui in French. English uses the same form for both; for example: Mary loves him (direct object); Mary sent him a letter (indirect object). Prepositional pronouns, used after a preposition. English uses ordinary object pronouns here: Mary looked at him. Disjunctive pronouns, used in isolation or in certain other special grammatical contexts, like moi in French. No distinct forms exist in English; for example: Who does this belong to? Me. Strong and weak forms of certain pronouns, found in some languages such as Polish. Pronoun avoidance, where personal pronouns are substituted by titles or kinship terms (particularly common in South-East Asia). Possessive Possessive pronouns are used to indicate possession (in a broad sense). Some occur as independent noun phrases: mine, yours, hers, ours, theirs. An example is: Those clothes are mine. Others act as a determiner and must accompany a noun: my, your, her, our, your, their, as in: I lost my wallet. (His and its can fall into either category, although its is nearly always found in the second.) Those of the second type have traditionally also been described as possessive adjectives, and in more modern terminology as possessive determiners. The term "possessive pronoun" is sometimes restricted to the first type. Both types replace possessive noun phrases. As an example, Their crusade to capture our attention could replace The advertisers' crusade to capture our attention. Reflexive and reciprocal Reflexive pronouns are used when a person or thing acts on itself, for example, John cut himself. In English they all end in -self or -selves and must refer to a noun phrase elsewhere in the same clause. 
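The "adjacent table" of principal forms referred to above does not survive in this extract, but the standard English paradigm of person, number, gender and case can be laid out as data. The dictionary layout and the `form` helper below are illustrative reconstructions, not the article's own table:

```python
# English personal pronouns: (person, number) ->
# (subject, object, possessive determiner, possessive pronoun, reflexive)
PRONOUNS = {
    ("1st", "sg"):   ("I", "me", "my", "mine", "myself"),
    ("2nd", "sg"):   ("you", "you", "your", "yours", "yourself"),
    ("3rd m", "sg"): ("he", "him", "his", "his", "himself"),
    ("3rd f", "sg"): ("she", "her", "her", "hers", "herself"),
    ("3rd n", "sg"): ("it", "it", "its", "its", "itself"),
    ("1st", "pl"):   ("we", "us", "our", "ours", "ourselves"),
    ("2nd", "pl"):   ("you", "you", "your", "yours", "yourselves"),
    ("3rd", "pl"):   ("they", "them", "their", "theirs", "themselves"),
}

def form(person, number, case):
    """Look up one cell of the paradigm by person, number and case."""
    cases = ("subject", "object", "possessive_det",
             "possessive_pron", "reflexive")
    return PRONOUNS[(person, number)][cases.index(case)]

print(form("3rd f", "sg", "object"))   # her ("John likes me but not her")
print(form("1st", "pl", "reflexive"))  # ourselves
```

The table also makes the syncretisms discussed in the text visible: second-person you is identical in subject and object case and across numbers, and his and its double as both possessive types.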
Reciprocal pronouns refer to a reciprocal relationship (each other, one another). They must refer to a noun phrase in the same clause. An example in English is: They do not like each other. In some languages, the same forms can be used as both reflexive and reciprocal pronouns. Demonstrative Demonstrative pronouns (in English, this, that and their plurals these, those) often distinguish their targets by pointing or some other indication of position; for example, I'll take these. They may also be anaphoric, depending on an earlier expression for context, for example, A kid actor would try to be all sweet, and who needs that? Indefinite Indefinite pronouns, the largest group of pronouns, refer to one or more unspecified persons or things. One group in English includes compounds of some-, any-, every- and no- with -thing, -one and -body, for example: Anyone can do that. Another group, including many, more, both, and most, can appear alone. On some analyses, determiners form a subclass of pronouns or vice versa. The distinction may be considered to be one of subcategorization or valency, rather like the distinction between transitive and intransitive verbs – determiners take a noun phrase complement like transitive verbs do, while pronouns do not. This is consistent with the determiner phrase viewpoint, whereby a determiner, rather than the noun that follows it, is taken to be the head of the phrase. 
At the time, Pelagius' teachings had considerable support among Christians, especially other ascetics. Considerable parts of the Christian world had never heard of Augustine's doctrine of original sin. Eighteen Italian bishops, including Julian of Eclanum, protested the condemnation of Pelagius and refused to follow Zosimus' . Many of them later had to seek shelter with the Greek bishops Theodore of Mopsuestia and Nestorius, leading to accusations that Pelagian errors lay beneath the Nestorian controversy over Christology. Both Pelagianism and Nestorianism were condemned at the Council of Ephesus in 431. With its supporters either condemned or forced to move to the East, Pelagianism ceased to be a viable doctrine in the Latin West. Despite repeated attempts to suppress Pelagianism and similar teachings, some followers were still active in the Ostrogothic Kingdom (493–553), most notably in Picenum and Dalmatia during the rule of Theoderic the Great. Pelagianism was also reported to be popular in Britain, as Germanus of Auxerre made at least one visit (in 429) to denounce the heresy. Some scholars, including Nowell Myres and John Morris, have suggested that Pelagianism in Britain was understood as an attack on Roman decadence and corruption, but this idea has not gained general acceptance. Pelagius' teachings Free will and original sin The idea that God had created anything or anyone who was evil by nature struck Pelagius as Manichean. Pelagius taught that humans were free of the burden of original sin, because it would be unjust for any person to be blamed for another's actions. According to Pelagianism, humans were created in the image of God and had been granted conscience and reason to determine right from wrong, and the ability to carry out correct actions. If "sin" could not be avoided it could not be considered sin. In Pelagius' view, the doctrine of original sin placed too little emphasis on the human capacity for self-improvement, leading either to despair or to reliance on forgiveness without responsibility. 
He also argued that many young Christians were comforted with false security about their salvation leading them to relax their Christian practice. Pelagius believed that Adam's transgression had caused humans to become mortal, and given them a bad example, but not corrupted their nature, while Caelestius went even further, arguing that Adam had been created mortal. Pelagius did not even accept the idea that original sin had instilled fear of death among humans, as Augustine said. Instead, Pelagius taught that the fear of death could be overcome by devout Christians, and that death could be a release from toil rather than a punishment. Both Pelagius and Caelestius reasoned that it would be unreasonable for God to command the impossible, and therefore each human retained absolute freedom of action and full responsibility for all actions. Pelagius did not accept any limitation on free will, including necessity, compulsion, or limitations of nature. He believed that teaching a strong position on free will was the best motivation for individuals to reform their conduct. Sin and virtue In the Pelagian view, by corollary, sin was not an inevitable result of fallen human nature, but instead came about by free choice and bad habits; through repeated sinning, a person could corrupt their own nature and enslave themself to sin. Pelagius believed that God had given man the Old Testament and Mosaic Law in order to counter these ingrained bad habits, and when that wore off over time God revealed the New Testament. However, because Pelagius considered a person to always have the ability to choose the right action in each circumstance, it was therefore theoretically possible (though rare) to live a sinless life. Jesus Christ, held in Christian doctrine to have lived a life without sin, was the ultimate example for Pelagians seeking perfection in their own lives, but there were also other humans who were without sin—including some notable pagans and especially the Hebrew prophets. 
This view was at odds with that of Augustine and orthodox Christianity, which taught that Jesus was the only man free of sin. Pelagius did teach Jesus' vicarious atonement for the sins of mankind and the cleansing effect of baptism, but placed less emphasis on these aspects. Pelagius taught that a human's ability to act correctly was a gift of God, as well as divine revelation and the example and teachings of Jesus. Further spiritual development, including faith in Christianity, was up to individual choice, not divine benevolence. Pelagius accepted no excuse for sin, and argued that Christians should be like the church described in Ephesians 5:27, "without spot or wrinkle". Instead of accepting the inherent imperfection of man, or arguing that the highest moral standards could only be applied to an elite, Pelagius taught that all Christians should strive for perfection. Like Jovinian, Pelagius taught that married life was not inferior to monasticism, but with the twist that all Christians regardless of life situation were called to a kind of asceticism. Pelagius taught that it was not sufficient for a person to call themselves a Christian and follow the commandments of scripture; it was also essential to actively do good works and cultivate virtue, setting themselves apart from the masses who were "Christian in name only", and that Christians ought to be extraordinary and irreproachable in conduct. Specifically, he emphasized the importance of reading scripture, following religious commandments, practicing charity, taking responsibility for one's actions, and maintaining modesty and moderation. Pelagius taught that true virtue was not reflected externally in social status, but was an internal spiritual state. He explicitly called on wealthy Christians to share their fortunes with the poor. (Augustine criticized Pelagius' call for wealth redistribution.) 
Baptism and judgment Because sin in the Pelagian view was deliberate, with people responsible only for their own actions, infants were considered without fault in Pelagianism, and unbaptized infants were not thought to be sent to hell. Like early Augustine, Pelagians believed that infants would be sent to purgatory. Although Pelagius rejected that infant baptism was necessary to cleanse original sin, he nevertheless supported the practice because he felt it improved their spirituality through a closer union with Jesus. For adults, baptism was essential because it was the mechanism for obtaining forgiveness of the sins that a person had personally committed and a new beginning in their relationship with God. After death, adults would be judged by their acts and omissions and consigned to everlasting fire if they had failed: "not because of the evils they have done, but for their failures to do good". He did not accept purgatory as a possible destination for adults. Although Pelagius taught that the path of righteousness was open to all, in practice only a few would manage to follow it and be saved. Like many medieval theologians, Pelagius believed that instilling in Christians the fear of hell was often necessary to convince them to follow their religion where internal motivation was absent or insufficient. Comparison Significant influences on Pelagius included Eastern Christianity, which had a more positive view of human nature, and classical philosophy, from which he drew the ideas of personal autonomy and self-improvement. Augustine accused Pelagius' idea of virtue of being "Ciceronian", because it overemphasized the role of human intellect and will. Although his teachings on original sin were novel, Pelagius' views on grace, free will and predestination were similar to those of contemporary Greek-speaking theologians such as Origen, John Chrysostom, and Jerome. 
Theologian Carol Harrison commented that Pelagianism is "a radically different alternative to Western understandings of the human person, human responsibility and freedom, ethics and the nature of salvation" which might have come about if Augustine had not been victorious in the Pelagian controversy. According to Harrison, "Pelagianism represents an attempt to safeguard God’s justice, to preserve the integrity of human nature as created by God, and of human beings' obligation, responsibility and ability to attain a life of perfect righteousness." However, this is at the expense of downplaying human frailty and presenting "the operation of divine grace as being merely external". According to scholar Rebecca Weaver, "what most distinguished Pelagius was his conviction of an unrestricted freedom of choice, given by God and immune to alteration by sin or circumstance." Definition What Augustine called "Pelagianism" was more his own invention than that of Pelagius. According to Thomas Scheck, Pelagianism is the heresy of denying Catholic Church teaching on original sin, or more specifically the beliefs condemned as heretical in 417 and 418. In her study, Ali Bonner (a lecturer at the University
on sin and original sin. Caelestius defended himself by arguing that original sin was still being debated and his beliefs were orthodox. His views on grace were not mentioned, although Augustine (who had not been present) later claimed that Caelestius had been condemned because of "arguments against the grace of Christ". Unlike Caelestius, Pelagius refused to answer the question as to whether man had been created mortal, and, outside of Northern Africa, it was Caelestius' teachings which were the main targets of condemnation. In 412, Augustine read Pelagius' Commentary on Romans and described its author as a "highly advanced Christian". 
Augustine maintained friendly relations with Pelagius until the next year, initially only condemning Caelestius' teachings, and considering his dispute with Pelagius to be an academic one. Jerome attacked Pelagianism for saying that humans had the potential to be sinless, and connected it with other recognized heresies, including Origenism, Jovinianism, Manichaeanism, and Priscillianism. Scholar Michael Rackett noted that the linkage of Pelagianism and Origenism was "dubious" but influential. Jerome also disagreed with Pelagius' strong view of free will. In 415, he wrote to refute Pelagian statements. Noting that Jerome was also an ascetic and critical of earthly wealth, historian Wolf Liebeschuetz suggested that his motive for opposing Pelagianism was envy of Pelagius' success. In 415, Augustine's emissary Orosius brought charges against Pelagius at a council in Jerusalem, which were referred to Rome for judgement. The same year, the exiled Gallic bishops Heros of Arles and Lazarus of Aix accused Pelagius of heresy, citing passages in Caelestius' . Pelagius defended himself by disavowing Caelestius' teachings, leading to his acquittal at the Synod of Diospolis in Lod, which proved to be a key turning point in the controversy. Following the verdict, Augustine convinced two synods in North Africa to condemn Pelagianism, whose findings were partially confirmed by Pope Innocent I. In January 417, shortly before his death, Innocent excommunicated Pelagius and two of his followers. Innocent's successor, Zosimus, reversed the judgement against Pelagius, but backtracked following pressure from the African bishops. Pelagianism was later condemned at the Council of Carthage in 418, after which Zosimus issued the excommunicating both Pelagius and Caelestius. Concern that Pelagianism undermined the role of the clergy and episcopacy was specifically cited in the judgement. At the time, Pelagius' teachings had considerable support among Christians, especially other ascetics. 
Considerable parts of the Christian world had never heard of Augustine's doctrine of original sin. Eighteen Italian bishops, including Julian of Eclanum, protested the condemnation of Pelagius and refused to follow Zosimus' ruling. Many of them later had to seek shelter with the Greek bishops Theodore of Mopsuestia and Nestorius, leading to accusations that Pelagian errors lay beneath the Nestorian controversy over Christology. Both Pelagianism and Nestorianism were condemned at the Council of Ephesus in 431. With its supporters either condemned or forced to move to the East, Pelagianism ceased to be a viable doctrine in the Latin West. Despite repeated attempts to suppress Pelagianism and similar teachings, some followers were still active in the Ostrogothic Kingdom (493–553), most notably in Picenum and Dalmatia during the rule of Theoderic the Great. Pelagianism was also reported to be popular in Britain, as Germanus of Auxerre made at least one visit (in 429) to denounce the heresy. Some scholars, including Nowell Myres and John Morris, have suggested that Pelagianism in Britain was understood as an attack on Roman decadence and corruption, but this idea has not gained general acceptance. Pelagius' teachings Free will and original sin The idea that God had created anything or anyone who was evil by nature struck Pelagius as Manichean. Pelagius taught that humans were free of the burden of original sin, because it would be unjust for any person to be blamed for another's actions. According to Pelagianism, humans were created in the image of God and had been granted conscience and reason to determine right from wrong, and the ability to carry out correct actions. If "sin" could not be avoided it could not be considered sin. In Pelagius' view, the doctrine of original sin placed too little emphasis on the human capacity for self-improvement, leading either to despair or to reliance on forgiveness without responsibility. 
He also argued that many young Christians were comforted with false security about their salvation leading them to relax their Christian practice. Pelagius believed that Adam's transgression had caused humans to become mortal, and given them a bad example, but not corrupted their nature, while Caelestius went even further, arguing that Adam had been created mortal. He did not even accept the idea that original sin had instilled fear of death among humans, as Augustine said. Instead, Pelagius taught that the fear of death could be overcome by devout Christians, and that death could be a release from toil rather than a punishment. Both Pelagius and Caelestius reasoned that it would be unreasonable for God to command the impossible, and therefore each human retained absolute freedom of action and full responsibility for all actions. Pelagius did not accept any limitation on free will, including necessity, compulsion, or limitations of nature. He believed that teaching a strong position on free will was the best motivation for individuals to reform their conduct. Sin and virtue In the Pelagian view, by corollary, sin was not an inevitable result of fallen human nature, but instead came about by free choice and bad habits; through repeated sinning, a person could corrupt their own nature and enslave themself to sin. Pelagius believed that God had given man the Old Testament and Mosaic Law in order to counter these ingrained bad habits, and when that wore off over time God revealed the New Testament. However, because Pelagius considered a person to always have the ability to choose the right action in each circumstance, it was therefore theoretically possible (though rare) to live a sinless life. Jesus Christ, held in Christian doctrine to have lived a life without sin, was the ultimate example for Pelagians seeking perfection in their own lives, but there were also other humans who were without sin—including some notable pagans and especially the Hebrew prophets. 
This view was at odds with that of Augustine and orthodox Christianity, which taught that Jesus was the only man free of sin. Pelagius did teach Jesus' vicarious atonement for the sins of mankind and the cleansing effect of baptism, but placed less emphasis on these aspects. Pelagius taught that a human's ability to act correctly was a gift of God, as well as divine revelation and the example and teachings of Jesus. Further spiritual development, including faith in Christianity, was up to individual choice, not divine benevolence. Pelagius accepted no excuse for sin, and argued that Christians should be like the church described in Ephesians 5:27, "without spot or wrinkle". Instead of accepting the inherent imperfection of man, or arguing that the highest moral standards could only be applied to an elite, Pelagius taught that all Christians should strive for perfection. Like Jovinian, Pelagius taught that married life was not inferior to monasticism, but with the twist that all Christians regardless of life situation were called to a kind of asceticism. Pelagius taught that it was not sufficient for a person to call themselves a Christian and follow the commandments of scripture; it was also essential to actively do good works and cultivate virtue, setting themselves apart from the masses who were "Christian in name only", and that Christians ought to be extraordinary and irreproachable in conduct. Specifically, he emphasized the importance of reading scripture, following religious commandments, charity, and taking responsibility for one's actions, and maintaining modesty and moderation. Pelagius taught that true virtue was not reflected externally in social status, but was an internal spiritual state. He explicitly called on wealthy Christians to share their fortunes with the poor. (Augustine criticized Pelagius' call for wealth redistribution.) 
Baptism and judgment Because sin in the Pelagian view was deliberate, with people responsible only for their own actions, infants were considered without fault in Pelagianism, and unbaptized infants were not thought to be sent to hell. Like early Augustine, Pelagians believed that infants would be sent to purgatory. Although Pelagius rejected that infant baptism was necessary to cleanse original sin, he nevertheless supported the practice because he felt it improved their spirituality through a closer union with Jesus. For adults, baptism was essential because it was the mechanism for obtaining forgiveness of the sins that a person had personally committed and a new beginning in their relationship with God. After death, adults would be judged by their acts and omissions and consigned to everlasting fire if they had failed: "not because of the evils they have done, but for their failures to do good". He did not accept purgatory as a possible destination for adults. Although Pelagius taught that the path of righteousness was open to all, in practice only a few would manage to follow it and be saved. Like many medieval theologians, Pelagius believed that instilling in Christians the fear of hell was often necessary to convince them to follow their religion where internal motivation was absent or insufficient. Comparison Significant influences on Pelagius included Eastern Christianity, which had a more positive view of human nature, and classical philosophy, from which he drew the ideas of personal autonomy and self-improvement. Augustine accused Pelagius' idea of virtue of being "Ciceronian", because it overemphasized the role of human intellect and will. Although his teachings on original sin were novel, Pelagius' views on grace, free will and predestination were similar to those of contemporary Greek-speaking theologians such as Origen, John Chrysostom, and Jerome. 
Theologian Carol Harrison commented that Pelagianism is "a radically different alternative to Western understandings of the human person, human responsibility and freedom, ethics and the nature of salvation" which might have come about if Augustine had not been victorious in the Pelagian controversy. According to Harrison, "Pelagianism represents an attempt to safeguard God’s justice, to preserve the integrity of human nature as created by God, and of human beings' obligation, responsibility and ability to attain a life of perfect righteousness." However, this is at the expense of downplaying human frailty and presenting "the operation of divine grace as being merely external". According to scholar Rebecca Weaver, "what most distinguished Pelagius was his conviction of an unrestricted freedom of choice, given by God and immune to alteration by sin or circumstance." Definition What Augustine called "Pelagianism" was more his own invention than that of Pelagius. According to Thomas Scheck, Pelagianism is the heresy of denying Catholic Church teaching on original sin, or more specifically the beliefs condemned as heretical in 417 and 418. In her study, Ali Bonner (a lecturer at the University of Cambridge) found that there was no
the Father. In the West, a version of this belief was known pejoratively as patripassianism by its critics (from Latin patri- "father" and passio "suffering"), because the teaching required that since God the Father had become directly incarnate in Christ, the Father literally sacrificed Himself on the Cross. 
Trinitarian perspective From the standpoint of the doctrine of the Trinity— one divine being existing in three persons— patripassianism is considered heretical since "it simply cannot make sense of the New Testament's teaching on the interpersonal relationship of Father, Son, and Spirit." In this view, patripassianism asserts that God the Father—rather than God the Son—became incarnate and suffered on the cross for humanity's redemption. This amplifies the personhood of Jesus Christ as the personality of the Father, but is seen by trinitarians as distorting the spiritual transaction of atonement that was taking place at the cross, which the Apostle Paul described as follows: "God [the Father] was reconciling the world to himself in Christ [the Son], not counting people’s sins against them. . . . God [the Father] made him who had no sin [Jesus of Nazareth] to be sin for us, so that in him [the Son] we might become the righteousness of God [the Father]." (2 Corinthians 5:19, 21) It is possible, however, to modify patripassianism so as to acknowledge the Divine Being as having feelings toward, and sharing in the experiences of, both Jesus— whom Christians regard as both human and divine— and other human beings. Full-orbed patripassianism denies Trinitarian distinctions, yet it does not contradict Christianity as defined in the Creeds to say that God feels or experiences things, including nonphysical forms of suffering. With regard to the crucifixion of Jesus, they claim it is consistent with Scriptural teaching to say that God the Father suffered—that is, felt emotional and spiritual pain as He watched His Son suffer on the Cross, as it is written "The Spirit searches all things, even the deep things of God (...) no one knows the thoughts of God except the Spirit of God (...) What we have received is (...) the Spirit who is from God." (1 Corinthians 2:10-12). 
History Patripassianism is attested as early as the 2nd century; theologians such as Praxeas speak of God as unipersonal. Patripassianism was referred to as a belief ascribed to those following Sabellianism, after its chief proponent, Sabellius, especially by the chief opponent Tertullian, who also opposed Praxeas. Sabellius, considered a founder of an early movement, was a priest in Rome who was excommunicated from the Church by Pope Callixtus I in 220. Sabellius advanced the doctrine of one God sometimes referred to as the “economic Trinity”, and he opposed the Eastern Orthodox doctrine of the “essential Trinity”. Praxeas and Noetus were among his major followers. Because the writings of Sabellius were destroyed it is hard to know if he actually believed in patripassianism, but one early
surely also be seen in an illegitimate baby boy born through the aggressive and selfish act of a man sexually violating a teenage girl." Marcus J. Borg, prominent member of the Jesus Seminar, author of numerous books, and co-author of The Meaning of Jesus: Two Visions, who viewed the birth stories as "metaphorical narratives", and stated, "I do not think the virginal conception is historical, and I do not think there was a special star or wise men or shepherds or birth in a stable in Bethlehem. Thus I do not see these stories as historical reports but as literary creations." John Dominic Crossan, prominent member of the Jesus Seminar, author of Jesus: A Revolutionary Biography, who has stated, "I understand the virginal conception of Jesus to be a confessional statement about Jesus' status and not a biological statement about Mary's body. It is later faith in Jesus as an adult retrojected mythologically onto Jesus as an infant." 
Robert Funk, founder of the Jesus Seminar, and author of Honest to Jesus, who has asserted, "We can be certain that Mary did not conceive Jesus without the assistance of human sperm. It is unclear whether Joseph or some other unnamed male was the biological father of Jesus. It is possible that Jesus was illegitimate." Jane Schaberg, feminist biblical scholar and author of The Illegitimacy of Jesus, who contended that Matthew and Luke were aware that Jesus had been conceived illegitimately, probably as a result of rape, and had left hints of that knowledge, even though their main purpose was to explore the theological significance of Jesus' birth. Uta Ranke-Heinemann, who contends that the virgin birth of Jesus was meant—and should be understood—as an allegory of a special initiative of God, comparable to God's creation of Adam, and in line with legends and allegories of antiquity. David Jenkins, Bishop of Durham from 1984 until 1994, was the first senior Anglican clergyman to come to the attention of the UK media for his position that "I wouldn't put it past God to arrange a virgin birth if he wanted. But I don't think he did." Gerd Lüdemann, German New Testament scholar and historian, member of the Jesus Seminar, and author of Virgin Birth? The Real Story of Mary and Her Son Jesus, argued that early Christians had developed the idea of a virgin birth as a later "reaction to the report, meant as a slander but historically correct, that Jesus was conceived or born outside wedlock. ... It has a historical foundation in the fact that Jesus really did have another father than Joseph and was in fact fathered before Mary's marriage, presumably through rape." Robin Meyers, United Church of Christ minister, proponent of Progressive Christianity, and author of Saving Jesus From the Church: How to Stop Worshiping Christ and Start Following Jesus, who asserts that "A beautiful, but obviously contrived, tale is the virgin birth, which may have been used to cover a scandal." 
Sects and denominations The Divine Principle, the textbook of the Unification movement (also called the Unification Church), a new religious movement founded in South Korea, does not include the teaching that Zechariah was the father of Jesus; however some of its members hold that belief. Notably, this view is advanced by Young Oon Kim, citing the work of British liberal theologian Leslie Weatherhead in her book Unification Theology (1980). The Church of Jesus Christ of Latter Day Saints (Strangite), founded by James Jesse Strang, rejects the virgin birth and believes
standards for scheduling daily life, work shifts, and public transportation. Their greater accuracy allowed for the faster pace of life which was necessary for the Industrial Revolution. The home pendulum clock was replaced by less-expensive, synchronous, electric clocks in the 1930s and '40s. Pendulum clocks are now kept mostly for their decorative and antique value. Pendulum clocks must be stationary to operate. Any motion or acceleration will affect the motion of the pendulum, causing inaccuracies, thus necessitating other mechanisms for use in portable timepieces. History The first pendulum clock was invented in 1656 by Dutch scientist and inventor Christiaan Huygens, and patented the following year. Huygens contracted the construction of his clock designs to clockmaker Salomon Coster, who actually built the clock. Huygens was inspired by investigations of pendulums by Galileo Galilei beginning around 1602. Galileo discovered the key property that makes pendulums useful timekeepers: isochronism, which means that the period of swing of a pendulum is approximately the same for different sized swings. In 1637, Galileo described to his son a mechanism which could keep a pendulum swinging, which has been called the first pendulum clock design. It was partly constructed by his son in 1649, but neither lived to finish it. The introduction of the pendulum, the first harmonic oscillator used in timekeeping, increased the accuracy of clocks enormously, from about 15 minutes per day to 15 seconds per day, leading to their rapid spread as existing 'verge and foliot' clocks were retrofitted with pendulums. These early clocks, due to their verge escapements, had wide pendulum swings of 80–100°. 
In his 1673 analysis of pendulums, Horologium Oscillatorium, Huygens showed that wide swings made the pendulum inaccurate, causing its period, and thus the rate of the clock, to vary with unavoidable variations in the driving force provided by the movement. Clockmakers' realization that only pendulums with small swings of a few degrees are isochronous motivated the invention of the anchor escapement by Robert Hooke around 1658, which reduced the pendulum's swing to 4–6°. The anchor became the standard escapement used in pendulum clocks. In addition to increased accuracy, the anchor's narrow pendulum swing allowed the clock's case to accommodate longer, slower pendulums, which needed less power and caused less wear on the movement. The seconds pendulum (also called the Royal pendulum), 0.994 m (39.1 in) long, in which the time period is two seconds, became widely used in quality clocks. The long narrow clocks built around these pendulums, first made by William Clement around 1680, became known as grandfather clocks. The increased accuracy resulting from these developments caused the minute hand, previously rare, to be added to clock faces beginning around 1690. The 18th and 19th century wave of horological innovation that followed the invention of the pendulum brought many improvements to pendulum clocks. The deadbeat escapement invented in 1675 by Richard Towneley and popularized by George Graham around 1715 in his precision "regulator" clocks gradually replaced the anchor escapement and is now used in most modern pendulum clocks. Observation that pendulum clocks slowed down in summer brought the realization that thermal expansion and contraction of the pendulum rod with changes in temperature was a source of error. This was solved by the invention of temperature-compensated pendulums; the mercury pendulum by Graham in 1721 and the gridiron pendulum by John Harrison in 1726. 
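The two quantitative claims above, the 0.994 m seconds pendulum and the need for small swings, both follow from the small-angle pendulum formula T ≈ 2π√(L/g) and its first amplitude correction. A short sketch (standard physics, not from the source; standard gravity g = 9.80665 m/s² is assumed):

```python
import math

G = 9.80665  # standard gravity, m/s^2 (assumed value)

def pendulum_length(period_s: float) -> float:
    """Length of a simple pendulum with the given small-angle period.
    From T = 2*pi*sqrt(L/g)  =>  L = g * T^2 / (4*pi^2)."""
    return G * period_s ** 2 / (4 * math.pi ** 2)

def circular_error(semi_amplitude_deg: float) -> float:
    """Fractional period increase over the small-angle value, using the
    first correction term of T = T0 * (1 + theta^2/16 + ...)."""
    theta = math.radians(semi_amplitude_deg)
    return theta ** 2 / 16

# Seconds pendulum: two-second period (one second per swing)
print(f"{pendulum_length(2.0):.3f} m")   # -> 0.994 m, as in the text

# Verge escapement (~50 deg semi-amplitude) vs anchor escapement (~3 deg)
print(f"{circular_error(50):.4f}")   # ~0.048, period almost 5% longer
print(f"{circular_error(3):.6f}")    # ~0.0002, nearly isochronous
```

Because the period grows with amplitude, any fluctuation in drive force changes the amplitude and hence the rate, which is why reducing the swing from 80–100° to a few degrees mattered so much for accuracy.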
With these improvements, by the mid-18th century precision pendulum clocks achieved accuracies of a few seconds per week. Until the 19th century, clocks were handmade by individual craftsmen and were very expensive. The rich ornamentation of pendulum clocks of this period indicates their value as status symbols of the wealthy. The clockmakers of each country and region in Europe developed their own distinctive styles. By the 19th century, factory production of clock parts gradually made pendulum clocks affordable by middle-class families. During the Industrial Revolution, the faster pace of life and scheduling of shifts and public transportation like trains depended on the more accurate timekeeping made possible by the pendulum. Daily life was organized around the home pendulum clock. More accurate pendulum clocks, called regulators, were installed in places of business and railroad stations and used to schedule work and set other clocks. The need for extremely accurate timekeeping in celestial navigation to determine longitude on ships during long sea voyages drove the development of the most accurate pendulum clocks, called astronomical regulators. These precision instruments, installed in naval observatories and kept accurate within a second by observation of star transits overhead, were used to set marine chronometers on naval and commercial vessels. Beginning in the 19th century, astronomical regulators in naval observatories served as primary standards for national time distribution services that distributed time signals over telegraph wires. From 1909, US National Bureau of Standards (now NIST) based the US time standard on Riefler pendulum clocks, accurate to about 10 milliseconds per day. In 1929 it switched to the Shortt-Synchronome free pendulum clock before phasing in quartz standards in the 1930s. With an error of less than one second per year, the Shortt was the most accurate commercially produced pendulum clock. 
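To compare the quoted accuracies on a common scale, they can be converted to fractional rate error (a rough consistency check, not from the source; the 15 min/day figure for verge-and-foliot clocks is the one given in the history section above):

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

verge   = 15 * 60 / SECONDS_PER_DAY   # pre-pendulum: ~15 min/day
riefler = 0.010 / SECONDS_PER_DAY     # Riefler: ~10 ms/day
shortt  = 1.0 / SECONDS_PER_YEAR      # Shortt: under ~1 s/year

print(f"verge and foliot: {verge:.1e}")    # -> 1.0e-02
print(f"Riefler:          {riefler:.1e}")  # -> 1.2e-07
print(f"Shortt:           {shortt:.1e}")   # -> 3.2e-08
```

On this scale the Shortt clock is roughly five orders of magnitude better than the clocks the pendulum originally replaced.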
Pendulum clocks remained the world standard for accurate timekeeping for 270 years, until the invention of the quartz clock in 1927, and were used as time standards through World War II. The French Time Service included pendulum clocks in its ensemble of standard clocks until 1954. The home pendulum clock began to be replaced as domestic timekeeper during the 1930s and 1940s by the synchronous electric clock, which kept more accurate time because it was synchronized to the oscillation of the electric power grid. The most accurate experimental pendulum clock ever made may be the Littlemore Clock built by Edward T. Hall in the 1990s (donated in 2003 to the National Watch and Clock Museum, Columbia, Pennsylvania, USA). 
Mechanism The mechanism which runs a mechanical clock is called the movement. The movements of all mechanical pendulum clocks have these five parts: 
- A power source: either a weight on a cord or chain that turns a pulley or sprocket, or a mainspring. 
- A gear train (wheel train) that steps up the speed of the power so that the pendulum can use it. The gear ratios of the gear train also divide the rotation rate down to give wheels that rotate once every hour and once every 12 hours, to turn the hands of the clock. 
- An escapement that gives the pendulum precisely timed impulses to keep it swinging, and which releases the gear train wheels to move forward a fixed amount at each swing. This is the source of the "ticking" sound of an operating pendulum clock. 
- The pendulum, a weight on a rod, which is the timekeeping element of the clock. 
- An indicator or dial that records how often the escapement has rotated and therefore how much time has passed, usually a traditional clock face with rotating hands. 
Additional functions in clocks besides basic timekeeping are called complications. More elaborate pendulum clocks may include these complications: 
- Striking train: strikes a bell or gong on every hour, with the number of strikes equal to the number of the hour. 
Some clocks will also signal the half hour with a single strike. More elaborate types, technically called chiming clocks, strike on the quarter hours, and may play melodies or Cathedral chimes, usually Westminster quarters. 
- Calendar dials: show the day, date, and sometimes month. 
- Moon phase dial: shows the phase of the moon, usually with a painted picture of the moon on a rotating disk. 
- Equation of time dial: this rare complication was used in early clocks. 
The clock's rate is adjusted by a nut under the pendulum bob which moves the bob up or down on its rod. Moving the bob up shortens the pendulum, reducing the pendulum's period, so the clock gains time. In some pendulum clocks, fine adjustment is done with an auxiliary adjustment, which may be a small weight that is moved up or down the pendulum rod. In some master clocks and tower clocks, adjustment is accomplished by a small tray mounted on the rod where small weights are placed or removed to change the effective length, so the rate can be adjusted without stopping the clock. The period of a pendulum increases slightly with the width (amplitude) of its swing. This source of error increases with amplitude, so when limited to small swings of a few degrees the pendulum is nearly isochronous: its period is independent of changes in amplitude. Therefore, the swing of the pendulum in clocks is limited to 2° to 4°. 
Temperature compensation A major source of error in pendulum clocks is thermal expansion; the pendulum rod changes in length slightly with changes in temperature, causing changes in the rate of the clock. An increase in temperature causes the rod to expand, making the pendulum longer, so its period increases and the clock loses time. Many older quality clocks used wooden pendulum rods to reduce this error, as wood expands less than metal. The first pendulum to correct for this error was the mercury pendulum invented by Graham in 1721, which was used in precision regulator clocks into the 20th century. 
These had a bob consisting of a container of the liquid metal mercury. An increase in temperature would cause the pendulum rod to expand, but the mercury in the container would also expand and its level would rise slightly in the container, moving the centre of gravity of the pendulum up toward the pivot. By using the correct amount of mercury, the centre of gravity of the pendulum remained at a constant height, and thus its period remained constant, despite changes in temperature. The most widely used temperature-compensated pendulum was the gridiron pendulum invented by John Harrison around 1726. This consisted of a "grid" of parallel rods of a high-thermal-expansion metal such as zinc or brass and a low-thermal-expansion metal such as steel. If properly combined, the length change of the high-expansion rods compensated for the length change of the low-expansion rods, again achieving a constant period of the pendulum with temperature changes. This type of pendulum became so associated with quality that decorative "fake" gridirons, with no actual temperature-compensation function, are often seen on pendulum clocks. Beginning around 1900, some of the highest precision scientific clocks had pendulums made of ultra-low-expansion materials such as the nickel-steel alloy Invar or fused silica, which required very little compensation for the effects of temperature. 
Atmospheric drag The viscosity of the air through which the pendulum swings will vary with atmospheric pressure, humidity, and temperature. This drag also requires power that could otherwise be applied to extending the time between windings. Traditionally the pendulum bob is made with a narrow streamlined lens shape to reduce air drag, which is where most of the driving power goes in a quality clock. 
In the late 19th century and early 20th century, pendulums for precision regulator clocks in astronomical observatories were often operated in a chamber that had been pumped to a low pressure to reduce drag and make the pendulum's operation even more accurate by avoiding changes in atmospheric pressure. Fine adjustment of the rate of the clock could be made by slight changes to the internal pressure in the sealed housing. 
Leveling and "beat" To keep time accurately, pendulum clocks must be absolutely level. If they are not, the pendulum swings more to one side than the other, upsetting the symmetrical operation of the escapement. This condition can often be heard in the ticking sound of the clock. The ticks or "beats" should be at precisely equally spaced intervals, giving the sound "tick...tock...tick...tock"; if instead they have the sound "tick-tock...tick-tock..." the clock is out of beat and needs to be leveled. This problem can easily cause the clock to stop working, and is one of the most common reasons for service calls. A spirit level or watch timing machine can achieve a higher accuracy than relying on the sound of the beat; precision regulators often have a built-in spirit level for the task. Older freestanding clocks often have feet with adjustable screws to level them; more recent ones have a leveling adjustment in the movement. Some modern pendulum clocks have 'auto-beat' or 'self-regulating beat adjustment' devices, and don't need this adjustment. 
Local gravity Since the pendulum rate will increase with an increase in gravity, and local gravity varies with latitude and elevation on Earth, precision pendulum clocks must be readjusted to keep time after a move. For example, a pendulum clock moved from sea level to a significantly higher altitude will lose time, on the order of 16 seconds per day. With the most accurate pendulum clocks, even moving the clock to the top of a tall building would cause it to lose measurable time due to lower gravity. 
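Since T ∝ 1/√g, a small fractional change in gravity shifts the clock's rate by half that fraction. A sketch of the altitude effect (standard physics, not from the source): the free-air gradient of roughly 3.086 µm/s² per metre and the 1,200 m rise are assumptions chosen for illustration, and happen to reproduce a daily loss close to the 16 seconds per day quoted above.

```python
G0 = 9.80665          # sea-level gravity, m/s^2 (assumed)
FREE_AIR = 3.086e-6   # free-air gravity gradient, (m/s^2) per metre (assumed)

def daily_loss_seconds(altitude_rise_m: float) -> float:
    """Seconds per day a pendulum clock loses after a rise in altitude.
    T ~ 1/sqrt(g), so dT/T = -(1/2) * dg/g; g decreases with altitude,
    so the period lengthens and the clock runs slow."""
    dg = FREE_AIR * altitude_rise_m
    return 0.5 * (dg / G0) * 86_400

print(f"{daily_loss_seconds(1200):.1f} s/day")  # -> 16.3 s/day
```

The same relation explains the tall-building remark: even a rise of tens of metres produces a loss of a few hundredths of a second per day, measurable with the best pendulum clocks.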
Torsion pendulum Also called a torsion-spring pendulum, this is a wheel-like mass (most often four spheres on cross spokes) suspended from a vertical strip (ribbon) of spring steel, used as the regulating mechanism in torsion pendulum clocks. Rotation of the mass winds and unwinds the suspension spring, with the energy impulse applied to the top of the spring. The main advantage of this type of pendulum is its low energy use; with a period of 12–15 seconds, compared to the gravity swing pendulum's period of 0.5–2 seconds, it is possible to make clocks that need to be wound only every 30 days, or even only once a year or more. Since the restoring force is provided by the elasticity of the spring, which varies with temperature, it is more affected by temperature changes than a gravity-swing pendulum. The most accurate torsion clocks use a spring of Elinvar, which has a low temperature coefficient of elasticity. A torsion pendulum clock requiring only annual winding is sometimes called a "400-day clock" or "anniversary clock", sometimes given as a wedding gift. Torsion pendulums are also used in "perpetual" clocks which do not need winding, as their mainspring is kept wound by changes in atmospheric temperature and pressure with a bellows arrangement. The Atmos clock, one example, uses a torsion pendulum with a long oscillation period of 60 seconds. Escapement The escapement is a mechanical linkage that converts the force from the clock's wheel train into impulses that keep the pendulum swinging back and forth. It is the part that makes the "ticking" sound in a working pendulum clock. Most escapements consist of a wheel with pointed teeth called the escape wheel, which is turned by the clock's wheel train, and surfaces the teeth push against, called pallets. During most of the pendulum's swing the wheel is prevented from turning because a tooth is resting against one of the pallets; this is called the "locked" state.
With each swing of the pendulum, a pallet releases a tooth of the escape wheel. The wheel rotates forward a fixed amount until a tooth catches on the other pallet. These releases allow the clock's wheel train to advance a fixed amount with each swing, moving the hands forward at a constant rate, controlled by the pendulum. Although the escapement is necessary to keep the pendulum swinging, its impulses also disturb the pendulum's natural motion.
be produced in response to input conditions within a limited time, otherwise unintended operation will result. Invention and early development The PLC originated in the late 1960s in the automotive industry in the US and was designed to replace relay logic systems. Before then, control logic for manufacturing was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. The hard-wired nature made it difficult for design engineers to alter the automation process. Changes would require rewiring and careful updating of the documentation. If even one wire were out of place, or one relay failed, the whole system would become faulty. Often technicians would spend hours troubleshooting by examining the schematics and comparing them to existing wiring. When general-purpose computers became available, they were soon applied to control logic in industrial processes. These early computers were unreliable and required specialist programmers and strict control of working conditions, such as temperature, cleanliness, and power quality. The PLC provided several advantages over earlier automation systems. It tolerated the industrial environment better than computers, and was more reliable and compact and required less maintenance than relay systems. It was easily extensible with additional I/O modules, while relay systems required complicated hardware changes in case of reconfiguration. This allowed for easier iteration over manufacturing process design. With a simple programming language focused on logic and switching operations, it was more user-friendly than computers using general-purpose programming languages. It also permitted its operation to be monitored. Early PLCs were programmed in ladder logic, which strongly resembled a schematic diagram of relay logic. This program notation was chosen to reduce training demands for the existing technicians. Other PLCs used a form of instruction list programming, based on a stack-based logic solver.
Modicon In 1968, GM Hydramatic (the automatic transmission division of General Motors) issued a request for proposals for an electronic replacement for hard-wired relay systems based on a white paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates of Bedford, Massachusetts. The result was the first PLC, built in 1969 and designated the 084 because it was Bedford Associates' eighty-fourth project. Bedford Associates started a company dedicated to developing, manufacturing, selling, and servicing this new product, which they named Modicon (standing for modular digital controller). One of the people who worked on that project was Dick Morley, who is considered to be the "father" of the PLC. The Modicon brand was sold in 1977 to Gould Electronics and later to Schneider Electric, the current owner. About this same time, Modicon created Modbus, a data communications protocol used with its PLCs. Modbus has since become a standard open protocol commonly used to connect many industrial electrical devices. One of the first 084 models built is now on display at Schneider Electric's facility in North Andover, Massachusetts. It was presented to Modicon by GM when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until the 984 made its appearance. Allen-Bradley In a parallel development, Odo Josef Struger is sometimes known as the "father of the programmable logic controller" as well. He was involved in the invention of the Allen-Bradley programmable logic controller and is credited with inventing the PLC initialism. Allen-Bradley (now a brand owned by Rockwell Automation) became a major PLC manufacturer in the United States during his tenure. Struger played a leadership role in developing IEC 61131-3 PLC programming language standards.
Early methods of programming Many early PLCs were not capable of graphical representation of the logic, and so it was instead represented as a series of logic expressions in some kind of Boolean format, similar to Boolean algebra. As programming terminals evolved, it became more common for ladder logic to be used, because it was a familiar format used for electro-mechanical control panels. Newer formats, such as state logic and Function Block (which is similar to the way logic is depicted when using digital integrated logic circuits) exist, but they are still not as popular as ladder logic. A primary reason for this is that PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the person writing the logic to see any issues with the timing of the logic sequence more easily than would be possible in other formats. Up to the mid-1990s, PLCs were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. Some proprietary programming terminals displayed the elements of PLC programs as graphic symbols, but plain ASCII character representations of contacts, coils, and wires were common. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were minimal due to a lack of memory capacity. The oldest PLCs used non-volatile magnetic core memory. Architecture A PLC is an industrial microprocessor-based controller with programmable memory used to store program instructions and various functions. 
It consists of: a processor unit (CPU), which interprets inputs, executes the control program stored in memory and sends output signals; a power supply unit, which converts AC voltage to DC; a memory unit storing data from inputs and the program to be executed by the processor; an input and output interface, where the controller receives and sends data from/to external devices; and a communications interface to receive and transmit data on communication networks from/to remote PLCs. PLCs require a programming device, which is used to develop and later download the created program into the memory of the controller. Modern PLCs generally contain a real-time operating system, such as OS-9 or VxWorks. Mechanical design There are two types of mechanical design for PLC systems. A single box, or brick, is a small programmable controller that fits all units and interfaces into one compact casing, although, typically, additional expansion modules for inputs and outputs are available. The second design type, a modular PLC, has a chassis (also called a rack) that provides space for modules with different functions, such as power supply, processor, selection of I/O modules and communication interfaces, which all can be customized for the particular application. Several racks can be administered by a single processor and may have thousands of inputs and outputs. Either a special high-speed serial I/O link or a comparable communication method is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants. Options are also available to mount I/O points directly to the machine and utilize quick-disconnect cables to sensors and valves, saving time for wiring and replacing components. Discrete and analog signals Discrete (digital) signals can take only an on or off value (1 or 0, true or false). Examples of devices providing a discrete signal include limit switches, photoelectric sensors and encoders.
Discrete signals are sent using either voltage or current, where specific extreme ranges are designated as on and off. For example, a controller might use a 24 V DC input with values above 22 V DC representing on, values below 2 V DC representing off, and intermediate values undefined. Analog signals can use voltage or current that is proportional to the size of the monitored variable and can take any value within their scale. Pressure, temperature, flow, and weight are often represented by analog signals. These are typically interpreted as integer values with various ranges of accuracy depending on the device and the number of bits available to store the data. For example, an analog 0 to 10 V or 4–20 mA current loop input would be converted into an integer value of 0 to 32,767. The PLC will take this value and transpose it into the desired units of the process so the operator or program can read it. Proper integration will also include filter times to reduce noise as well as high and low limits to report faults. Current inputs are less sensitive to electrical noise (e.g. from welders or motor starts) than voltage inputs.
PLCs are built to withstand harsh industrial conditions (such as dust, moisture, heat, cold), while offering extensive input/output (I/O) to connect the PLC to sensors and actuators. PLC input can include simple digital elements such as limit switches, analog variables from process sensors (such as temperature and pressure), and more complex data such as that from positioning or machine vision systems. PLC output can include elements such as indicator lamps, sirens, electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a fieldbus or computer network that plugs into the PLC. The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking.
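The raw-count-to-engineering-units transposition described earlier is a simple linear map. A minimal sketch in plain Python; the 32,767-count full scale matches the example in the text, while the out-of-range fault handling is an illustrative assumption rather than any vendor's behavior.

```python
RAW_MAX = 32767  # full-scale integer count from the analog input module

def scale(raw, eng_lo, eng_hi):
    """Linearly transpose a raw analog-input count (0..RAW_MAX) into
    engineering units, as the PLC program would."""
    if not 0 <= raw <= RAW_MAX:
        # An out-of-range count on a 4-20 mA loop usually indicates a
        # wiring fault; a real program would latch an alarm here.
        raise ValueError(f"input count {raw} out of range")
    return eng_lo + (raw / RAW_MAX) * (eng_hi - eng_lo)

# A half-scale count on a hypothetical 0-100 degC temperature transmitter:
print(round(scale(16384, 0.0, 100.0), 2))  # -> 50.0
```

In practice the scaled value would also pass through the filter times and high/low limit checks mentioned above before being used by the rest of the program.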
The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to desktop computers. PLC-like programming combined with remote I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain applications. Desktop computer controllers have not been generally accepted in heavy industry because the desktop computers run on less stable operating systems than PLCs, and because the desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the controller may not always respond to changes of input status with the consistency in timing expected from PLCs. Desktop logic applications find use in less critical situations, such as laboratory automation and use in small facilities where the application is less demanding and critical. Basic functions The most basic function of a programmable controller is to emulate the functions of electromechanical relays. Discrete inputs are given a unique address, and a PLC instruction can test if the input state is on or off. Just as a series of relay contacts performs a logical AND function, not allowing current to pass unless all the contacts are closed, so a series of "examine if on" instructions will energize its output storage bit if all the input bits are on. Similarly, a parallel set of instructions will perform a logical OR. In an electromechanical relay wiring diagram, a group of contacts controlling one coil is called a "rung" of a "ladder diagram", and this concept is also used to describe PLC logic. Some models of PLC limit the number of series and parallel instructions in one "rung" of logic.
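The relay-emulation rules above (series contacts behave as AND, parallel branches as OR) can be modeled in a few lines. A minimal sketch in Python; the rung layout and all contact names are invented for illustration and are not from any vendor's instruction set.

```python
def series(*contacts):
    """A series of examine-if-on contacts: the rung segment conducts
    only if every contact is true (logical AND)."""
    return all(contacts)

def parallel(*branches):
    """Parallel branches: the rung conducts if any branch
    conducts (logical OR)."""
    return any(branches)

# Hypothetical rung: Coil = (StartPB AND NOT StopPB) OR (Auto AND Permissive)
def coil(start_pb, stop_pb, auto, permissive):
    return parallel(
        series(start_pb, not stop_pb),   # manual branch
        series(auto, permissive),        # automatic branch
    )

print(coil(start_pb=True, stop_pb=False, auto=False, permissive=False))  # True
print(coil(start_pb=False, stop_pb=False, auto=True, permissive=True))   # True
print(coil(start_pb=False, stop_pb=False, auto=True, permissive=False))  # False
```

The normally closed Stop contact is written as `not stop_pb`, mirroring the "examine if off" instruction that complements "examine if on".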
The output of each rung sets or clears a storage bit, which may be associated with a physical output address or which may be an "internal coil" with no physical connection. Such internal coils can be used, for example, as a common element in multiple separate rungs. Unlike physical relays, there is usually no limit to the number of times an input, output or internal coil can be referenced in a PLC program. Some PLCs enforce a strict left-to-right, top-to-bottom execution order for evaluating the rung logic. This is different from electro-mechanical relay contacts, which, in a sufficiently complex circuit, may either pass current left-to-right or right-to-left, depending on the configuration of surrounding contacts. The elimination of these "sneak paths" is either a bug or a feature, depending on programming style. More advanced instructions of the PLC may be implemented as functional blocks, which carry out some operation when enabled by a logical input and which produce outputs to signal, for example, completion or errors, while manipulating variables internally that may not correspond to discrete logic. Communication PLCs use built-in ports such as USB, Ethernet, RS-232, RS-485, or RS-422 to communicate with external devices (sensors, actuators) and systems (programming software, SCADA, HMI). Communication is carried over various industrial network protocols, like Modbus or EtherNet/IP. Many of these protocols are vendor-specific. PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for HMI devices such as keypads or PC-type workstations. Formerly, some manufacturers offered dedicated communication modules as an add-on function where the processor had no network connection built-in.
User interface PLCs may need to interact with people for the purpose of configuration, alarm reporting, or everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple system may use buttons and lights to interact with the user. Text displays are available as well as graphical touch screens. More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface. Process of a scan cycle A PLC works in a program scan cycle, where it executes its program repeatedly. The simplest scan cycle consists of 3 steps: read inputs, execute the program, write outputs. The program follows the sequence of instructions. It typically takes a time span of tens of milliseconds for the processor to evaluate all the instructions and update the status of all outputs. If the system contains remote I/O—for example, an external rack with I/O modules—then that introduces additional uncertainty in the response time of the PLC system. As PLCs became more advanced, methods were developed to change the sequence of ladder execution, and subroutines were implemented. This enhanced programming could be used to save scan time for high-speed processes; for example, parts of the program used only for setting up the machine could be segregated from those parts required to operate at higher speed. Newer PLCs now have the option to run the logic program synchronously with the IO scanning. This means that IO is updated in the background and the logic reads and writes values as required during the logic scanning. Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow predictable performance. 
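The three-step scan cycle above (read inputs, execute the program, write outputs) can be sketched as a loop over an input snapshot. The example below uses a classic start/stop seal-in rung to show why the snapshot matters: the logic sees one consistent image of the inputs per scan. Plain Python; all signal names and the scripted input sequence are invented for illustration.

```python
def solve_logic(inputs, state):
    """One pass of the user program: a seal-in rung.
    Motor = (Start OR Motor) AND NOT Stop."""
    state["motor"] = (inputs["start"] or state["motor"]) and not inputs["stop"]
    return {"motor": state["motor"]}

def scan(read_inputs, write_outputs, state, cycles):
    for _ in range(cycles):
        inputs = read_inputs()                 # 1. snapshot the physical inputs
        outputs = solve_logic(inputs, state)   # 2. execute the program
        write_outputs(outputs)                 # 3. update the physical outputs

# Drive the controller with a scripted sequence of input images:
sequence = iter([
    {"start": True,  "stop": False},   # operator presses Start
    {"start": False, "stop": False},   # Start released: seal-in holds
    {"start": False, "stop": True},    # Stop pressed: motor drops out
])
log = []
state = {"motor": False}
scan(lambda: next(sequence), log.append, state, cycles=3)
print(log)  # [{'motor': True}, {'motor': True}, {'motor': False}]
```

A real PLC repeats this loop continuously, typically completing each pass in tens of milliseconds, which is why inputs shorter than one scan need the special-purpose modules mentioned above.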
Precision timing modules, or counter modules for use with shaft encoders, are used where the scan time would be too long to reliably count pulses or detect the sense of rotation of an encoder. This allows even a relatively slow PLC to still interpret the counted values to control a machine, as the accumulation of pulses is done by a dedicated module that is unaffected by the speed of program execution. Security In his book from 1998, E. A. Parr pointed out that even though most programmable controllers require physical keys and passwords, the lack of strict access control and version control systems, as well as an easy-to-understand programming language, make it likely that unauthorized changes to programs will happen and remain unnoticed. Prior to the discovery of the Stuxnet computer worm in June 2010, security of PLCs received little attention. Modern programmable controllers generally contain a real-time operating system, which can be vulnerable to exploits in a similar way to desktop operating systems like Microsoft Windows. PLCs can also be attacked by gaining control of a computer they communicate with. These concerns have grown as networking becomes more commonplace in the PLC environment, connecting the previously separate plant floor networks and office networks. In February 2021, Rockwell Automation publicly disclosed a critical vulnerability affecting its Logix controller family. A secret cryptographic key used to verify communication between the PLC and workstation could be extracted from Studio 5000 Logix Designer programming software and used to remotely change the program code and configuration of a connected controller. The vulnerability was given a severity score of 10 out of 10 on the CVSS vulnerability scale. At the time of writing, the mitigation of the vulnerability was to limit network access to affected devices.
Safety PLCs In recent years, "safety" PLCs have become popular, either as standalone models or as functionality and safety-rated hardware added to existing controller architectures (Allen-Bradley GuardLogix, Siemens F-series, etc.). These differ from conventional PLC types in being suitable for safety-critical applications, for which PLCs have traditionally been supplemented with hard-wired safety relays, and in having areas of memory dedicated to the safety instructions. The standard for the safety level is the Safety Integrity Level (SIL). For example, a safety PLC might be used to control access to a robot cell with trapped-key access, or perhaps to manage the shutdown response to an emergency stop on a conveyor production line. Such PLCs typically have a restricted regular instruction set augmented with safety-specific instructions designed to interface with emergency stops, light screens, and so forth. The flexibility that such systems offer has resulted in rapid growth of demand for these controllers. PLC compared with other control systems PLCs are well adapted to a range of automation tasks. These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical.
This is due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and where the non-recurring engineering charges are spread over thousands or millions of units. Programmable controllers are widely used in motion, positioning, or torque control. Some manufacturers produce motion control units to be integrated with a PLC so that G-code (as used by a CNC machine) can be used to instruct machine movements. PLC Chip / Embedded Controller For small machines produced in low or medium volume, there are PLC chips that can execute PLC languages such as Ladder or Flow-Chart/Grafcet. They are similar to traditional PLCs, but their small size allows developers to design them into custom printed circuit boards like a microcontroller, without computer programming knowledge.
had worked with DNA mappers James Watson and Francis Crick, and to whom David credits his sense of humor. He has two siblings, a brother Wally, seven years his junior, who works as an IT Systems Administrator in the financial sector, and a younger sister named Beth. David first became interested in comics when he was about five years old, reading copies of Harvey Comics' Casper and Wendy in a barbershop. He became interested in superheroes through the Adventures of Superman TV series. Although David's parents approved of his reading Harvey Comics and comics featuring Disney characters, they did not approve of superhero books, especially those published by Marvel Comics, feeling that characters that looked like monsters, such as the Thing or the Hulk, or who wore bug-eyed costumes, like Spider-Man, did not appear heroic. As a result, David read those comics in secret, beginning with his first Marvel book, Fantastic Four Annual #3 (November 1965), which saw the wedding of Mister Fantastic and the Invisible Woman. His parents eventually allowed him to start reading superhero titles, his favorite of which was Superman. He cites John Buscema as his favorite pre-1970s artist. David attended his first comic book convention around the time that Jack Kirby's New Gods premiered, after asking his father to take him to one of Phil Seuling's shows in New York, where David obtained Kirby's autograph, his first encounter with a comics professional. David's earliest interest in writing came through the journalism work of his father, Gunter, who sometimes reviewed movies and took young Peter along (if it was age-appropriate). While Gunter wrote his reviews back at the newspaper's office, David wrote his own, portions of which sometimes found their way into Gunter's published reviews. 
David began to entertain the notion of becoming a professional writer at age twelve, buying a copy of The Guide to the Writer's Market, and subscribing to similarly themed magazines, in the hopes of becoming a reporter. David lived in Bloomfield, New Jersey, in a small house at 11 Albert Terrace, and attended Demarest Elementary School. His family later moved to Verona, New Jersey, where he spent his adolescence. By the time he entered his teens, he had lost interest in comic books, feeling he had outgrown them. David's best friend in junior high and his first year of high school, Keith, was gay, and David has described how both of them were targets of ostracism and harassment from homophobes. Although his family eventually moved to Pennsylvania, his experiences in Verona soured him on that town and shaped his liberal sociopolitical positions regarding LGBT issues. He later made Verona the home location of the villain Morgan le Fay in his novel Knight Life, and has often discussed his progressive views on LGBT issues in his column and on his blog. David's interest in comics was rekindled when he saw a copy of Superman vs. Muhammad Ali (1978) while passing a newsstand, and later, X-Men #95 (October 1975), and discovered in that latter book the "All-New, All-Different" team that had first appeared in Giant-Size X-Men #1 (May 1975). These two books were the first comics he had purchased in years. A seminal moment in the course of his aspirations occurred when he met writer Stephen King at a book signing, and told him that he was an aspiring writer. King signed David's copy of Danse Macabre with the inscription "Good luck with your writing career," which David now inscribes himself in books presented to him by fans who tell him the same thing. Other authors that David cites as influences include Harlan Ellison, Arthur Conan Doyle, Robert B. Parker, Neil Gaiman, Terry Pratchett, Robert Crais and Edgar Rice Burroughs.
Specific books he has mentioned as favorites include To Kill a Mockingbird, Tarzan of the Apes, The Princess Bride, The Essential Ellison, A Confederacy of Dunces, Adams Versus Jefferson, and Don Quixote. David has singled out Ellison in particular as a writer whom he has tried to emulate. David attended New York University, where he graduated with a Bachelor of Arts degree in journalism. Career Early work David's first professional assignment was covering the World Science Fiction Convention held in Washington in 1974 for the Philadelphia Bulletin. David eventually gravitated towards fiction after his attempts at journalism did not meet with success. His first published fiction appeared in Asimov's Science Fiction in 1980. He sold an op-ed piece to The New York Times, but overall his submissions that met with rejection far outnumbered those accepted. Comics career 1980s David eventually gave up on a career in writing, and came to work in book publishing. His first publishing job was for the E.P. Dutton imprint Elsevier/Nelson, where he worked mainly as an assistant to the editor-in-chief. He later worked in sales and distribution for Playboy Paperbacks. He subsequently worked for five years in Marvel Comics' Sales Department, first as Assistant Direct Sales Manager under Carol Kalish, who hired him, and then succeeding Kalish as Sales Manager. During this time he made some cursory attempts to sell stories, including submission of some Moon Knight plots to Dennis O'Neil, but his efforts were unfruitful. Three years into David's tenure as Direct Sales Manager, Jim Owsley became editor of the Spider-Man titles. Although crossing over from sales into editorial was considered a conflict of interest in the Marvel offices, Owsley, whom David describes as a "maverick," was impressed with how David had not previously hesitated to work with him when Owsley was an assistant editor under Larry Hama. 
When Owsley became an editor, he purchased a Spider-Man story from David, which appeared in The Spectacular Spider-Man #103 (June 1985). Owsley subsequently purchased from David "The Death of Jean DeWolff", a violent murder mystery darker in tone than the usually lighter Spider-Man stories that ran in issues #107–110 (October 1985 – January 1986) of that title. Responding to charges of conflict of interest, David made a point of not discussing editorial matters with anyone during his 9-to-5 hours as Direct Sales Manager, and decided not to exploit his position as Sales Manager by promoting the title. Although David attributes the story's poor sales to this decision, he asserts that such crossing over from Sales to Editorial is now common. In the Marvel offices, a rumor circulated that it was actually Owsley who was writing the stories attributed to David. Nonetheless, David says he was fired from Spectacular Spider-Man by Owsley due to editorial pressure by Marvel's Editor-in-Chief Jim Shooter, and has commented that the resentment stirred by Owsley's purchase of his stories may have permanently damaged Owsley's career. Months later, Bob Harras offered David The Incredible Hulk, as it was a struggling title that no one else wanted to write, which gave David free rein to do whatever he wanted with the character. During his 12-year run on Hulk, David explored the recurring themes of the Hulk's multiple personality disorder, his periodic changes between the more rageful and less intelligent Green Hulk and the more streetwise, cerebral Gray Hulk, and of being a journeyman hero, which were inspired by The Incredible Hulk #312 (October 1985), in which writer Bill Mantlo (and possibly, according to David, Barry Windsor-Smith) had first established that Banner had suffered childhood abuse at the hands of his father. These aspects of the character were later used in the 2003 feature film adaptation by screenwriter Michael France and director Ang Lee. 
Comic Book Resources credits David with making the formerly poor-selling book "a must-read mega-hit". David collaborated with a number of artists who became fan-favorites on the series, including Todd McFarlane, Dale Keown and Gary Frank. Among the new characters he created during his run on the series were the Riot Squad and the Pantheon. David wrote the first appearance of the Thunderbolts, a team created by Kurt Busiek and Mark Bagley, in The Incredible Hulk #449 (January 1997).<ref>Manning "1990s" in Gilbert (2008), p. 282: "Writer Peter David and artist Mike Deodato, Jr. debuted Marvel's newest superteam, the Thunderbolts in issue 449 of The Incredible Hulk."</ref> It was after he had been freelancing for a year, and into his run on Hulk, that David felt that his writing career had cemented. After putting out feelers at DC Comics, and being offered the job of writing a four-issue miniseries of The Phantom by editor Mike Gold, David quit his sales position to write full-time. David had a brief tenure writing Green Lantern when the character was exclusive to the short-lived anthology series Action Comics Weekly from issues #608–620 in 1988. David took over Dreadstar during its First Comics run, with issue #41 (March 1989) after Jim Starlin left the title, and remained on it until issue #64 (March 1991), the final issue of that run. David's other Marvel Comics work in the late 1980s and 1990s includes runs on Wolverine, the New Universe series Mark Hazzard: Merc and Justice, a run on the original X-Factor, and the futuristic series Spider-Man 2099, about a man in the year 2099 who takes up the mantle of Spider-Man.
Three years into David's tenure as Direct Sales Manager, Jim Owsley became editor of the Spider-Man titles. Although crossing over from sales into editorial was considered a conflict of interest in the Marvel offices, Owsley, whom David describes as a "maverick," was impressed with how David had not previously hesitated to work with him when Owsley was an assistant editor under Larry Hama. When Owsley became an editor, he purchased a Spider-Man story from David, which appeared in The Spectacular Spider-Man #103 (June 1985). Owsley subsequently purchased from David "The Death of Jean DeWolff", a violent murder mystery darker in tone than the usually lighter Spider-Man stories that ran in issues #107–110 (October 1985 – January 1986) of that title. Responding to charges of conflict of interest, David made a point of not discussing editorial matters with anyone during his 9-to-5 hours as Direct Sales Manager, and decided not to exploit his position as Sales Manager by promoting the title. Although David attributes the story's poor sales to this decision, he asserts that such crossing over from Sales to Editorial is now common. In the Marvel offices, a rumor circulated that it was actually Owsley who was writing the stories attributed to David. Nonetheless, David says he was fired from Spectacular Spider-Man by Owsley due to editorial pressure by Marvel's Editor-in-Chief Jim Shooter, and has commented that the resentment stirred by Owsley's purchase of his stories may have permanently damaged Owsley's career. Months later, Bob Harras offered David The Incredible Hulk, as it was a struggling title that no one else wanted to write, which gave David free rein to do whatever he wanted with the character. 
During his 12-year run on Hulk, David explored recurring themes: the Hulk's multiple personality disorder, his periodic changes between the more rageful and less intelligent Green Hulk and the more streetwise, cerebral Gray Hulk, and his role as a journeyman hero. These themes were inspired by The Incredible Hulk #312 (October 1985), in which writer Bill Mantlo (and possibly, according to David, Barry Windsor-Smith) had first established that Banner had suffered childhood abuse at the hands of his father. These aspects of the character were later used in the 2003 feature film adaptation by screenwriter Michael France and director Ang Lee. Comic Book Resources credits David with making the formerly poor-selling book "a must-read mega-hit". David collaborated with a number of artists who became fan favorites on the series, including Todd McFarlane, Dale Keown and Gary Frank. Among the new characters he created during his run on the series were the Riot Squad and the Pantheon. David wrote the first appearance of the Thunderbolts, a team created by Kurt Busiek and Mark Bagley, in The Incredible Hulk #449 (January 1997).<ref>Manning "1990s" in Gilbert (2008), p. 282: "Writer Peter David and artist Mike Deodato, Jr. debuted Marvel's newest superteam, the Thunderbolts in issue 449 of The Incredible Hulk."</ref> It was after he had been freelancing for a year, and into his run on Hulk, that David felt his writing career was cemented. After putting out feelers at DC Comics, and being offered a four-issue The Phantom miniseries by editor Mike Gold, David quit his sales position to write full-time. David had a brief tenure writing Green Lantern when the character was exclusive to the short-lived anthology series Action Comics Weekly, in issues #608–620 in 1988. David took over Dreadstar during its First Comics run with issue #41 (March 1989), after Jim Starlin left the title, and remained on it until issue #64 (March 1991), the final issue of that run. 
David's other Marvel Comics work in the late 1980s and 1990s includes runs on Wolverine, the New Universe series Mark Hazzard: Merc and Justice, a run on the original X-Factor, and the futuristic series Spider-Man 2099, about a man in the year 2099 who takes up the mantle of Spider-Man, the title character of which David co-created. David left X-Factor after 19 issues, and he wrote the first 44 issues of Spider-Man 2099 before quitting that book to protest the firing of editor Joey Cavalieri. The book was cancelled two issues later, along with the entire 2099 line. 

1990s
In 1990, David wrote a seven-issue Aquaman miniseries, The Atlantis Chronicles, for DC Comics, about the history of Aquaman's home of Atlantis, which David has referred to as among the written works of which he is most proud, and which was his first time writing in the full-script format. He later wrote a 1994 Aquaman miniseries, Aquaman: Time and Tide, which led to a relaunched monthly Aquaman series.<ref>David, Peter. "Giving Credit Where Credit is Due, Part 1" peterdavid.net; August 20, 2010; reprinted from Comics Buyer's Guide #1033 (September 3, 1993)</ref> David has criticized various industry practices, including the practice of bagged comics, so-called "poster covers" that showcase a character without indicating anything about the comic's content, the meaninglessness of killing off characters only to eventually revive them, the poor commitment on the part of some to maintaining continuity in shared fictional universes, and the emphasis on gearing monthly comics series toward eventual collection into trade paperbacks. David has opined that consumers' failure to purchase the monthly individual issues in favor of waiting for the trade collections hurts the sales of the monthly, and its chances of being collected at all. A father of four daughters, David has worked on a number of series that feature female leads, such as Supergirl, Fallen Angel and She-Hulk, and has lamented that the American comic book market is not very supportive of such books. 
David has spoken out about fans who are abusive or threatening to creators, and against copyright infringement, particularly that committed through peer-to-peer file sharing and the posting of literary works in their entirety on the Internet without the permission of the copyright holder. On many occasions, he has offered criticisms of specific publishers, as when he criticized Wizard magazine for ageism.<ref>"Did Wizard deserve it?" "But I Digress..." Comics Buyer's Guide #1438; June 8, 2001; Page 58</ref> He has criticized companies for not sufficiently compensating the creators of their long-standing and lucrative characters, such as Marvel Comics for its treatment of Blade creator Marv Wolfman and Archie Comics for its treatment of Josie and the Pussycats creator Dan DeCarlo.<ref>David, Peter. "Slashing away at Slashback" "But I Digress..." Comics Buyer's Guide #1427; March 23, 2001; Page 58</ref> He has criticized other publishers, including Marvel and Image Comics, for various business practices, but has also defended those companies from criticism he feels is unfounded, as when he defended Marvel from a February 17, 1992 Barron's magazine article. He has criticized deletionists on Wikipedia on more than one occasion.<ref>David, Peter. "The Wikipedia Deletionists, Round 2". peterdavid.net. April 23, 2010</ref> On occasion, he has disagreed publicly with specific industry personalities such as Frank Miller and Jim Shooter. Particularly publicized were his disagreements with Spawn creator Todd McFarlane in 1992 and 1993, in the wake of the formation of Image Comics, the company McFarlane co-founded. This came to a head during a public debate between the two at Philadelphia's Comicfest convention in October 1993, moderated by artist George Pérez. McFarlane claimed that Image was not being treated fairly by the media, and by David in particular. 
The three judges, Maggie Thompson, editor of the Comics Buyer's Guide, William Christensen of Wizard Press, and John Danovich of the magazine Hero Illustrated, voted 2–1 in favor of David, with Danovich voting the debate a tie. David has since criticized McFarlane for other business practices, and has engaged in public disagreements with The Comics Journal editor Gary Groth, Erik Larsen, Rob Liefeld, Marvel Editor-in-Chief Joe Quesada, writer/director Kevin Smith, DC Comics Vice President
gas, or liquid at positive pressure
Plenism, or Horror vacui (physics), the concept that "nature abhors a vacuum"
Plenum (meeting), a meeting of a deliberative assembly in which all members are present; contrast with quorum
Plenum space, enclosed spaces (in buildings) used for airflow
Plenum cable, electrical wire permitted in plenum spaces
Corporation, a publisher of scientific books and journals
Plenum (physics), a space completely filled with matter
Undergravel filters, in aquarium filtration, an open space under a layer of gravel
After a bequest of 17th-century Dutch artworks by Lady Michaelis in 1932, the art collection of the Pretoria City Council expanded quickly to include South African works by Henk Pierneef, Pieter Wenning, Frans Oerder, Anton van Wouw and Irma Stern. According to the museum: "As South African museums in Cape Town and Johannesburg already had good collections of 17th, 18th and 19th century European art, it was decided to focus on compiling a representative collection of South African art", making it somewhat unusual compared to its contemporaries. Pretoria houses several performing arts venues, including the South African State Theatre, which hosts opera, musicals, plays and comedy performances. A 9-metre-tall statue of former president Nelson Mandela was unveiled in front of the Union Buildings on 16 December 2013. Since Nelson Mandela's inauguration as South Africa's first majority-elected president, the Union Buildings have come to represent the new 'Rainbow Nation'. Public art in Pretoria has flourished since the 2010 FIFA World Cup, with many areas receiving new public artworks. 

Sport
One of the most popular sports in Pretoria is rugby union. Loftus Versfeld is home to the Blue Bulls, who compete in the domestic Currie Cup, and also to the Bulls in the international United Rugby Championship competition. The Bulls rugby team, which is operated by the Blue Bulls, won the Super Rugby competition in 2007, 2009 and 2010. Loftus Versfeld also hosts the football side Mamelodi Sundowns. Pretoria hosted matches during the 1995 Rugby World Cup, and Loftus Versfeld was used for some matches in the 2010 FIFA World Cup. Association football is one of the most popular sports in the city. Two teams in the city play in South Africa's top-flight football league, the Premier Soccer League: Mamelodi Sundowns and Supersport United. Supersport United were the 2008–09 PSL champions. 
Following the 2011/2012 season, the University of Pretoria F.C. gained promotion to the South African Premier Division, the top domestic league, becoming the third Pretoria-based team in the league. After a poor league finish in the 2015/2016 season, University of Pretoria F.C. were relegated to the National First Division, the second-highest football league in South Africa, in the 2016 Premier Soccer League promotion/relegation play-offs. Cricket is also a popular game in the city. As there is no international cricket stadium in the city, it does not host any top-class cricket tournaments, although nearby Centurion has SuperSport Park, an international cricket stadium that has hosted many important tournaments, such as the 2003 Cricket World Cup, the 2007 ICC World Twenty20, the 2009 IPL and the 2009 ICC Champions Trophy. The franchise team most local to Pretoria is the Titans, although Northerns occasionally play in the city in South Africa's provincial competitions. Many Pretoria-born cricketers have gone on to play for South Africa, including former international captains AB de Villiers and Faf du Plessis. The Pretoria Transnet Blind Cricket Club (PTBCC) is situated in Pretoria and is the biggest blind cricket club in South Africa. Its field, a home of cricket for differently abled players, is at the Transnet Engineering campus on Lynette Street. PTBCC has played many successful blind cricket matches against able-bodied teams such as the South African Indoor Cricket Team and the TuksCricket Junior Academy. Northerns Blind Cricket is the provincial body that governs PTBCC and Filadelfia Secondary School. The Northerns Blind Cricket team won the 40-over National Blind Cricket tournament held in Cape Town in April 2014. The city's Sun Arena at Times Square hosted the NBA Africa Game 2018. 
Places of worship
Places of worship in the city are predominantly Christian churches: the Zion Christian Church, the Apostolic Faith Mission of South Africa, the Assemblies of God, the Baptist Union of Southern Africa (Baptist World Alliance), the Methodist Church of Southern Africa (World Methodist Council), the Anglican Church of Southern Africa (Anglican Communion), the Presbyterian Church of Africa (World Communion of Reformed Churches) and the Roman Catholic Archdiocese of Pretoria (Catholic Church). There are also Muslim mosques and Hindu temples. 

Jewish community
Pretoria has a small Jewish community of around 3,000. Jews have lived in Pretoria since its foundation in the 19th century and played an important role in its industrial and economic growth. A Mr. De Vries, the first Jewish inhabitant of Pretoria, was a prominent citizen and prosecutor, a member of the Volksraad and a pioneer of the Afrikaans language. Another famed Jewish Pretorian was Sammy Marks. Other early Jewish settlers, many of them immigrants from Lithuania, were not as educated as De Vries and often did not speak Dutch, Afrikaans or English. Many of them spoke only Yiddish and made a living as shopkeepers in the local retail industry. Most Jewish residents stayed neutral in the Second Boer War, though some joined the South African Republic's army. The first congregation was founded between 1890 and 1895, and in 1898 the first synagogue opened on Paul Kruger Street. A second synagogue, known as the Great Synagogue, opened in 1922. Neither synagogue remains in operation, but a Reformed synagogue, Temple Menorah, opened in the early 1950s. The golden age of Pretoria's Jewish community was the early 20th century, when many Jewish sports clubs, charities and youth groups flourished. After 1948, many Jews left for Cape Town or Johannesburg. 
The synagogue on Paul Kruger Street was purchased by the government in 1952 to become the new home of the High Court, where prominent figures of the anti-apartheid movement were tried: Nelson Mandela, Walter Sisulu and 26 others were prosecuted for treason there from 1 August 1958 to 29 March 1961, and the Rivonia Trial was held there in 1963–1964. Two Jewish schools arose in Pretoria: the Miriam Marks School, founded in 1905, and the Carmel School, which opened in 1959. Only the second, currently also operating as a synagogue, remains. Pretoria's Reformed congregation shares a rabbi with the Johannesburg one, though the synagogue no longer operates and services take place in worshippers' private homes. 

Buddhist community
A Buddhist centre, the Jang Chup Chopel Rigme Centre ("Centre of Light"), was founded in early January 2015 by Duan Pienaar, known by his adopted monastic name Gyalten Nyima, in Waverley, near Pretoria-Moot. Pienaar is the only Afrikaner ordained in the highly selective Tibetan Tantric Buddhist community in Bylakuppe, in southern India. His instructor, Lama Kyabje Choden Rinpoche, is the highest tantric master after the Dalai Lama. Pienaar, who studied Buddhist teachings for twenty years, spent two years in India. 

Coat of arms
The Pretoria civic arms, designed by Dr. Frans Engelenburg, were granted by the College of Arms on 7 February 1907. They were registered with the Transvaal Provincial Administration in March 1953 and at the Bureau of Heraldry in May 1968. The Bureau provided new artwork, in a more modern style, in 1989. The arms were: Gules, on a mimosa tree eradicated proper within an orle of eight bees volant Or, an inescutcheon Or and thereon a Roman praetor seated proper. In layman's terms: a red shield displaying an uprooted mimosa tree surrounded by a border of eight golden bees; superimposed on the tree is a golden shield depicting a Roman praetor. 
The tree represented growth, the bees industry, and the praetor (judge) was a heraldic pun on the name. The crest was a three-towered golden castle; the supporters were an eland and a kudu; and the motto was Praestantia praevaleat Pretoria. The coat of arms went out of favour after the City Council amalgamated with its surrounding councils to form the City of Tshwane Metropolitan Municipality. 

Education
Primary education
Arcadia Primary School, Brooklyn Primary School, Capital Park Primary School, Crawford College, Eduplex Primary School, Glenstantia Primary School, Hamilton Primary School, La Montagne Primary School, Laerskool Anton van Wouw, Laerskool Boerefort, Laerskool Constantiapark, Laerskool Danie Malan, Laerskool Elarduspark, Laerskool Garsfontein, Laerskool Jopie Fourie, Laerskool Louis Leipoldt, Laerskool Lynnwood, Laerskool Magalieskruin, Laerskool Menlopark, Laerskool Meyerspark, Laerskool Monumentpark, Laerskool Queenswood, Laerskool Pretoria-Oos, Laerskool Skuilkrans, Laerskool Tygerpoort, Laerskool Wonderboom, Laerskool Wonderboom-Suid, Lynnwood Privaatskool, Lynnwood Ridge Primary School, Maragon Olympus, Nantes Primary School, Northridge Primary School, Prestige College, Pretoria Preparatory School, Rietondale Primary School, Robert Ricks Primary School, St. Mary's Diocesan School for Girls, St. Paulus Primary School, Stratford Preparatory School, Sunnyside Primary School, Tyger Valley College, Waterkloof House Preparatory School, Waterkloof Primary School, Wespark Primary School and Woodhill College. 

Secondary education
Afrikaanse Hoër Meisieskool, Afrikaanse Hoër Seunskool, Carpe Diem Academy, Christian Brothers' College, Christian Progressive College, Clapham High School, Cornerstone College, Cornwall Hill College, Crawford College, Curro Hazeldean High School, Hatfield Christian School, The Glen High School, Hillview High School, Hoërskool Akasia, Hoërskool C.R. Swart, Hoërskool Centurion, Hoërskool Die Wilgers, Hoërskool Garsfontein, Hoërskool Gerrit Maritz, Hoërskool Hercules, Hoërskool Menlopark, Hoërskool Montana, Hoërskool F.H. Odendaal, Hoërskool Oos-Moot, Hoërskool Overkruin, Hoërskool Silverton, Hoërskool Tuine, Hoërskool Waterkloof, Hoërskool Wonderboom, HTS John Vorster, HTS Tuine, Langenhoven High School, Laudium Secondary School, Maragon Mooikloof, Prestige College, Pretoria Boys High School, Pretoria Central High School, Pretoria High School for Girls, Pretoria North High School, Pretoria Secondary School, Pretoria Technical High School, Pretoria West High School, Pro Arte Alphen Park, Rietondale High School, St. Alban's College, St. Mary's Diocesan School for Girls, Summat College, Tshwane College, Tshwane Muslim School, Tyger Valley College, Willowridge High School and Woodhill College. 

International schools
Schools for foreign students: Advanced College, Brooklyn British International College, Courtney House International College, Dansa International College, École Miriam Makeba (French school), Deutsche Schule Pretoria (German school), Russian Embassy School in Pretoria, AISJ-Pretoria North American International School, Star College Pretoria and Silver Oaks International School. 

Tertiary education
Pretoria is one of South Africa's leading academic cities and is home to the largest residential university in South Africa, the largest distance-education university in South Africa and a research-intensive university. The three universities in the city, in order of the year founded, are as follows: 

University of South Africa
The University of South Africa (commonly referred to as Unisa), founded in 1873 as the University of the Cape of Good Hope, is the largest university on the African continent and attracts a third of all higher education students in South Africa. It spent most of its early history as an examining agency for Oxford and Cambridge universities and as an incubator from which most other universities in South Africa are descended. 
In 1946 it was given a new role as a distance-education university, and in 2012 it had a headcount of over 300,000 students, including African and international students in 130 countries worldwide, making it one of the world's mega-universities. Unisa is a dedicated open distance education institution and offers both vocational and academic programmes. 

University of Pretoria
The University of Pretoria (commonly referred to as UP, Tuks, or Tukkies) is a multi-campus public research university. The university was established in 1908 as the Pretoria campus of the Johannesburg-based Transvaal University College and is the fourth South African institution in continuous operation to be awarded university status. Established in 1920, the University of Pretoria Faculty of Veterinary Science is the second-oldest veterinary school in Africa and the only veterinary school in South Africa. In 1949 the university launched the first MBA programme outside of North America. Since 1997, the university has produced more research outputs every year than any other institution of higher learning in South Africa, as measured by the Department of Education's accreditation benchmark. 

Tshwane University of Technology
The Tshwane University of Technology (commonly referred to as TUT) is a higher education institution offering vocationally oriented diplomas and degrees; it came into being through a merger of Technikon Northern Gauteng, Technikon North-West and Technikon Pretoria. TUT caters for approximately 60,000 students and has become the largest residential higher education institution in South Africa. 

CSIR
The Council for Scientific and Industrial Research (CSIR) is South Africa's central scientific research and development organisation. It was established by an act of parliament in 1945 and is situated on its own campus in the city. It is the largest research and development organisation in Africa and accounts for about 10% of the entire African R&D budget. 
It has a staff of approximately 3,000 technical and scientific researchers, often working in multi-disciplinary teams. In 2002, Dr. Sibusiso Sibisi was appointed as the president and CEO of the CSIR. 

Military
Pretoria has earned a reputation as the centre of South Africa's military and is home to several military facilities of the South African National Defence Force: 

Military headquarters
Transito Air Force Headquarters: this complex is the headquarters of the South African Air Force. 
The Dequar Road Complex: a military complex that houses the South African Army's Headquarters, the South African Infantry Formation HQ, a General Support Base, the Support Formation HQ, the Training Formation HQ, the 102 Field Workshop unit, the 17 Maintenance Unit and the S.A.M.S. Military Health Department. 
The Sebokeng Complex: a military complex located on the corner of Patriot Street and Koraalboom Road that houses the South African Army Armour Formation HQ, the South African Army Artillery Formation HQ, the South African Army Intelligence Corps HQ and the South African Army Air Defence Artillery Formation HQ. 

Military bases
The Dequar Road Base: situated in the suburb of Salvokop, it is divided into two parts: the Green Magazine (Groen Magazyn), which is the headquarters of the Transvaalse Staatsartillerie, a reserve artillery regiment of the South African Army, and Magazine Hill, which is the regimental headquarters of the Pretoria Armoured Regiment, a reserve tank regiment of the South African Army. 

Thaba Tshwane
Thaba Tshwane is a large military area south-west of the Pretoria central business district and north of Air Force Base Swartkop. 
It is the headquarters of several army units: the Joint Support Base Garrison, which is responsible for the town management of Thaba Tshwane; the Tshwane Regiment, a reserve motorised infantry regiment of the South African Army; the 18 Light Regiment, a reserve artillery regiment of the South African Army; and the National Ceremonial Guard and Band. The military base also houses 1 Military Hospital and the Military Police School. Within Thaba Tshwane, a facility known as "TEK Base" houses its own units: the SA Army Engineer Formation, 2 Parachute Battalion, 44 Parachute Engineer Regiment, 1 Military Printing Regiment and 4 Survey and Map Regiment. 

Joint Support Base Wonderboom
The Wonderboom Military Base is located adjacent to Wonderboom Airport and is the headquarters of the South African Army Signals Formation. It also houses the School of Signals, 1 Signal Regiment, 2 Signal Regiment, 3 Electronic Workshop, 4 Signal Regiment and 5 Signal Regiment. 

Military colleges
The South African Air Force College, the South African Military Health Service School for Military Health Training and the South African Army College are situated in the Thaba Tshwane military base and are used to train commissioned and non-commissioned officers to perform effectively in combat and command roles in the various branches of the South African National Defence Force. The South African Defence Intelligence College is also located in the Sterrewag suburb, north of Air Force Base Waterkloof. 

Air force bases
While technically not within the city limits of Pretoria, Air Force Base Swartkop and Air Force Base Waterkloof are often used for defence-related matters within the city. These may include aerial

centre, where it became a regular road, before again becoming a highway west of the city. These roads are now designated the M2 and M4. There is a third, original east–west road: the R104, previously named Church Street. 
Church Street has been renamed in sections: Helen Joseph from Nelson Mandela to Church Square, WF Nkomo from Nelson Mandela to the R511, Stanza Bopape from Nelson Mandela to the east, and Elias Motswaledi from the R511 to the west. The N14 starts in the centre of town from the M4 (former N4). It is a normal road heading south through the centre before becoming the Ben Schoeman highway. At the Brakfontein interchange, the Ben Schoeman highway becomes the N1, but the N14 continues as the intersecting west-south-western highway towards Krugersdorp. The R114 parallels the N14 on its westward journey, running just to the north of the highway. The R21 provides a second north–south highway, further east. It starts from the Fountains Interchange south of the city centre, but remains a regular road until Monument Park, where it becomes a true highway. It crosses the N1 east of the Brakfontein Interchange at the Flying Saucer Interchange and runs north–south towards Ekurhuleni (specifically Kempton Park and Boksburg). Importantly, it links Pretoria with OR Tambo International Airport in Kempton Park. A proposed third north–south highway in the west of the city, the R80, is partially built. At present the highway begins in Soshanguve and terminates just north of the city centre at an intersection with the M1. Plans have been in place for some time to extend it all the way past the M4 and N14 highways to the N1 in Randburg. Pretoria is also served by many regional roads. The R55 starts at an interchange with the R80, and runs north–south west of the city to Sandton. The R50 starts from the N1 just after the Flying Saucer Interchange in the south-east of the city, and continues south-east towards Delmas. The R511 runs north–south from Randburg towards Brits and barely by-passes Pretoria to the west. The R514 starts from the M1, north of the city centre, and terminates at the R511. The R513 crosses Pretoria's northern suburbs from east to west. 
It links Pretoria to Cullinan and Bronkhorstspruit in the east and Hartbeespoort in the west. The R566 originates in Pretoria's northern suburbs and exits the town to the west, just north of the R513. It connects Pretoria to Brits. Finally, the R573 starts from the R513, just east of the town, and heads north-east to Siyabuswa. Pretoria is also served internally by metropolitan routes. 

Airports
For scheduled air services, Pretoria is served by Johannesburg's airports: OR Tambo International, south of central Pretoria, and Lanseria, south-west of the city. Wonderboom Airport, in the suburb of Wonderboom in the north of Pretoria, primarily services light commercial and private aircraft. However, from August 2015, scheduled flights from Wonderboom Airport to Cape Town International Airport were made available by SA Airlink. There are two military air bases to the south of the city, Swartkop and Waterkloof. 

Culture
Media
Since Pretoria forms part of the Tshwane Metropolitan Municipality, most radio, television and print media are the same as in the rest of the metro area. 

Radio
There are many radio stations in the greater Pretoria region; some of note are: 
Jacaranda FM, previously known as Jacaranda 94.2, is a commercial South African radio station broadcasting in English and Afrikaans, with a footprint that covers Gauteng, Limpopo, Mpumalanga and the North West Province. It boasts a listening audience of 2 million people a week and a digital community of more than 1.1 million people a month. The station's format is mainstream adult contemporary, with programming constructed around a playlist of hit music from the 1980s, 1990s and now. 
Tuks FM is the radio station of the University of Pretoria and one of South Africa's community broadcasters. It was one of the first community broadcasters in South Africa to be given an FM licence. It is known for contemporary music and is operated by UP's student base. 
Radio Pretoria is a community-based radio station in Pretoria, South Africa, whose programmes are aimed at Afrikaners. It broadcasts 24 hours a day in stereo on 104.2 FM in the greater Pretoria area. Various other transmitters (with their own frequencies) in South Africa broadcast the station's content further afield, while the station is also available on Sentech's digital satellite platform. 
Impact Radio is a Christian community radio station based in Pretoria, broadcasting on 103 FM in the greater Tshwane area. 

Television
Pretoria is serviced by eTV, SABC, M-Net and SuperSport. 

Paper
The city is serviced by a variety of printed publications, namely: 
Pretoria News is a daily newspaper established in Pretoria in 1898. It publishes a daily edition from Monday to Friday and a weekend edition on Saturday and Sunday. It is an independent English-language newspaper that serves the city and its direct environs. It is available online via the Independent Online website. 
Beeld is an Afrikaans-language daily newspaper that was launched on 16 September 1974. Beeld is distributed in four provinces of South Africa: Gauteng, Mpumalanga, Limpopo and North West. 
Die Beeld (English: The Image) was an Afrikaans-language Sunday newspaper in the late 1960s. 

Pretoria Creole
Pretoria Sotho (called Sepitori by its speakers) is the urban lingua franca of Pretoria and the Tshwane metropolitan area in South Africa. It is a combination of Tswana and Northern Sotho (Pedi), with influences from Tsotsitaal and other black South African languages. It is a creole language that developed in the city during the years of apartheid. 

Museums
Ditsong National Museum of Cultural History (a.k.a. African Window), Freedom Park Hapo Museum, Kruger House (residence of the president of the ZAR, Paul Kruger), Mapungubwe Museum, Melrose House (where the Treaty of Vereeniging, which ended the Anglo-Boer War, was signed in 1902), National Library of South Africa, Pioneer Museum, Pretoria Art Museum, Pretoria Forts, South African Air Force Museum, Transvaal Museum, Van Tilburg Collection, Van Wouw Museum, Voortrekker Monument, Willem Prinsloo Agricultural Museum, Sammy Marks House, SP Engelbrecht Museum (history of the NHK church) and Smuts House Museum. 

Music
A number of popular South African bands and musicians are originally from Pretoria. These include Desmond and the Tutus, Bittereinder, The Black Cat Bones, Seether, popular motswako rapper JR, Joshua na die Reën and DJ Mujava, who was raised in the township of Atteridgeville. The song "Marching to Pretoria" refers to this city. Pretoria was the capital of the South African Republic (a.k.a. Republic of the Transvaal; 1852–1881 and 1884–1902) and the principal battleground of the First and Second Boer Wars, the latter of which brought both the Transvaal and the Orange Free State republic under British rule. "Marching to Pretoria" was one of the songs that British soldiers sang as they marched from the Cape Colony, under British rule since 1814, to the capital of the South African Republic (in Dutch, Zuid-Afrikaansche Republiek). As the song's refrain puts it: "We are marching to Pretoria, Pretoria, Pretoria/We are marching to Pretoria, Pretoria, Hurrah." The opening line of John Lennon's Beatles song "I Am the Walrus", "I am he as you are he as you are me and we are all together", is often believed to be based on the lyric "I'm with you and you're with me and so we are all together" in "Marching to Pretoria". Lennon denied this, insisting his lyrics came from "nothing". 

Performing arts and galleries
Pretoria is home to an extensive portfolio of public art. 
A diverse and evolving city, Pretoria boasts a vibrant art scene and a variety of works that range from sculptures to murals to pieces by internationally and locally renowned artists. The Pretoria Art Museum is home to a vast collection of local artworks. After a bequest of 17th century Dutch artworks by Lady Michaelis in 1932 the art collection of Pretoria City Council expanded quickly to include South African works by Henk Pierneef, Pieter Wenning, Frans Oerder, Anton van Wouw and Irma Stern. And according to the museum: "As South African museums in Cape Town and Johannesburg already had good collections of 17th, 18th and 19th century European art, it was decided to focus on compiling a representative collection of South African art" making it somewhat unusual compared to its contemporaries. Pretoria houses several performing arts venues including: the South African State Theatre which houses the arts of Opera, musicals, plays and comedic performances. A 9 metre tall statue of former president Nelson Mandela was unveiled in front of the Union Buildings on 16 December 2013. Since Nelson Mandela's inauguration as South Africa's first majority elected president the Union Buildings have come to represent the new 'Rainbow Nation'. Public art in Pretoria has flourished since the 2010 FIFA World Cup with many areas receiving new public artworks. Sport One of the most popular sports in Pretoria is rugby union. Loftus Versfeld is home to the Blue Bulls, who compete in the domestic Currie Cup, and also to the Bulls in the international United Rugby Championship competition. The Bulls rugby team, which is operated by the Blue Bulls, won the Super Rugby competition in 2007, 2009 and 2010. Loftus Versfeld also hosts the football side Mamelodi Sundowns. Pretoria also hosted matches during the 1995 Rugby World Cup. Loftus Versfeld was used for some matches in the 2010 FIFA World Cup. Association football is one of the most popular sports in the city. 
There are two football teams in the city playing in South Africa's top-flight league, the Premier Soccer League: Mamelodi Sundowns and SuperSport United. SuperSport United were the 2008–09 PSL champions. Following the 2011–12 season, University of Pretoria F.C. gained promotion to the South African Premier Division, the top domestic league, becoming the third Pretoria-based team in the league. After a poor league finish in the 2015–16 season, University of Pretoria F.C. were relegated to the National First Division, the second-highest football league in South Africa, in the 2016 Premier Soccer League promotion/relegation play-offs. Cricket is also popular in the city. As Pretoria has no international cricket stadium, it does not host top-class cricket tournaments, although nearby Centurion has SuperSport Park, an international cricket stadium that has hosted many important tournaments, including the 2003 Cricket World Cup, the 2007 ICC World Twenty20, the 2009 IPL and the 2009 ICC Champions Trophy. The franchise team closest to Pretoria is the Titans, although Northerns occasionally play in the city in South Africa's provincial competitions. Many Pretoria-born cricketers have gone on to play for South Africa, including former international captains AB de Villiers and Faf du Plessis. The Pretoria Transnet Blind Cricket Club, the biggest blind cricket club in South Africa, is based in Pretoria; its field, a home of disability cricket, is at the Transnet Engineering campus on Lynette Street. PTBCC has played many successful blind cricket matches against able-bodied teams such as the South African Indoor Cricket Team and the TuksCricket Junior Academy. Northerns Blind Cricket is the provincial body that governs PTBCC and Filefelfia Secondary School. The Northerns blind cricket team won the 40-over national blind cricket tournament held in Cape Town in April 2014.
The city's Sun Arena at Times Square hosted the NBA Africa Game 2018. Places of worship Places of worship in the city are predominantly Christian churches: the Zion Christian Church, the Apostolic Faith Mission of South Africa, the Assemblies of God, the Baptist Union of Southern Africa (Baptist World Alliance), the Methodist Church of Southern Africa (World Methodist Council), the Anglican Church of Southern Africa (Anglican Communion), the Presbyterian Church of Africa (World Communion of Reformed Churches) and the Roman Catholic Archdiocese of Pretoria (Catholic Church). There are also Muslim mosques and Hindu temples. Jewish community Pretoria has a small Jewish community of around 3,000. Jews have lived in Pretoria since its foundation in the 19th century and played an important role in its industrial and economic growth. A Mr. De Vries, the first Jewish inhabitant of Pretoria, was a prominent citizen and prosecutor, a member of the Volksraad and a pioneer of the Afrikaans language. Another famed Jewish Pretorian was Sammy Marks. Other early Jewish settlers, many of them immigrants from Lithuania, were not as educated as De Vries and often did not speak Dutch, Afrikaans or English; many spoke only Yiddish and made a living as shopkeepers in the local retail trade. Most Jewish residents stayed neutral in the Second Boer War, though some joined the South African Republic's army. The first congregation was founded between 1890 and 1895, and in 1898 the first synagogue opened on Paul Kruger Street. A second synagogue, known as the Great Synagogue, opened in 1922. Neither synagogue is still in operation, but a Reform synagogue, Temple Menorah, opened in the early 1950s. The golden age of Pretoria's Jewish community was in the early 20th century, when many Jewish sports clubs, charities, and
disordered offenders and other patients whose condition requires treatment in secure units. Other psychiatrists may specialize in psychopharmacology, psychotherapy, psychiatric genetics, neuroimaging, dementia-related disorders such as Alzheimer's disease, attention deficit hyperactivity disorder (ADHD), sleep medicine, pain medicine, palliative medicine, eating disorders, sexual disorders, women's health, global mental health, early psychosis intervention, mood disorders, and anxiety disorders such as obsessive–compulsive disorder (OCD) and posttraumatic stress disorder (PTSD). Psychiatrists work in a wide variety of settings: some are full-time medical researchers, many see patients in private medical practice, and consultation–liaison psychiatrists see patients in hospital settings where psychiatric and other medical conditions interact. Professional requirements While the requirements to become a psychiatrist differ from country to country, all require a medical degree. India In India, the MBBS degree is the basic qualification needed to pursue psychiatry. After completing the MBBS (including an internship), one can sit various postgraduate medical entrance exams and take an MD in psychiatry, a three-year course. A Diploma in Psychiatry or DNB Psychiatry can also be taken to become a psychiatrist. Netherlands In the Netherlands, one must complete medical school, after which one is certified as a medical doctor. After a strict selection programme, one can specialize in psychiatry: a 4.5-year specialization. During this specialization, the resident must complete a 6-month residency in social psychiatry and a 12-month residency in a field of their own choice (which can be child psychiatry, forensic psychiatry, somatic medicine, or medical research). To become an adolescent psychiatrist, one must complete an extra specialization period of two more years.
In short, this means it takes at least 10.5 years of study to become a psychiatrist, and up to 12.5 years to become a child and adolescent psychiatrist. Pakistan In Pakistan, one must complete basic medical education, the MBBS, then register with the Pakistan Medical and Dental Council as a general practitioner after a one-year mandatory internship (house job). After registration with the PMDC, one sits the FCPS-I exam, followed by four years of training in psychiatry under the College of Physicians and Surgeons Pakistan. Training includes three-month rotations in general medicine, neurology and clinical psychology during the first two years. There is a mid-point IMM (Intermediate Module) exam and a final exam after four years. UK and the Republic of Ireland In the United Kingdom, psychiatrists must hold a medical degree. These degrees are often abbreviated MB BChir, MB BCh, MB ChB, BM BS, or MB BS. Following this, the individual will work as a Foundation House Officer for two additional years in the UK, or one year as an intern in the Republic of Ireland, to achieve registration as a basic medical practitioner. Training in psychiatry can then begin; it is taken in two parts: three years of Basic Specialist Training culminating in the MRCPsych exam, followed by three years of Higher Specialist Training, referred to as "ST4-6" in the UK and "Senior Registrar Training" in the Republic of Ireland. Candidates with the MRCPsych degree and complete basic training must reinterview for higher specialist training. At this stage, the development of special interests such as forensic or child/adolescent psychiatry takes place. At the end of three years of higher specialist training, candidates are awarded a CCT (UK) or CCST (Ireland), both meaning Certificate of Completion of (Specialist) Training. At this stage, the psychiatrist can register as a specialist, and the CC(S)T qualification is recognized in all EU/EEA states. As such, training in the UK and Ireland is considerably longer than in the US or Canada and frequently takes around 8–9 years after graduation from medical school. Those with a CC(S)T may apply for consultant posts. Those with training from outside the EU/EEA should consult local medical boards to review their qualifications and eligibility for equivalence recognition (for example, those with a US residency and ABPN qualification). US and Canada In the U.S. and Canada one must first attain an M.D. or D.O. degree, followed by four years of psychiatric residency (five years in Canada). This extended period involves comprehensive training in psychiatric diagnosis, psychopharmacology, medical care issues, and psychotherapies. All accredited psychiatry residencies in the United States require proficiency in cognitive-behavioral, brief, psychodynamic, and supportive psychotherapies.
Psychiatry residents are required to complete at least four post-graduate months of internal medicine or pediatrics, plus a minimum of two months of neurology during their |
often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers, which makes this formulation close to second-order arithmetic. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema. Historical second-order formulation When Peano formulated his axioms, the language of mathematical logic was in its infancy. The system of logical notation he created to present the axioms did not prove to be popular, although it was the genesis of the modern notation for set membership (∈, which comes from Peano's ε) and implication (⊃, which comes from Peano's reversed 'C'). Peano maintained a clear distinction between mathematical and logical symbols, which was not yet common in mathematics; such a separation had first been introduced in the Begriffsschrift by Gottlob Frege, published in 1879. Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole and Schröder. The Peano axioms define the arithmetical properties of natural numbers, usually represented as a set N or ℕ. The non-logical symbols for the axioms consist of a constant symbol 0 and a unary function symbol S. The first axiom states that the constant 0 is a natural number: 0 ∈ N. Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number, while the axioms in Formulario mathematico include zero. The next four axioms describe the equality relation. Since they are logically valid in first-order logic with equality, they are not considered to be part of "the Peano axioms" in modern treatments.
The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a single-valued "successor" function S. Axioms 1, 6, 7 and 8 define a unary representation of the intuitive notion of natural numbers: the number 1 can be defined as S(0), 2 as S(S(0)), etc. However, considering the notion of natural numbers as being defined by these axioms, axioms 1, 6, 7 and 8 do not imply that the successor function generates all the natural numbers different from 0. The intuitive notion that each natural number can be obtained by applying successor sufficiently often to zero requires an additional axiom, which is sometimes called the axiom of induction. The induction axiom is sometimes stated in the following form: if K is a set such that 0 is in K, and for every natural number n, n being in K implies that S(n) is in K, then K contains every natural number. In Peano's original formulation, the induction axiom is a second-order axiom. It is now common to replace this second-order principle with a weaker first-order induction scheme. There are important differences between the second-order and first-order formulations, as discussed in the section below. Defining arithmetic operations and relations If we use the second-order induction axiom, it is possible to define addition, multiplication, and total (linear) ordering on N directly using the axioms. However, addition and multiplication are often added as axioms. The respective functions and relations are constructed in set theory or second-order logic, and can be shown to be unique using the Peano axioms. Addition Addition is a function that maps two natural numbers (two elements of N) to another one. It is defined recursively as: a + 0 = a and a + S(b) = S(a + b). For example: a + 1 = a + S(0) = S(a + 0) = S(a). The structure (N, +) is a commutative monoid with identity element 0. (N, +) is also a cancellative magma, and thus embeddable in a group. The smallest group embedding N is the integers. Multiplication Similarly, multiplication is a function mapping two natural numbers to another one.
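The recursive definition of addition can be illustrated with a small sketch (not from the source: unary numerals are represented as nesting depth, and the function names are illustrative):

```python
# Sketch: natural numbers as iterated successor applications,
# with addition given exactly by the two recursive clauses
# a + 0 = a and a + S(b) = S(a + b).

ZERO = None

def S(n):
    """Successor: wrap the numeral one level deeper."""
    return (n,)

def add(a, b):
    if b is ZERO:           # a + 0 = a
        return a
    return S(add(a, b[0]))  # a + S(b') = S(a + b')

def to_int(n):
    """Decode a unary numeral into a Python int for display."""
    k = 0
    while n is not ZERO:
        n = n[0]
        k += 1
    return k

two = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # prints 5
```

Note that `add` recurses only on its second argument, mirroring how the axioms define addition by recursion on the successor structure.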
Given addition, it is defined recursively as: a · 0 = 0 and a · S(b) = a · b + a. It is easy to see that S(0) (or "1", in the familiar language of decimal representation) is the multiplicative right identity: a · S(0) = a · 0 + a = 0 + a = a. To show that S(0) is also the multiplicative left identity requires the induction axiom due to the way multiplication is defined: S(0) is the left identity of 0: S(0) · 0 = 0. If S(0) is the left identity of a (that is, S(0) · a = a), then S(0) is also the left identity of S(a): S(0) · S(a) = S(0) · a + S(0) = a + S(0) = S(a + 0) = S(a). Therefore, by the induction axiom, S(0) is the multiplicative left identity of all natural numbers. Moreover, it can be shown that multiplication is commutative and distributes over addition: a · (b + c) = (a · b) + (a · c). Thus, (N, +, 0, ·, S(0)) is a commutative semiring. Inequalities The usual total order relation ≤ on natural numbers can be defined as follows, assuming 0 is a natural number: for all a, b ∈ N, a ≤ b if and only if there exists some c ∈ N such that a + c = b. This relation is stable under addition and multiplication: for a, b, c ∈ N, if a ≤ b, then a + c ≤ b + c and a · c ≤ b · c. Thus, the structure (N, +, ·, 1, 0, ≤) is an ordered semiring; because there is no natural number between 0 and 1, it is a discrete ordered semiring. The axiom of induction is sometimes stated in the following form that uses a stronger hypothesis, making use of the order relation "≤": for any predicate φ, if φ(0) is true, and if for every n ∈ N the truth of φ(k) for every k ≤ n implies that φ(S(n)) is true, then φ(n) is true for every n ∈ N. This form of the induction axiom, called strong induction, is a consequence of the standard formulation, but is often better suited for reasoning about the ≤ order. For example, to show that the naturals are well-ordered—every nonempty subset of N has a least element—one can reason as follows. Let a nonempty X ⊆ N be given and assume X has no least element. Because 0 is the least element of N, it must be that 0 ∉ X. For any n ∈ N, suppose that for every k ≤ n, k ∉ X. Then S(n) ∉ X, for otherwise it would be the least element of X. Thus, by the strong induction principle, for every n ∈ N, n ∉ X. Thus, X = ∅, which contradicts X being a nonempty subset of N. Thus X has a least element.
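The recursive clauses for multiplication and the definition of ≤ via addition can be spot-checked numerically; the following sketch (not from the source; plain Python integers stand in for numerals) verifies commutativity, distributivity, and the order relation on small cases:

```python
# Sketch: multiplication via a*0 = 0 and a*S(b) = a*b + a, and
# the order "a <= b iff there exists c with a + c = b".

def mul(a, b):
    if b == 0:
        return 0
    return mul(a, b - 1) + a      # a * S(b') = a*b' + a

def leq(a, b):
    # a <= b iff some c in 0..b satisfies a + c = b (finite search)
    return any(a + c == b for c in range(b + 1))

# commutativity and distributivity over a small range
assert all(mul(a, b) == mul(b, a) for a in range(8) for b in range(8))
assert all(mul(a, b + c) == mul(a, b) + mul(a, c)
           for a in range(6) for b in range(6) for c in range(6))

# a sample nonempty subset and its least element
X = {5, 3, 9}
assert all(leq(3, x) for x in X)
```

These checks only exercise finitely many instances, of course; the axioms' point is that induction proves the identities for all naturals at once.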
Models A model of the Peano axioms is a triple (N, 0, S), where N is a (necessarily infinite) set, 0 ∈ N, and S : N → N satisfies the axioms above. Dedekind proved in his 1888 book, The Nature and Meaning of Numbers (Was sind und was sollen die Zahlen?, i.e., "What are the numbers and what are they good for?"), that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models (NA, 0A, SA) and (NB, 0B, SB) of the Peano axioms, there is a unique homomorphism f : NA → NB satisfying f(0A) = 0B and f(SA(n)) = SB(f(n)), and it is a bijection. This means that the second-order Peano axioms are categorical. (This is not the case with any first-order reformulation of the Peano axioms, below.) Set-theoretic models The Peano axioms can be derived from set-theoretic constructions of the natural numbers and axioms of set theory such as ZF. The standard construction of the naturals, due to John von Neumann, starts from a definition of 0 as the empty set, ∅, and an operator s on sets defined as s(a) = a ∪ {a}. The set of natural numbers N is defined as the intersection of all sets closed under s that contain the empty set. Each natural number is equal (as a set) to the set of natural numbers less than it. In a first-order formulation, however, it is not possible to define addition and multiplication from the successor operation in the more restrictive setting of first-order logic. Therefore, the addition and multiplication operations are directly included in the signature of Peano arithmetic, and axioms are included that relate the three operations to each other. The following list of axioms (along with the usual axioms of equality), which contains six of the seven axioms of Robinson arithmetic, is sufficient for this purpose: S(x) ≠ 0; S(x) = S(y) ⇒ x = y; x + 0 = x; x + S(y) = S(x + y); x · 0 = 0; x · S(y) = (x · y) + x. In addition to this list of numerical axioms, Peano arithmetic contains the induction schema, which consists of a recursively enumerable set of axioms. For each formula φ(x, y1, ..., yk) in the language of Peano arithmetic, the first-order induction axiom for φ is the sentence ∀ȳ ((φ(0, ȳ) ∧ ∀x (φ(x, ȳ) ⇒ φ(S(x), ȳ))) ⇒ ∀x φ(x, ȳ)), where ȳ is an abbreviation for y1, ..., yk. The first-order induction schema includes every instance of the first-order induction axiom, that is, it includes the induction axiom for every formula φ.
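The von Neumann construction described above can be sketched in a few lines (an illustration, not part of the source; frozensets are used so numerals can be members of other sets):

```python
# Sketch of the von Neumann construction: 0 is the empty set and
# s(a) = a ∪ {a}; frozensets keep the numerals hashable.

def s(a):
    return frozenset(a) | {frozenset(a)}

zero = frozenset()
one = s(zero)    # {∅}
two = s(one)     # {∅, {∅}}
three = s(two)

# each numeral equals the set of all smaller numerals...
assert two == {zero, one}
assert three == {zero, one, two}
# ...and its cardinality is the number it represents
assert len(three) == 3
```

The assertions make concrete the remark that each natural number is equal, as a set, to the set of natural numbers less than it.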
Equivalent axiomatizations There are many different, but equivalent, axiomatizations of Peano arithmetic. While some axiomatizations, such as the one just described, use a signature that only has symbols for 0 and the successor, addition, and multiplication operations, other axiomatizations use the language of ordered semirings, including an additional order relation symbol. One such axiomatization begins with the following axioms that describe a discrete ordered semiring. (x + y) + z = x + (y + z), i.e., addition is associative. x + y = y + x, i.e., addition is commutative. (x · y) · z = x · (y · z), i.e., multiplication is associative. x · y = y · x, i.e., multiplication is commutative. x · (y + z) = (x · y) + (x · z), i.e., multiplication distributes over addition. x + 0 = x ∧ x · 0 = 0, i.e., zero is an identity for addition, and an absorbing element for multiplication (actually superfluous). x · 1 = x, i.e., one is an identity for multiplication. x < y ∧ y < z ⇒ x < z, i.e., the '<' operator is transitive. ¬(x < x), i.e., the '<' operator is irreflexive. x < y ∨ x = y ∨ y < x, i.e., the ordering satisfies trichotomy. x < y ⇒ x + z < y + z, i.e., the ordering is preserved under addition of the same element. 0 < z ∧ x < y ⇒ x · z < y · z, i.e., the ordering is preserved under multiplication by the same positive element. x < y ⇒ ∃z (x + z = y), i.e., given any two distinct elements, the larger is the smaller plus another element. 0 < 1 ∧ (0 < x ⇒ x = 1 ∨ 1 < x), i.e., zero and one are distinct and there is no element between them. In other words, 0 is covered by 1, which suggests that natural numbers are discrete. x = 0 ∨ 0 < x, i.e., zero is the minimum element. The theory defined by these axioms is known as PA−; the theory PA is obtained by adding the first-order induction schema. An important property of PA− is that any structure M satisfying this theory has an initial segment (ordered by <) isomorphic to N. Elements in that segment are called standard elements, while other elements are called nonstandard elements. Undecidability and incompleteness According to Gödel's incompleteness theorems, the theory of PA (if consistent) is incomplete.
Consequently, there are sentences of first-order logic (FOL) that are true in the standard model of PA but are not a consequence of the FOL axiomatization. Essential incompleteness already arises for theories with weaker axioms, such as Robinson arithmetic. Closely related to the above incompleteness result (via Gödel's completeness theorem for FOL), it follows that there is no algorithm for deciding whether a given FOL sentence is a consequence of a first-order axiomatization of Peano arithmetic or not. Hence, PA is an example of an undecidable theory. Undecidability arises already for the existential sentences of PA, due to the negative answer to Hilbert's tenth problem, whose proof implies that all computably enumerable sets are Diophantine sets, and thus definable by existentially quantified formulas (with free variables) of PA. Formulas of PA with higher quantifier rank (more quantifier alternations) than existential formulas are more expressive, and define sets in the higher levels of the arithmetical hierarchy. Nonstandard models Although the usual natural numbers satisfy the axioms of PA, there are other models as well (called "non-standard models"); the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way in which the first-order system PA is weaker than the second-order Peano axioms. When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind's categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory.
In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory. It is natural to ask whether a countable nonstandard model can be explicitly constructed. The answer is affirmative: in 1933 Skolem provided an explicit construction of such a nonstandard model. On the other hand, Tennenbaum's theorem, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable. This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA. There is only one possible order type of a countable nonstandard model. Letting ω be the order type of the natural numbers, ζ be the order type of the integers, and η be the order type of the rationals, the order type of any countable nonstandard model of PA is ω + ζ·η, which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers. Overspill A cut in a nonstandard model M is a nonempty subset C of M such that C is downward closed (x < y and y ∈ C ⇒ x ∈ C) and C is closed under successor. A proper cut is a cut that is a proper subset of M. Each nonstandard model has many proper cuts, including one that corresponds to the standard natural numbers. However, the induction scheme in Peano arithmetic prevents any
sky. The constellations in Macedonian folklore represented agricultural items and animals, reflecting their village way of life. To them, Procyon and Sirius were Volci "the wolves", circling hungrily around Orion which depicted a plough with oxen. Rarer names are the Latin translation of Procyon, Antecanis, and the Arabic-derived names Al Shira and Elgomaisa. Medieval astrolabes of England and Western Europe used a variant of this, Algomeiza/Algomeyza. Al Shira derives from , "the Syrian sign" (the other sign being Sirius; "Syria" is supposedly a reference to its northern location relative to Sirius); Elgomaisa derives from "the bleary-eyed (woman)", in contrast to "the teary-eyed (woman)", which is Sirius. (See Gomeisa.) At the same time this name is synonymous with the Turkish name "Rumeysa", and it is a commonly used name in Turkey. In Chinese, (), meaning South River, refers to an asterism consisting of Procyon, ε Canis Minoris and β Canis Minoris. Consequently, Procyon itself is known as (, the Third Star of South River). It is part of the Vermilion Bird. The Hawaiians see Procyon as part of an asterism Ke ka o Makali'i ("the canoe bailer of Makali'i") that helps them navigate at sea. In Hawaiian language, this star is called Puana ("blossom"), which is a new Hawaiian name based on the Māori name Puangahori. It forms this asterism (Ke ka o Makali'i) with the Pleiades (Makali'i), Auriga, Orion, Capella, Sirius, Castor and Pollux. In Tahitian lore, Procyon was one of the pillars propping up the sky, known as Anâ-tahu'a-vahine-o-toa-te-manava ("star-the-priestess-of-brave-heart"), the pillar for elocution. Māori astronomers know the star as Puangahori ("False Puanga") which distinguishes it from its pair Puanga or Puanga-rua ("Blossom-cluster") which refers to a star of great importance to Māori culture and calendar, known by its western name Rigel. Procyon appears on the flag of Brazil, symbolizing the state of Amazonas. 
The Kalapalo people of Mato Grosso state in Brazil call Procyon and Canopus Kofongo ("Duck"), with Castor and Pollux representing his hands. The asterism's appearance signified the coming of the rainy season and an increase in the food staple manioc, used at feasts to feed guests. Known as Sikuliarsiujuittuq to the Inuit, Procyon was quite significant in their astronomy and mythology. Its eponymous name means "the one who never goes onto the newly formed sea ice", and refers to a man who stole food from his village's hunters because he was too obese to hunt on ice. He was killed by the other hunters, who convinced him to go onto the sea ice. Procyon received this designation because it typically appears red (though sometimes slightly greenish) as it rises during the Arctic winter; this red color was associated with Sikuliarsiujuittuq's bloody end. View from this system Were the Sun to be observed from this star system, it would appear as a magnitude 2.55 star in the constellation Aquila, at the exact opposite coordinates, right ascension , declination . It would be as bright as β Scorpii is in our sky. Canis Minor would obviously be missing its brightest star. Procyon's closest neighboring star is Luyten's Star, about away, and Procyon would be the brightest star in that star's night sky. Procyon has an apparent visual magnitude of 0.34. It has the Bayer designation α Canis Minoris, which is Latinized to Alpha Canis Minoris, and abbreviated α CMi or Alpha CMi, respectively. As determined by the European Space Agency Hipparcos astrometry satellite, this system lies at a distance of just , and is therefore one of Earth's nearest stellar neighbors. A binary star system, Procyon consists of a white-hued main-sequence star of spectral type F5 IV–V, designated component A, in orbit with a faint white dwarf companion of spectral type DQZ, named Procyon B. The pair orbit each other with a period of 40.84 years and an eccentricity of 0.4.
Observation Procyon is usually the eighth-brightest star in the night sky, culminating at midnight on 14 January. It forms one of the three vertices of the Winter Triangle asterism, in combination with Sirius and Betelgeuse. The prime period for evening viewing of Procyon is in late winter in the Northern Hemisphere. It has a color index of 0.42, and its hue has been described as having a faint yellow tinge to it. Stellar system Procyon is a binary star system with a bright primary component, Procyon A, of apparent magnitude 0.34, and a faint companion, Procyon B, of magnitude 10.7. The pair orbit each other with a period of 40.84 years along an elliptical orbit with an eccentricity of 0.4, more eccentric than Mercury's. The plane of their orbit is inclined at an angle of 31.1° to the line of sight from the Earth. The average separation of the two components is , a little less than the distance between Uranus and the Sun, though the eccentric orbit carries them as close as 8.9 AU and as far as 21.0 AU. Procyon A The primary has a stellar classification of F5IV–V, indicating that it is a late-stage F-type main-sequence star. Procyon A is bright for its spectral class, suggesting that it is evolving into a subgiant that has nearly fused the hydrogen in its core into helium, after which it will expand as the nuclear reactions move outside the core. As it continues to expand, the star will eventually swell to about 80 to 150 times its current diameter and become red or orange in color. This will probably happen within 10 to 100 million years. The effective temperature of the stellar atmosphere is an estimated , giving Procyon A a white hue. It is 1.5 times the solar mass, twice the solar radius, and has seven times the Sun's luminosity. Both the core and the envelope of this star are convective; the two regions are separated by a wide radiation zone. Oscillations In late June 2004, Canada's orbital MOST satellite telescope carried
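The closest and widest separations quoted above are consistent with a simple ellipse calculation. A quick sketch follows; note that the mean-separation figure is elided in the text, so the value of about 15 AU used below is an assumption, chosen only to match "a little less than the distance between Uranus and the Sun":

```python
# For an elliptical orbit with semi-major axis a and eccentricity e:
#   periastron = a * (1 - e),  apastron = a * (1 + e)
# ASSUMPTION: a ≈ 15 AU (the exact figure is elided in the text);
# e = 0.4 is taken from the text.

a_au = 15.0
e = 0.4

periastron = a_au * (1 - e)   # closest approach, in AU
apastron = a_au * (1 + e)     # widest separation, in AU
print(periastron, apastron)   # ≈ 9 and 21 AU, near the quoted 8.9 and 21.0
```

The close agreement with the quoted 8.9 AU and 21.0 AU figures suggests the elided mean separation is indeed close to 15 AU.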