predication if some facts expressed by ordinary sentences hold. In his work On Interpretation, he maintained that a "universal" is apt to be predicated of many, whereas a singular is not. For instance, man is a universal while Callias is a singular. The philosopher distinguished highest genera, like animal, from species, like man, but he maintained that both are predicated of individual men. This was considered part of an approach to the principle of things, which adheres to the criterion that what is most universal is also most real. Consider, for example, a particular oak tree. It is a member of a species and has much in common with other oak trees, past, present and future. Its universal, its oakness, is a part of it. A biologist can study oak trees and learn about oakness and, more generally, the intelligible order within the sensible world. Accordingly, Aristotle was more confident than Plato about coming to know the sensible world; he was a prototypical empiricist and a founder of induction. Aristotle was a new, moderate sort of realist about universals.

Medieval philosophy

Boethius

The problem was introduced to the medieval world by Boethius, through his translation of Porphyry's Isagoge. It begins: "I shall omit to speak about genera and species, as to whether they subsist (in the nature of things) or in mere conceptions only; whether also if subsistent, they are bodies or incorporeal, and whether they are separate from, or in, sensibles, and subsist about these, for such a treatise is most profound, and requires another more extensive investigation".

Boethius, in his commentaries on the aforementioned translation, says that a universal, if it were to exist, has to apply to several particulars entirely. He also specifies that universals apply simultaneously, not in temporal succession. He reasons that they cannot be mind-independent, i.e. they do not have a real existence, because a quality cannot be both one thing and common to many particulars in such a way that it forms part of a particular's substance, as it would then be partaking of both universality and particularity. However, he also says that universals cannot be purely of the mind, since a mental construct of a quality is an abstraction and understanding of something outside the mind. He concludes that either this representation is a true understanding of the quality, in which case we revert to the earlier problem faced by those who believe universals are real, or, if the mental abstraction is not a true understanding, then 'what is understood otherwise than the thing is false'. His solution to this problem was to state that the mind is able to separate in thought what is not necessarily separable in reality. He cites the human mind's ability to abstract from concrete particulars as an instance of this. This, according to Boethius, avoids the problem of Platonic universals being out there in the real world, but also the problem of them being purely constructs of the mind, in that universals are simply the mind thinking of particulars in an abstract, universal way. His account focuses on the problems that language creates. Boethius maintained that the structure of language corresponds to the structure of things and that language creates what he called the philosophical babble of confused and contradictory accounts of the nature of things.
To illustrate his view: although the mind cannot think of 2 or 4 as an odd number, as this would be a false representation, it can think of an even number that is neither 2 nor 4.

Medieval realism

Boethius mostly stayed close to Aristotle in his thinking about universals. Realism's biggest proponents in the Middle Ages, however, came to be Thomas Aquinas and Duns Scotus. Aquinas argued that the essence of a thing and its existence were clearly distinct; in this regard he is also Aristotelian. Duns Scotus argued that in a thing there is no real distinction between the essence and the existence; instead there is only a formal distinction. Scotus believed that universals exist only inside the things that they exemplify, and that they "contract" with the haecceity of the thing to create the individual. As a result of his realist position, he argued strongly against both nominalism and conceptualism, arguing instead for Scotist realism, a medieval response to the conceptualism of Abelard. That is to say, Scotus believed that such properties as 'redness' and 'roundness' exist in reality and are mind-independent entities. Furthermore, Duns Scotus wrote about this problem in his own commentary (Quaestiones) on Porphyry's Isagoge, as Boethius had done. Scotus was interested in how the mind forms universals, and he believed this to be 'caused by the intellect'. The intellect acts on the basis that the nature of, say, 'humanity' is found in many individual humans and that the quality is attributable to each of them.

Medieval nominalism

The opposing view to realism is called nominalism, which at its strongest maintains that universals are verbal constructs and that they do not inhere in objects or pre-exist them. Therefore, universals in this view are something peculiar to human cognition and language. The French philosopher and theologian Roscellinus (1050–1125) was an early, prominent proponent of this view. His particular view was that universals are little more than vocal utterances (voces). William of Ockham (1285–1347) wrote extensively on this topic. He argued strongly that universals are a product of abstract human thought. According to Ockham, universals are just words or concepts (at best) that only exist in the mind and have no real place in the external world. His opposition to universals was not based on his eponymous Razor; rather, he found that regarding them as real was contradictory in some sense. An early work has Ockham stating that 'no thing outside the soul is universal, either through itself or through anything real or rational added on, no matter how it is considered or understood'. Nevertheless, his position did shift away from outright opposition towards accommodating them in later works such as the Summa Logicae (albeit in a modified way that would not classify him as a complete realist).

Modern and contemporary philosophy

Mill

The 19th-century British philosopher John Stuart Mill discussed the problem of universals in the course of a book that eviscerated the philosophy of Sir William Hamilton. Mill wrote, "The formation of a concept does not consist in separating the attributes which are said to compose it from all other attributes of the same object and enabling us to conceive those attributes, disjoined from any others. We neither conceive them, nor think them, nor cognize them in any way, as a thing apart, but solely as forming, in combination with numerous other attributes, the idea of an individual object".
However, he then proceeds to state that Berkeley's position is factually wrong. In other words, we may be "temporarily unconscious" of whether an image is white, black or yellow and concentrate our attention on the fact that it is a man, and on just those attributes necessary to identify it as a man (but not as any particular one). It may then have the significance of a universal of manhood.

Peirce

The 19th-century American logician Charles Sanders Peirce, known as the father of pragmatism, developed his own views on the problem of universals in the course of a review of an edition of the writings of George Berkeley. Peirce begins with the observation that "Berkeley's metaphysical theories have at first sight an air of paradox and levity very unbecoming to a bishop". He includes among these paradoxical doctrines Berkeley's denial of "the possibility of forming the simplest general conception". He wrote that if there is some mental fact that works in practice the way that a universal would, that fact is a universal. "If I have learned a formula in gibberish which in any way jogs my memory so as to enable me in each single case to act as though I had a general idea, what possible utility is there in distinguishing between such a gibberish... and an idea?" Peirce also held, as a matter of ontology, that what he called "thirdness", the more general facts about the world, are extra-mental realities.

James

William James learned pragmatism, this way of understanding an idea by its practical effects, from his friend Peirce, but he gave it new significance – which was not to Peirce's taste: he came to complain that James had "kidnapped" the term and eventually to call himself a "pragmaticist" instead. Although James certainly agreed with Peirce and against Berkeley that general ideas exist as a psychological fact, he was a nominalist in his ontology. There are at least three ways in which a realist might try to answer James' challenge of explaining why universal conceptions are more lofty than those of particulars: the moral–political answer, the mathematical–scientific answer, and the anti-paradoxical answer. Each has contemporary or near-contemporary advocates.

Weaver

The moral or political response is given by the conservative philosopher Richard M. Weaver in Ideas Have Consequences (1948), where he describes how the acceptance of "the fateful doctrine of nominalism" was "the crucial event in the history of Western culture; from this flowed those acts which issue now in modern decadence".

Quine

The noted American philosopher W. V. O. Quine addressed the problem of universals throughout his career. In his 1947 paper 'On Universals', he states that the problem of universals is chiefly understood as being concerned with entities and not the linguistic aspect of naming a universal. He says that Platonists believe that our ability to form general conceptions of things is incomprehensible unless universals exist outside of the mind, whereas nominalists believe that such ideas are 'empty verbalism'. Quine himself does not propose to resolve this particular debate. What he does say, however, is that certain types of 'discourse' presuppose universals: nominalists therefore must give these up. Quine's approach is therefore more an epistemological one, i.e. what can be known, rather than a metaphysical one, i.e. what is real.
Cocchiarella

Nino Cocchiarella put forward the idea that realism is the best response to certain logical paradoxes to which nominalism leads ("Nominalism and Conceptualism as Predicative Second Order Theories of Predication", Notre Dame Journal of Formal Logic, vol. 21 (1980)). It is noted that
in a sense Cocchiarella has adopted Platonism for anti-Platonic reasons: Plato, as seen in the dialogue Parmenides, was willing to accept a certain amount of paradox with his forms, whereas Cocchiarella adopts the forms precisely to avoid paradox.

Armstrong

The Australian philosopher David Malet Armstrong has been one of the leading realists of the twentieth century and has used a concept of universals to build a naturalistic and scientifically realist ontology. In both Universals and Scientific Realism (1978) and Universals: An Opinionated Introduction (1989), Armstrong describes the relative merits of a number of nominalist theories which appeal variously to "natural classes" (a view he ascribes to Anthony Quinton), concepts, resemblance relations or predicates, and he also discusses non-realist "trope" accounts (which he describes in the Universals and Scientific Realism volumes as "particularism"). He gives a number of reasons to reject all of these, but also dismisses a number of realist accounts.

Penrose

Roger Penrose contends that the foundations of mathematics cannot be understood without the Platonic view that "mathematical truth is absolute, external and eternal, and not based on man-made criteria ... mathematical objects have a timeless existence of their own..."

Positions

There are many philosophical positions regarding universals. Platonic realism (also called extreme realism or exaggerated realism) is the view that universals or forms, in this sense, are the causal explanation behind what things exactly are: the view that universals are real entities existing independently of particulars. Aristotelian realism (also called strong realism or moderate realism) is the rejection of extreme realism. This position holds that a universal is the quality within a thing and within every other individual thing of its kind: the view that universals are real entities, but that their existence is dependent on the particulars that exemplify them. Anti-realism is the objection to both positions. Anti-realism is divided into two subcategories: (1) nominalism and (2) conceptualism.
Taking "beauty" as an example, each of these positions will state the following: Beauty is a property that exists in an ideal form independently of
for optimizing digital circuit
Telestream pipeline, a video capture and playout hardware device

Physical infrastructure
Pipeline transport, a conduit made from pipes connected end-to-end for long-distance fluid or gas transport
Plastic pipework, for fluid handling
Milking pipeline, used on a dairy farm to transport fluid milk

Business
Art pipeline, the process of creating and implementing art for a particular project, most commonly associated with the creative process for developing video games
Sales pipeline, a visualisation of the sales process of a company

Places
Banzai Pipeline, a surfing spot on the North Shore of Oahu
Mister Pipeline, an Oahu surfer title
Tolt Pipeline Trail, an equestrian and biking trail in Redmond, Washington, USA
Keystone Pipeline, a partially operational and proposed pipeline from Canada to the Gulf of Mexico
Trans-Alaska Pipeline System, a pipeline transporting crude oil across Alaska from the Prudhoe Bay oil fields

In arts and entertainment
Games
Pipeline (game), a 1988 computer game for the BBC Micro and Acorn Electron
Pipeline (board game), winner of Games Magazine's 1992 Game of the Year award
Literature
Pipeline (comics), a character from Marvel Comics with the ability to teleport himself and others
Pipeline, a 2017 play written by Dominique Morisseau
Music
"Pipeline" (instrumental), a 1963 song by the Chantays
Pipeline Music, a record label
"Pipeline", a 1983 song by Depeche Mode from the album Construction Time Again
"Pipeline", a 1984 song by the Alan Parsons Project from the album Ammonia Avenue
Pipeline, a programme of bagpipe music on BBC Radio Scotland
Other uses in arts and entertainment
Pipeline (film), a 2021 South Korean heist film
CNN Pipeline, a streaming video service by CNN

Other uses
judicial confidence. Many defendants in serious and complex fraud cases are represented by solicitors experienced in commercial litigation, including negotiation. This means that the defendant is usually protected from being put under improper pressure to plead. The main danger to be guarded against in these cases is that the prosecutor is persuaded to agree to a plea or a basis that is not in the public interest and interests of justice because it does not adequately reflect the seriousness of the offending ... Any plea agreement must reflect the seriousness and extent of the offending and give the court adequate sentencing powers. It must consider the impact of an agreement on victims and also the wider public, whilst respecting the rights of defendants.

John H. Langbein argues that the modern American system of plea bargaining is comparable to the medieval European system of torture: There is, of course, a difference between having your limbs crushed if you refuse to confess, or suffering some extra years of imprisonment if you refuse to confess, but the difference is of degree, not kind. Plea bargaining, like torture, is coercive. Like the medieval Europeans, the Americans are now operating a procedural system that engages in condemnation without adjudication.

Consequences for innocent accused

Theoretical work based on the prisoner's dilemma is one reason that, in many countries, plea bargaining is forbidden. Often, precisely the prisoner's dilemma scenario applies: it is in the interest of both suspects to confess and testify against the other suspect, irrespective of the innocence of the accused. Arguably, the worst case is when only one party is guilty: here, the innocent one has no incentive to confess, while the guilty one has a strong incentive to confess and give testimony (including false testimony) against the innocent. (A stylized illustration of this incentive structure is sketched below.)

A 2009 study by the European Association of Law and Economics observed that innocent defendants are consistently more likely than guilty defendants to reject otherwise-favorable plea proposals, because of perceived unfairness, even when it is theoretically disadvantageous to do so and the expected sanction would be worse if they proceeded to trial. The study concluded that "[t]his somewhat counterintuitive 'cost of innocence', where the preferences of innocents lead them collectively to fare worse than their guilty counterparts, is further increased by the practice of imposing much harsher sentences at trial on defendants who contest the charges. This 'trial penalty' seeks to facilitate guilty pleas by guilty defendants [...and ironically...] disproportionately, collectively, penalizes innocents, who reject on fairness grounds some offers their guilty counterparts accept."

The extent to which innocent people will accept a plea bargain and plead guilty is contentious and has been the subject of considerable research. Much research has focused on the relatively few actual cases where innocence was subsequently proven, such as successful appeals for murder and rape based upon DNA evidence, which tend to be atypical of trials as a whole (being by their nature only the most serious kinds of crime). Other studies have focused on presenting hypothetical situations to subjects and asking what choice they would make. More recently, some studies have attempted to examine the actual reactions of innocent persons generally, when faced with actual plea bargain decisions.
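As a purely illustrative sketch of the dominance reasoning described above, the following Python snippet uses hypothetical sentence lengths (assumed numbers, not drawn from any study cited here) to show why confessing can be the better reply for each suspect regardless of what the other does, even though mutual silence would leave both better off.

```python
# Hypothetical payoff matrix for two suspects, A and B.
# Values are (A's sentence, B's sentence) in years; the numbers are
# assumptions chosen only to exhibit the prisoner's-dilemma structure.
payoffs = {
    ("silent", "silent"):   (1, 1),    # weak case against both
    ("silent", "confess"):  (10, 0),   # B takes the deal and testifies against A
    ("confess", "silent"):  (0, 10),   # A takes the deal and testifies against B
    ("confess", "confess"): (5, 5),    # both take deals
}

def best_reply_for_a(b_choice):
    """A's sentence-minimizing choice, given B's choice."""
    return min(("silent", "confess"),
               key=lambda a_choice: payoffs[(a_choice, b_choice)][0])

# Confessing minimizes A's sentence whichever option B picks,
# so by symmetry both suspects are pushed toward confessing.
for b_choice in ("silent", "confess"):
    print(f"If B is {b_choice}, A's best reply is {best_reply_for_a(b_choice)}")
```

With these assumed numbers, mutual silence (one year each) would leave both suspects better off than mutual confession (five years each), which is why this structure is taken to pressure even innocent suspects toward a deal.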
A study by Dervan and Edkins (2013) attempted to recreate a controlled, real-life plea bargain situation, rather than merely asking for theoretical responses to a theoretical situation, a common approach in previous research. It placed subjects in a situation where an accusation of academic fraud (cheating) could be made, of which some subjects were in fact, by design, actually guilty (and knew this), and some were innocent but faced seemingly strong evidence of guilt and no verifiable proof of innocence. Each subject was presented with the evidence of guilt and offered a choice between facing an academic ethics board and potentially a heavy penalty in terms of extra courses and other forfeits, or admitting guilt and accepting a lighter "sentence". The study found that, as expected from court statistics, around 90% of accused subjects who were actually guilty chose to take the plea bargain and plead guilty. It also found that around 56% of subjects who were actually innocent (and privately knew it) also took the plea bargain and pleaded guilty, for reasons including avoiding formal quasi-legal processes, uncertainty, the possibility of greater harm to personal future plans, or deprivation of home environment due to remedial courses. The authors stated:

Previous research has argued that the innocence problem is minimal because defendants are risk-prone and willing to defend themselves before a tribunal. Our research, however, demonstrates that when study participants are placed in real, rather than hypothetical, bargaining situations and are presented with accurate information regarding their statistical probability of success, just as they might be so informed by their attorney or the government during a criminal plea negotiation, innocent defendants are highly risk-averse.

More pressure to plea bargain may be applied in weak cases (where there is less certainty of both guilt and jury conviction) than strong cases. Prosecutors tend to be strongly motivated by conviction rates, and "there are many indications that prosecutors are willing to go a long way to avoid losing cases, [and that] when prosecutors decide to proceed with such weak cases they are often willing to go a long way to assure that a plea bargain is struck". Prosecutors often have great power to procure a desired level of incentive, as they select the charges to be presented. For this reason,

[P]lea bargains are just as likely in strong and weak cases. Prosecutors only need to adjust the offer to the probability of conviction in order to reach an agreement. Thus, weaker cases result in more lenient plea bargains, and stronger ones in relative harshness, but both result in an agreement. [... W]hen the case is weak, the parties must rely on charge bargaining ... But [charge bargaining] is hardly an obstacle. Charge bargaining in weak cases is not the exception; it is the norm all around the country.

Thus, even if the evidence against innocent defendants is, on average, weaker, the likelihood of plea bargains is not dependent on guilt. Another situation in which an innocent defendant may plead guilty is the case of a defendant who cannot raise bail, and who is being held in custody in a jail or detention facility.
Because it may take months, or even years, for criminal cases to come to trial or even indictment in some jurisdictions, an innocent defendant who is offered a plea bargain that includes a sentence of less time than they would otherwise spend in jail awaiting an indictment or a trial may choose to accept the plea arrangement and plead guilty.

Misalignment of goals and incentives

Agency problems may arise in plea bargaining because, although the prosecutor represents the people and the defense attorney represents the defendant, these agents' goals may not be congruent with those of their principals. For example, prosecutors and defense attorneys may seek to maintain good relations with one another, creating a potential conflict with the parties they represent. A defense attorney may receive a flat fee for representing a client, or may not receive additional money for taking a case to trial, creating an incentive for the defense attorney to settle a case to increase profits or to avoid a financial loss. A prosecutor may want to maintain a high conviction rate or avoid losing high-profile trials, creating the potential that they will enter into a plea bargain that furthers their interests but reduces the potential of the prosecution and sentence to deter crime. Prosecutors may also make charging decisions that significantly affect a defendant's sentence, and may file charges or offer plea deals that cause even an innocent defendant to consider or accept a plea bargain.

Issues related to cost of justice

Another argument against plea bargaining is that it may not actually reduce the costs of administering justice. For example, if a prosecutor has only a 25% chance of winning his case and sending a defendant away to prison for 10 years, they may make a plea agreement for a sentence of one year; but if plea bargaining is unavailable, a prosecutor may drop the case completely.

Usage in common law countries

United States

Plea bargaining is a significant part of the criminal justice system in the United States; the vast majority (roughly 90%) of criminal cases in the United States are settled by plea bargain rather than by a jury trial. Plea bargains are subject to the approval of the court, and different states and jurisdictions have different rules. The Federal Sentencing Guidelines are followed in federal cases and have been created to ensure a standard of uniformity in all cases decided in the federal courts. A two- or three-level offense level reduction is usually available for those who accept responsibility by not holding the prosecution to the burden of proving its case; this usually amounts to a complete sentence reduction had they gone to trial and lost.

The Federal Rules of Criminal Procedure provide for two main types of plea agreements. An 11(c)(1)(B) agreement does not bind the court; the prosecutor's recommendation is merely advisory, and the defendant cannot withdraw their plea if the court decides to impose a sentence other than what was stipulated in the agreement. An 11(c)(1)(C) agreement, however, binds the court once the court accepts the agreement. When such an agreement is proposed, the court can reject it if it disagrees with the proposed sentence, in which case the defendant has an opportunity to withdraw their plea.
Plea bargains are so common in the Superior Courts of California (the general trial courts) that the Judicial Council of California has published an optional seven-page form (containing all mandatory advisements required by federal and state law) to help prosecutors and defense attorneys reduce such bargains to written plea agreements. Certain aspects of the American justice system serve to promote plea bargaining. For example, the adversarial nature of the U.S. criminal justice system puts judges in a passive role, in which they have no independent access to information with which to assess the strength of the case against the defendant. The prosecutor and defense may thus control the outcome of a case through plea bargaining. The court must approve a plea bargain as being within the interests of justice. The lack of compulsory prosecution also gives prosecutors greater discretion, as well as the inability of crime victims to mount a
withdrawing the remaining or more serious charges. In New South Wales, a 10–25% discount on the sentence is customarily given in exchange for an early guilty plea, but this concession is expected to be granted by the judge as a way of recognizing the utilitarian value of an early guilty plea to the justice system; it is never negotiated with a prosecutor. The courts in these jurisdictions have made it plain that they will always decide what the appropriate penalty is to be. No bargaining takes place between the prosecution and the defence over criminal penalties.

Use in civil law countries

Plea bargaining is extremely difficult in jurisdictions based on the civil law. This is because, unlike common law systems, civil law systems have no concept of plea: if the defendant confesses, the confession is entered into evidence, but the prosecution is not absolved of the duty to present a full case. A court may decide that a defendant is innocent even though they presented a full confession. Also, unlike common law systems, prosecutors in civil law countries may have limited or no power to drop or reduce charges after a case has been filed, and in some countries their power to drop or reduce charges before a case has been filed is limited, making plea bargaining impossible. Since the 1980s, many civil law nations have adapted their systems to allow for plea bargaining.

Brazil

In 2013, Brazil passed a law allowing plea bargains, which have been used in the political corruption trials taking place since then.

Central African Republic

In the Central African Republic, witchcraft carries heavy penalties, but those accused of it typically confess in exchange for a modest sentence.

China

In China, a plea bargaining pilot scheme was introduced by the Standing Committee of the National People's Congress in 2016. Defendants who face jail terms of three years or fewer, agree to plead guilty voluntarily, and agree with prosecutors' crime and sentencing proposals are given mitigated punishments.

Denmark

In 2009, in a case about whether witness testimony originating from a plea deal in the United States was admissible in a Danish criminal trial (297/2008 H), the Supreme Court of Denmark (Danish: Højesteret) unanimously ruled that plea bargains are prima facie not legal under Danish law, but that the witnesses in the particular case would be allowed to testify regardless (with the caveat that the lower court consider the possibility that the testimony was untrue or at least influenced by the benefits of the plea bargain). The Supreme Court did, however, point out that Danish law contains mechanisms similar to plea bargains, such as the provision of the Danish Penal Code (Danish: Straffeloven) which states that a sentence may be reduced if the perpetrator of a crime provides information that helps solve a crime perpetrated by others, or the provision of the Danish Competition Law (Danish: Konkurrenceloven) which states that someone can apply to avoid being fined or prosecuted for participating in a cartel if they provide information about the cartel that the authorities did not know at the time.
If a defendant admits to having committed a crime, the prosecution does not have to file charges against them, and the case can be heard as a so-called "admission case" (Danish: tilståelsessag) under the Law on the Administration of Justice (Danish: Retsplejeloven), provided that: the confession is supported by other pieces of evidence (meaning that a confession is not enough to convict someone on its own); both the defendant and the prosecutor consent to it; the court does not have any objections; and §§ 68, 69, 70 and 73 of the penal code do not apply to the case.

Estonia

In Estonia, plea bargaining was introduced in the 1990s: the penalty is reduced in exchange for a confession and avoiding most of the court proceedings. Plea bargaining is permitted for crimes punishable by no more than four years of imprisonment. Normally, a 25% reduction of the penalty is given.

France

The introduction of a limited form of plea bargaining (comparution sur reconnaissance préalable de culpabilité or CRPC, often summarized as plaider coupable) in 2004 was highly controversial in France. In this system, the public prosecutor could propose to suspects of relatively minor crimes a penalty not exceeding one year in prison; the deal, if accepted, had to be approved by a judge. Opponents, usually lawyers and leftist political parties, argued that plea bargaining would greatly infringe on the rights of defense, the long-standing constitutional right of presumption of innocence, the rights of suspects in police custody, and the right to a fair trial. For instance, Robert Badinter argued that plea bargaining would give too much power to the public prosecutor and would encourage defendants to accept a sentence only to avoid the risk of a bigger sentence in a trial, even if they did not really deserve it. Only a minority of criminal cases are settled by that method: in 2009, 77,500 of the 673,700 decisions of the correctional courts, or about 11.5%.

Georgia

Plea bargaining (Georgian: საპროცესო შეთანხმება, literally "plea agreement") was introduced in Georgia in 2004. The substance of Georgian plea bargaining is similar to that of the United States and other common law jurisdictions. A plea bargain, also called a plea agreement or negotiated plea, is an alternative and consensual way of settling a criminal case. A plea agreement means settlement of the case without a main hearing, when the defendant agrees to plead guilty in exchange for a lesser charge, a more lenient sentence, or the dismissal of certain related charges. (Article 209 of the Criminal Procedure Code of Georgia)

Defendants' rights during plea bargaining

The main principle of plea bargaining is that it must be based on the free will of the defendant, the equality of the parties, and advanced protection of the rights of the defendant. In order to prevent fraud against the defendant or insufficient consideration of his or her interests, legislation provides for the obligatory participation of defense counsel. (Article 210 of the Criminal Procedure Code of Georgia) The defendant has the right to reject the plea agreement at any stage of the criminal proceedings before the court renders its judgment. (Article 213 of the Criminal Procedure Code of Georgia) In case of refusal, it is prohibited to use information provided by the defendant under the plea agreement against him or her in the future.
(Article 214 of the Criminal Procedure Code of Georgia) The defendant has the right to appeal the judgment rendered pursuant to the plea agreement if the plea agreement was concluded by deception, coercion, violence, or the threat of violence. (Article 215 of the Criminal Procedure Code of Georgia)

Obligations of the prosecutor while concluding the plea agreement

While concluding the plea agreement, the prosecutor is obliged to take into consideration the public interest, the severity of the penalty, and the personal characteristics of the defendant. (Article 210 of the Criminal Procedure Code of Georgia) To avoid abuse of powers, legislation provides that the written consent of the supervisory prosecutor is a necessary precondition for concluding a plea agreement and for amending its provisions. (Article 210 of the Criminal Procedure Code of Georgia)

Oversight over the plea agreement

A plea agreement without the approval of the court has no legal effect. The court must satisfy itself that the plea agreement is concluded on the basis of the free will of the defendant and that the defendant fully acknowledges the essence of the plea agreement and its consequences. (Article 212 of the Criminal Procedure Code of Georgia) A guilty plea of the defendant is not enough to render a guilty judgment. (Article 212 of the Criminal Procedure Code of Georgia) Consequently, the court is obliged to discuss two issues: whether irrefutable evidence is presented which proves the defendant's guilt beyond reasonable doubt, and whether the sentence provided for in the plea agreement is legitimate. (Article 212 of the Criminal Procedure Code of Georgia) After both criteria are satisfied, the court additionally checks whether the formalities related to the legislative requirements have been followed, and only then makes its decision.

If the court finds that the evidence presented is not sufficient to support the charges, or that a motion to render a judgment without substantial consideration of a case has been submitted in violation of the requirements stipulated by the Criminal Procedure Code of Georgia, it shall return the case to the prosecution. Before returning the case to the prosecutor, the court offers the parties the opportunity to change the terms of the agreement. If the changed terms do not satisfy the court, it shall return the case to the prosecution. (Article 213 of the Criminal Procedure Code of Georgia)

If the court satisfies itself that the defendant fully acknowledges the consequences of the plea agreement, that he or she was represented by defense counsel, that his or her will was expressed in full compliance with the legislative requirements, without deception or coercion, and also that there is a sufficient body of indisputable evidence for the conviction and the agreement is reached on a legitimate sentence, the court approves the plea agreement and renders a guilty judgment. If any of the abovementioned requirements are not satisfied, the court declines to approve the plea agreement and returns the case to the prosecutor. (Article 213 of the Criminal Procedure Code of Georgia)

Role of the victim in plea agreement negotiations

The plea agreement is concluded between the parties, the prosecutor and the defendant. Notwithstanding the fact that the victim is not a party to the criminal case and the prosecutor is not a tool in the hands of the victim to obtain revenge against the offender, the attitude of the victim in relation to the plea agreement is still important.
Under Article 217 of the Criminal Procedure Code of Georgia, the prosecutor is obliged to consult the victim prior to concluding the plea agreement and to inform him or her about it. In addition, under the Guidelines of the Prosecution Service of Georgia, the prosecutor is obliged to take into consideration the interests of the victim and, as a rule, to conclude the plea agreement after the damage has been compensated.

Germany

Plea agreements have made a limited appearance in Germany. However, there is no exact equivalent of a guilty plea in German criminal procedure.

Italy

Italy has a form of bargaining, popularly known as patteggiamento, whose technical name is application of the penalty upon request of the parties. In fact, the bargaining is not about the charges but about the penalty applied in the sentence, which is reduced by up to one third. When the defendant deems that the punishment that would, concretely, be handed down is less than five years' imprisonment (or that it would just be a fine), the defendant may request a plea bargain with the prosecutor. The defendant is rewarded with a reduction of the sentence and has other advantages (such as not paying the fees of the proceeding). The defendant must accept the penalty for the charges (even if the plea-bargained sentence has some particular matters in further compensation proceedings), no matter how serious the charges are. Sometimes, the prosecutor agrees to reduce a charge or to drop some of multiple charges in exchange for the defendant's acceptance of the penalty. In the request, the defendant can negotiate with the prosecutor over the penalty and the aggravating and extenuating circumstances, and the prosecutor can accept or refuse. The request can also be made by the prosecutor. The plea bargain can be granted if the penalty that could be concretely applied is, after the reduction of one third, below five years' imprisonment (so-called patteggiamento allargato, wide bargaining); when the penalty applied,
constraints as the other idols of the consumer music industry – they last a little while and then disappear. Meanwhile, they are useful in neutralizing the innate spirit of rebellion of young people. The term 'protest song' is no longer valid because it is ambiguous and has been misused. I prefer the term 'revolutionary song'.

Nueva canción (literally "new song" in Spanish) was a type of protest/social song in Latin American music which took root in South America, especially Chile and other Andean countries, and gained great popularity throughout Latin America. It combined traditional Latin American folk music idioms (played on the quena, zampoña, charango or cajón with guitar accompaniment) with some popular (especially British) rock music, and was characterized by its progressive and often politicized lyrics. It is sometimes considered a precursor to rock en español. The lyrics are typically in Spanish, with some indigenous or local words mixed in. In 2019, "A Rapist in Your Path" was first performed in Chile to protest rape culture and victim shaming. Videos of the song and its accompanying dance went viral, spreading across the world.

Cuba

A type of Cuban protest music started in the mid-1960s, when a movement in Cuban music emerged that combined traditional folk music idioms with progressive and often politicized lyrics. This movement of protest music came to be known as "Nueva trova" and was somewhat similar to Nueva canción, though with the advantage of support from the Cuban government, as it promoted the Cuban Revolution – and was thus part of revolutionary song.

United States

Though originally and still largely Cuban, nueva trova has become popular across Latin America, especially in Puerto Rico. The movement's biggest stars included Puerto Ricans such as Roy Brown, Andrés Jiménez, Antonio Cabán Vale and the group Haciendo Punto en Otro Son. In response to Telegramgate, Puerto Rican musicians Bad Bunny, Residente, and iLe released the protest song "Afilando los cuchillos" on July 17, 2019. It is a diss track calling for the resignation of Ricardo Rosselló.

Asia

Mainland China

Chinese-Korean singer Cui Jian's 1986 song "Nothing to My Name" was popular with protesters in Tiananmen Square.

Hong Kong

Hong Kong rock band Beyond's "Boundless Oceans Vast Skies" (1993) and "Glory Days" (光輝歲月) (1990) have been considered protest anthems in various social movements. During the 2019–20 Hong Kong protests, Les Misérables' "Do You Hear the People Sing" (1980) and Thomas dgx yhl's "Glory to Hong Kong" (2019) were sung in support of the movement. The latter has been widely adopted as the anthem of these protests, with some even regarding it as the "national anthem of Hong Kong".

India

Cultural activism in India has been considered one of the most effective tools for mobilising people towards social change since pre-independence times. India provided many examples of protest songs throughout its struggle for freedom from Britain. Indian rapper Raftaar's "Mantoiyat" lashes out at corrupt politicians and police and brings to light injustices that plague the country. In the song he talks about deep-rooted issues and exposes the hypocrisy of the people and the government. Artists such as Poojan Sahil, Seedhe Maut, Vishkyun, Prabh Deep, Rapper Shaz, Sumit Roy and Ahmer usually talk about social issues in their songs.
The rock fusion band Indian Ocean's song "Chitu" was one of their first and most prominent songs, a tribal anthem that Ram had come across over the course of being involved in the Narmada Movement. In 2019, India's citizenship law led to mass protests all over the country. Artists like Varun Grover, Poojan Sahil, Rapper Shaz and Madara joined the cause with their own sonic protest. In more contemporary times, protest music has been a regular feature of movements in India. The Dalit rights movement especially uses music to further its goals. The Kabir Kala Manch is one such well-known troupe of singers who used their performances to raise awareness and support for their cause. The widely acclaimed documentary film Jai Bhim Comrade highlighted the work of Kabir Kala Manch and presented this form of protest music to both Indian and international audiences. Similar, albeit less known, Dalit musical groups exist in various parts of India. The leftist movements of India too use protest music, along with street plays, as a means to propagate their message among the masses. Protest music was a big feature of plays organized by the Indian People's Theatre Association (IPTA). Similar organisations formed after the break-up of IPTA and highly influenced by its work, such as the Jana Natya Manch (JANAM), also made protest music a regular feature of their plays. In recent decades, however, the Left's cultural activism has increasingly been relegated to the margins of the cultural sphere. Some attribute this to the political decline of the mainstream Left in India, as well as a shift in focus to local movements and languages as identity politics took a greater hold of Indian polity. Protest music also features regularly in protests held by other mainstream national parties of India.

Myanmar (Burma)

During the 8888 Uprising, Naing Myanmar, a Burmese composer, penned "Kabar Makyay Bu" (ကမ္ဘာမကျေဘူး), rendered in English as "We Won't Be Satisfied till the End of the World", as a protest song. Set to the tune of Kansas' "Dust in the Wind", the song quickly gained popularity across the country as an emotional appeal for freedom. The song was recorded and distributed on cassette tapes, reaching millions of Burmese and eventually becoming an anthem of the 8888 Uprising. In the aftermath of the 2021 Myanmar coup d'état, the country's nascent civil disobedience movement has revitalized this song, performing it during protests and acts of civil disobedience.

Pakistan

Protest music in Pakistan has been deeply inspired by South Asian traditions since pre-independence times. The song "Hum Dekhenge" is just one example of protest music from Pakistan. Faiz Ahmed Faiz, a poet and a prominent Pakistani Marxist, originally penned the poem of the same title as a response to General Zia ul Haq's repressive dictatorship. The poem is considered a critical commentary on Zia's brand of authoritarian Islam. His political beliefs set him up as a natural critic of General Zia ul Haq. In 1985, as part of Zia's programme of forced Islamicization, the sari, part of the traditional attire for women on the subcontinent, was banned. That year, Iqbal Bano, one of Pakistan's best-loved singers and artists, wearing a black sari, sang "Hum Dekhenge" to an audience of 50,000 people in a Lahore stadium. The recording was smuggled out and distributed on bootleg cassette tapes across the country. Cries of "Inquilab Zindabad" ("Long Live Revolution") and thunderous applause from the audience can be heard on the recording. Faiz was in prison at the time.
Since the fall of the Zia dictatorship, the song has regularly featured in protests in Pakistan. More recently, a newer rendition of the song by Pakistani singer Rahat Fateh Ali Khan was used as the title song of the political party Pakistan Tehreek-e-Insaf in the 2013 Pakistani general election, and in the Azadi march of 2014. The anthem "Girti hui deewaron ko aik dhakka aur do" by the famous poet Ali Arshad Mir, created in the 1970s, found a prominent place in various protests. This revolutionary anthem is still used in resistance movements against oppressive political regimes and failing institutions, by politicians and common people alike.

Philippines

From the revolutionary songs of the Katipunan to the songs sung by the New People's Army, Filipino protest music deals with poverty and oppression as well as anti-imperialism and independence. A typical example comes from the American era, when Jose Corazon de Jesus wrote the well-known protest song "Bayan Ko", which calls for redeeming the nation from oppression, mainly colonialism, and which also became popular as a song against the Marcos regime. During the 1960s, Filipino protest music became aligned with the ideas of Communism as well as of revolution. The protest song "Ang Linyang Masa" drew on Mao Zedong and his mass line, and "Papuri sa Pag-aaral" drew on Bertolt Brecht. These songs, although Filipinized, became another part of Filipino protest music, known as revolutionary songs, which grew popular during protests and campaign struggles.

South Korea

Protest songs in South Korea are commonly known as Minjung gayo (literally "people's song"), and the protest-song movement is called "Norae undong", literally "song movement". It was taken up in the 1970s and 1980s in opposition to the military governments of presidents Park Chung-hee and Chun Doo-hwan. Minjung gayo (Hangul: 민중가요; Hanja: 民衆歌謠) is a part of modern Korean singing culture which has been used as a musical means of the pro-democracy movement. It was mainly sung by people critical of mainstream song culture during the democratization movement. The term Minjung gayo was coined naturally by people in the mid-1980s. Since this was the period when protest songs grew rapidly and the singing movement began, a new term was needed to differentiate them from popular songs. In a broad sense, Minjung gayo includes the anti-Japanese songs of the Japanese colonial era, a tradition that continued into the early 1970s. Generally, however, Minjung gayo means the culture that matured in the late 1970s and lasted into the 1990s.

The concept of Korean protest songs (Minjung gayo)

The Korean protest song, Minjung gayo, reflects the will of the crowd and the critical voices of the day. Korean protest songs emerged in the 1980s, especially before and after the June Democracy Movement of 1987.

Korean protest songs before the 1980s

The starting point of Korean protest songs was the music culture of the Korean student movements around 1970. By criticizing pop music and seeking to move beyond it, students began to form their own distinctive music culture, with its own devoted audience and its own ways of existing, distinguished from pop culture. A few songs known as 'Demo-ga' (demonstration songs) and others from the 1960s came to be regarded as Minjung gayo. These include 'Haebang-ga' (Hangul: 해방가), 'Tana-Tana', 'Barami-bunda' (Hangul: 바람이 분다), 'Stenka Razin', and so on.
After 1975, other songs such as 'Hula-song' and 'Jungui-ga' were added to the list. During the era of emergency measures, the atmosphere in Korean universities grew stiffer. Students who participated in the student movements had to be prepared to die, and they were required to have much stronger faith and to take stronger actions. Students who participated in the student movements became critics of the old social systems and of pop culture. Regarding pop culture as a product of the old social system, they started to pursue a progressive and political culture. As criticism of pop music spread, a distinct music culture carrying the critique of university students was established, and it became the base of Korean protest songs.

Korean protest songs in the 1980s

The short 'Spring of Democracy' before May 1980, which came after the 10.26 incident of 1979, was a major opportunity for the protest songs previously known only to a few students to be shown to many students at public demonstrations. The organizers of demonstrations distributed papers on which the lyrics and sheet music were written, and in this period most demonstrations began by setting the atmosphere through learning the songs. The mainstream of Korean protest songs in the 1980s can be divided into three periods. The first period is the establishment of the protest songs: many songs composed as marching songs in a minor key, like "The March for Her" (Hangul: 임을 위한 행진곡), were written, and the number of songs increased massively from 1980 to 1984. The second period started with young people fresh out of college who had been engaged in music clubs. They performed a concert of the song-story "Eggplant Flower" (Hangul: 가지꽃) in the Aeogae little theater under the borrowed name of the theatre group "Handurae" (Hangul: 한두레). In this period, music took on a part in the social movement. The third period came after the democratic uprising in June 1987 and the first regular performance of 'People Seeking Music', held in the Korean Church 100th Anniversary Memorial Hall in October of that same year, after the great labor conflict of July, August, and September 1987. In this period, they were trying to figure out how to overcome the limits of the music movement in the universities and to find the new paths it should take. After the great labor conflict of July to September, protest songs reflected the joys and sorrows of workers. After going through this period, protest songs embraced not only intellectuals but also the working-class population.

Korean protest songs after the 1990s

From the middle of the 1990s, as the social voice of the student and labor demonstrations declined, Korean protest songs lost their popularity in most fields outside the scenes of struggle. This is the period when music groups in the universities and professional cultural demonstration groups started trying to change the form of Korean protest songs and to try new things. It was not easy to turn such a standardized form of music into a new wave. In the 2000s, from the memorial candlelight demonstrations for the middle-school girls who were killed by a U.S. Army tank to the demonstrations against importing beef from the U.S. during the mad cow disease scare, a participatory demonstration culture began to take hold. In this period, songs without such a solemn atmosphere, like 'Fucking USA' and 'The First Korean Constitution', were made, but their influence could not spread widely and stayed only within the movement.
Representative artist Kim Min-ki Born in Iksan-si, Jeonlabuk-do, he moved to Seoul before he entered primary school. He formed a group named 'Dobidoo' when he was in Seoul University and started writing music. At that time, he met Heeuen Yang, who had been his primary school friend, and gave her the song 'Morning Dew'<아침 이슬>, which was released in 1970. In 1972, he was arrested for singing songs such as 'We Will Win'<우리 승리하리라>, 'Freedom Song'<해방가>, 'The Child Blowing Flowers' <꽃피우는 아이> and so on, and all of his songs were banned from broadcasting. After he got out of the Korean army, he wrote Heeuen Yang's song 'Like the Pine Needles in Wild Grassland' <거치른 들판의 푸르른 솔잎처럼> and also wrote the song 'The Light in the Factory' <공장의 불빛>. 'People who're finding songs' 'People who're finding songs'[노래를 찾는 사람들] is the music group writing Korean protest songs in 1980s–1990s (known as 'Nochatsa' [노찾사]). There were many demonstrations against the Korean military dictatorship around Korean universities in the 1980s, and since then, many protest songs have been written by the students in those universities. Korean protest songs [hangul: 민중가요 Minjung-gayo] reflected the reality of the period different from typical love songs so they wouldn't expect commercial success from the songs. However, the group's albums were actually commercially successful and have left footprints in Korean pop music history. 'Meari' from Seoul University, 'Norea-erl' from Korea University, 'Hansori' from Ehwa women University, 'Sori-sarang' from Sungkyunkwan University, etc. were participated in the group. Taiwan "Island's Sunrise" (Chinese: 島嶼天光) is the theme song of 2014 Sunflower Student Movement in Taiwan. Also, the theme song of Lan Ling Wang TV drama series Into The Array Song (Chinese: 入陣曲), sung by Mayday, expressed all the social and political controversies during Taiwan under the president Ma Ying-jeou administration. Thailand In Thailand, protest songs are known as Phleng phuea chiwit (, ; lit. "songs for life"), a music genre that originated in the '70s, by famous artists such as Caravan, Carabao, Pongthep Kradonchamnan and Pongsit Kamphee. Europe Belarus The first famous Belarusian protest songs were created at the beginning of the 20th century during the rise of the Belarusian People's Republic and war for independence from the Russian Empire and Soviet Russia. This period includes such protest songs as "Advieku My Spali" ("We've slept enough", also known as Belarusian Marselliese) and "Vajaćki Marš" ("March of the Warriors"), which was an anthem of the Belarusian People's Republic. The next period of protest songs was in the 1990s, with many created by such bands as NRM, Novaje Nieba and others, which led to the unspoken prohibition of these musicians. As an example, Lavon Volski, frontman of NRM, Mroja and Krambambulia, had issues with officials at the majority of his concert due to the criticism of the Belarusian political system. One of the most famous bands of Belarus, Lyapis Trubetskoy, was forbidden from performing in the country due to being critical of Aleksandr Lukashenka in his lyrics. These prohibitions lead most "forbidden" bands to organize concerts in Vilnius, which, though situated in modern Lithuania, is considered to be a Belarusian historical capital because less than a hundred years ago most dwellers of Vilnius (Vilnia, as it was called before it was given to Lithuania) were Belarusians. 
But in the middle of the 2010s, the situation began to change a bit and many protest bands started to organize concerts in Belarus. Britain and Ireland Early British protest songs English folk songs from the late medieval and early modern period reflect the social upheavals of their day. In 1944 the Marxist scholar A. L. Lloyd claimed that "The Cutty Wren" song constituted a coded anthem against feudal oppression and actually dated back to the English peasants' revolt of 1381, making it the oldest extant European protest song. He offered no evidence for his assertion, however and no trace of the song has been found before the 18th century. Despite Lloyd's dubious claim about its origins, however, the "Cutty Wren" was revived and used as a protest song in the 1950s folk revival, an example of what may be considered a protest song. In contrast, the rhyme, "When Adam delved and Eve span, who was then the gentleman?", is attested as authentically originating in the 1381 Peasant Revolt, though no tune associated with it has survived. Ballads celebrating social bandits like Robin Hood, from the 14th century onwards, can be seen as expressions of a desire for social justice, though although social criticism is implied and there is no overt questioning of the status quo. The era of civil and religious wars of the 17th century in Britain gave rise to the radical communistic millenarian Levellers and Diggers' movements and their associated ballads and hymns, as, for example, the "Diggers' Song". with the incendiary verse: <blockquote><poem> But the Gentry must come down, and the poor shall wear the crown. Stand up now, Diggers all!</poem></blockquote> The Digger movement was violently crushed, and so it is not surprising if few overt protest songs associated with it have survived. From roughly the same period, however, songs protesting wars and the human suffering they inflict abound, though such song do not generally explicitly condemn the wars or the leaders who wage them. For example, "The Maunding Souldier" or "The Fruits of Warre is Beggery", framed as a begging appeal from a crippled soldier of the Thirty Years War. Such songs have been known, strictly speaking, as songs of complaint rather than of protest, since they offered no solution or hint of rebellion against the status quo. The advent of industrialization in the 18th and early 19th centuries was accompanied by a series of protest movements and a corresponding increase in the number of topical social protest songs and ballads. An important example is "The Triumph of General Ludd", which built a fictional persona for the alleged leader of the early 19th century anti-technological Luddite movement in the cloth industry of the north midlands, and which made explicit reference to the Robin Hood tradition. A surprising English folk hero immortalized in song is Napoleon Bonaparte, the military figure most often the subject of popular ballads, many of them treating him as the champion of the common working man in songs such as the "Bonny Bunch of Roses" and "Napoleon's Dream". As labour became more organized songs were used as anthems and propaganda, for miners with songs such as "The Black Leg Miner", and for factory workers with songs such as "The Factory Bell". These industrial protest songs were largely ignored during the first English folk revival of the later 19th and early 20th century, which had focused on songs that had been collected in rural areas where they were still being sung and on music education. 
They were revived in the 1960s and performed by figures such as A. L. Lloyd on his album The Iron Muse (1963). In the 1980s the anarchist rock band Chumbawamba recorded several versions of traditional English protest songs as English Rebel Songs 1381–1914. See also Beef and Butt Beer 20th century Colin Irwin, a journalist for The Guardian, believes the modern British protest movement started in 1958 when the Campaign for Nuclear Disarmament organized a 53-mile march from Trafalgar Square to Aldermaston, to protest Britain's participation in the arms race and recent testing of the H-bomb. The protest "fired up young musicians to write campaigning new songs to argue the case against the bomb and whip up support along the way. Suddenly many of those in skiffle groups playing American songs were changing course and writing fierce topical songs to back direct action." A song composed for the march, "The H-Bomb's Thunder", set the words of a poem by novelist John Brunner to the tune of "Miner's Lifeguard": Men and women, stand together Do not heed the men of war Make your minds up now or never Ban the bomb for evermore. Folk singer Ewan MacColl was for some time one of the principal musical figures of the British nuclear disarmament movement. A former agitprop actor and playwright. MacColl, a prolific songwriter and committed leftist, some years earlier had penned "The Ballad of Ho Chi Minh" (1953), issued as single on Topic Records, and "The Ballad of Stalin" (1954), commemorating the death of that leader. Neither record has ever been reissued. According to Irwin, MacColl, when interviewed in the Daily Worker in 1958, declared that:There are now more new songs being written than at any other time in the past eighty years—young people are finding out for themselves that folk songs are tailor-made for expressing their thoughts and comments on contemporary topics, dreams, and worries, In 1965, folk-rock singer Donovan's cover of Buffy Sainte-Marie's "Universal Soldier" was a hit on the charts. His anti-Vietnam War song "The War Drags On" appeared that same year. This was a common trend in popular music of the 1960s and 1970s. The romantic lyrics of pop songs in the 1950s gave way to words of protest. As their fame and prestige increased in the late 1960s, The Beatles—and John Lennon in particular—added their voices to the Anti-war. In the documentary The US Versus John Lennon, Tariq Ali attributes the Beatles' activism to the fact that, in his opinion, "The whole culture had been radicalized: [Lennon] was engaged with the world, and the world was changing him." "Revolution", 1968, commemorated the worldwide student uprisings. In 1969, when Lennon and Yoko Ono were married, they staged a week-long "bed-in for peace" in the Amsterdam Hilton, attracting worldwide media coverage. At the second "Bed-in" in Montreal, in June 1969, they recorded "Give Peace a Chance" in their hotel room. The song was sung by over half a million demonstrators in Washington, DC, at the second Vietnam Moratorium Day, on October 15, 1969. In 1972 Lennon's most controversial protest song LP was released, Some Time in New York City, the title of whose lead single "Woman Is the Nigger of the World", a phrase coined by Ono in the late 1960s to protest sexism, set off a storm of controversy, and in consequence received little airplay and much banning. 
The Lennons went to great lengths (including a press conference attended by staff from Jet and Ebony magazines) to explain that they had used the word nigger in a symbolic sense and not as an affront to African Americans. The album also included "Attica State", about the Attica Prison riots of September 9, 1971; "Sunday Bloody Sunday" and "The Luck Of The Irish", about the massacre of demonstrators in Northern Ireland and "Angela", in support of black activist Angela Davis. Lennon also performed at the "Free John Sinclair" benefit concert in Ann Arbor, Michigan, on December 10, 1971. on behalf of the imprisoned antiwar activist and poet who was serving 10 years in state prison for selling two joints of marijuana to an undercover cop. On this occasion Lennon and Ono appeared on stage with among others singers Phil Ochs and Stevie Wonder, plus antiwar activists Jerry Rubin and Bobby Seale of the Black Panthers party. Lennon's song "John Sinclair" (which can be heard on his Some Time in New York City album), calls on the authorities to "Let him be, set him free, let him be like you and me". The benefit was attended by some 20,000 people, and three days later the State of Michigan released Sinclair from prison. The 1970s saw a number of notable songs by British acts that protested against war, including "Peace Train" by Cat Stevens (1971), and "War Pigs" by Black Sabbath (1970). Sabbath also protested environmental destruction, describing people leaving a ruined Earth ("Into the Void" including, "Iron Man"). Renaissance added political repression as a protest theme with "Mother Russia" being based on One Day in the Life of Ivan Denisovich and being joined on the second side of their 1974 album Turn of the Cards by two other protest songs in "Cold Is Being" (about ecological destruction) and "Black Flame" (about the Vietnam War). As the 1970s progressed, the louder, more aggressive Punk movement became the strongest voice of protest, particularly in the UK, featuring anti-war, anti-state, and anti-capitalist themes. The punk culture, in stark contrast with the 1960s' sense of power through union, concerned itself with individual freedom, often incorporating concepts of individualism, free thought and even anarchism. According to Search and Destroy founder V. Vale, "Punk was a total cultural revolt. It was a hardcore confrontation with the black side of history and culture, right-wing imagery, sexual taboos, a delving into it that had never been done before by any generation in such a thorough way." The most significant protest songs of the movement included "God Save the Queen" (1977) by the Sex Pistols, "If the Kids are United" by Sham 69, "Career Opportunities" (1977) (protesting the political and economic situation in England at the time, especially the lack of jobs available to the youth), and "White Riot" (1977) (about class economics and race issues) by The Clash, and "Right to Work" by Chelsea. See also Punk ideology. War was still the prevalent theme of British protest songs of the 1980s – such as Kate Bush's "Army Dreamers" (1980), which deals with the traumas of a mother whose son dies while away at war. Indeed the early 1980s was a remarkable period for anti-nuclear and anti-war UK political pop, much of it inspired directly or indirectly by the punk movement: 1980 saw '22 such Top 75 hits, by 18 different artists. For almost th[at] entire year ... 
(47 weeks), the UK singles charts contained at least one hit song that spoke of antiwar or antinuclear concerns, and usually more than one.' Further George McKay argues that 'it really is quite extraordinary to note that one-third of the year 1984 (17 weeks) had some kind of political pop song at the top of the British charts. Viewed from that lofty perspective, 1984 must be seen as a peak protest music time in Britain, most of it in the context of antiwar and antinuclear sentiment.' However, as the 1980s progressed, it was British prime minister Margaret Thatcher who came under the greatest degree of criticism from native protest singers, mostly for her strong stance against trade unions, and especially for her handling of the UK miners' strike (1984–1985). The leading voice of protest in Thatcherite Britain in the 1980s was Billy Bragg, whose style of protest song and grass-roots political activism was mostly reminiscent of those of Woody Guthrie, however with themes that were relevant to the contemporary Briton. He summarized his stance in "Between the Wars" (1985), in which he sings: "I'll give my consent to any government that does not deny a man a living wage." Also in the 1980s the band Frankie Goes to Hollywood released a political pop protest song Two Tribes a relentless bass-driven track depicting the futility and starkness of nuclear weapons and the Cold War. The video for the song depicted a wrestling match between then-President Ronald Reagan and then-Soviet leader Konstantin Chernenko for the benefit of group members and an eagerly belligerent assembly of representatives from the world's nations, the event ultimately degenerating into complete global destruction. This video was played several times at the 1984 Democratic National Convention. Due to some violent scenes ("Reagan" biting "Chernenko"'s ear, etc.), the unedited video could not be shown on MTV, and an edited version was substituted. The single quickly hit the number one spot in the United Kingdom. Several mixes of the track feature actor Patrick Allen, who recreated his narration from the Protect and Survive public information films for certain 12-inch mixes (the original Protect and Survive soundtracks were sampled for the 7-inch mixes). Irish rebel songs Irish rebel music is a subgenre of Irish folk music, played on typically Irish instruments (such as the Fiddle, tin whistle, Uilleann pipes, accordion, bodhrán etc.) and acoustic guitars. The lyrics deal with the fight for Irish independence, people who were involved in liberation movements, the persecution and violence during Northern Ireland's Troubles and the history of Ireland's numerous rebellions. Among the many examples of the genre, some of the most famous are "A Nation Once Again", "Come out Ye Black and Tans", "Erin go Bragh", "The Fields of Athenry", "The Men Behind the Wire" and the Republic of Ireland's national anthem "Amhrán na bhFiann" ("The Soldier's Song"). Music of this genre has often courted controversy, and some of the more outwardly anti-British songs have been effectively banned from the airwaves in both England and the Republic of Ireland. Paul McCartney also made a contribution to the genre with his 1972 single "Give Ireland Back to the Irish", which he wrote as a reaction to Bloody Sunday in Northern Ireland on January 30, 1972. The song also faced an all-out ban in the UK, and has never been re-released or appeared on any Paul McCartney or Wings best-ofs. 
The same year McCartney's former colleague John Lennon released two protest songs concerning the hardships of war-torn Northern Ireland: "Sunday Bloody Sunday", written shortly after the 1972 massacre of Irish civil rights activists (which differs from U2's 1983 song of the same title in that it directly supports the Irish Republican cause and does not call for peace), and "The Luck of the Irish", both from his
in a Lahore stadium wearing a black sari. The recording was smuggled out of the country and distributed on bootleg cassette tapes. Cries of "Inquilab Zindabad" ("Long Live Revolution") and thunderous applause from the audience can be heard on the recording. Faiz was in prison at the time. Since the fall of the Zia dictatorship, the song has regularly featured in protests in Pakistan. More recently, a newer rendition by the Pakistani singer Rahat Fateh Ali Khan was used as the title song of the political party Pakistan Tehreek-e-Insaf in the 2013 Pakistani general election and in the Azadi march of 2014. The protest anthem "Girti hui deewaron ko aik dhakka aur do", written in the 1970s by the poet Ali Arshad Mir, has also found a prominent place in various protests and remains in use by politicians and ordinary people alike in resistance movements against oppressive political regimes and failing institutions.

Philippines
From the revolutionary songs of the Katipunan to the songs sung by the New People's Army, Filipino protest music deals with poverty and oppression as well as anti-imperialism and independence. A typical example comes from the American era, when Jose Corazon de Jesus wrote the well-known protest song "Bayan Ko", which calls for redeeming the nation from oppression, mainly colonialism, and which later became popular as a song against the Marcos regime. During the 1960s, Filipino protest music became aligned with the ideas of Communism and revolution: the protest song "Ang Linyang Masa" drew on Mao Zedong and his Mass Line, and "Papuri sa Pag-aaral" on Bertolt Brecht. These songs, although Filipinized, became another strand of Filipino protest music, known as revolutionary songs, that grew popular during protests and campaign struggles.

South Korea
Protest songs in South Korea are commonly known as Minjung Gayo (민중가요; Hanja: 民衆歌謠, literally "people's song"), and the protest-song movement is called Norae Undong, literally "song movement". The genre arose among people in the 1970s and 1980s in opposition to the military governments of presidents Park Chung-hee and Chun Doo-hwan. Minjung Gayo is a strand of modern Korean singing culture that has served as a musical vehicle of the pro-democracy movement, sung mainly by people critical of mainstream song culture during the democratization movement. The term was coined informally in the mid-1980s: as protest songs multiplied rapidly and the singing movement took shape, a new term was needed to distinguish them from popular songs. In a broad sense, Minjung Gayo includes the anti-Japanese songs of the Japanese colonial era, a tradition that continued into the early 1970s; more commonly, however, it refers to the culture that matured in the late 1970s and lasted into the 1990s.

The concept of Korean protest songs (Minjung Gayo)
Korean protest songs reflect the will of the crowd and the critical voices of their day. They emerged in the 1980s, especially before and after the June Democracy Movement of 1987.

Korean protest songs before the 1980s
The starting point of Korean protest songs was the music culture of the Korean student movements around 1970.
Out of criticism of pop music, and as an attempt to move beyond it, students developed a musical culture of their own, with its own audience and its own ways of circulating, distinct from pop culture. A few songs from the 1960s known as "demo-ga" (demonstration songs) came to be counted as Minjung Gayo, among them "Haebang-ga" (해방가), "Tana-Tana", "Barami-bunda" (바람이 분다) and "Stenka Razin". After 1975, further songs such as "Hula-song" and "Jungui-ga" were added to the repertoire. During the era of emergency decrees, the atmosphere at Korean universities grew more rigid: students who took part in the student movements had to be prepared to die, and stronger conviction and action were demanded of them. These students became critics of the old social order and of pop culture, which they regarded as a product of that order, and began to pursue a progressive, political culture of their own. As criticism of pop music spread, a distinct musical culture carrying the critical outlook of university students took shape, and this became the basis of Korean protest songs.

Korean protest songs in the 1980s
The brief "Spring of Democracy" before May 1980, which followed the 10.26 incident of 1979, was a major opportunity to bring the protest songs previously kept alive by a few students to a wider student audience at public demonstrations. Organizers handed out sheets with lyrics and music at successive demonstrations, and in this period most demonstrations began by setting the mood with the learning of songs. The mainstream of Korean protest songs in the 1980s can be divided into three periods. The first was the establishment of the protest song: from 1980 to 1984 the number of songs increased massively, with many written as marching songs in minor keys, such as "The March for Her" (임을 위한 행진곡). The second period began with young people fresh out of college who had been active in music clubs; borrowing the name of the theatre troupe Handurae (한두레), they staged a concert telling the story of the song "Eggplant Flower" (가지꽃) at the Aeogae little theatre. In this period, music became part of the social movement. The third period came after the June Democracy Movement of 1987 and the first regular concert of the group People Seeking Songs, held at the Korean Church Centennial Memorial Hall in October of that year, following the great labour struggles of July, August and September 1987. In this period, musicians tried to work out how to overcome the limits of the university music movement and to find new directions. After the great labour struggles of July to September, protest songs came to reflect the joys and sorrows of workers, and the genre embraced not only intellectuals but also the working class.

Korean protest songs after the 1990s
From the mid-1990s, as the social voice of student and labour demonstrations waned, Korean protest songs lost their popularity outside the scenes of struggle. In this period, university music groups and professional cultural-activist groups tried to change the form of Korean protest songs and to experiment with new approaches, but it was not easy to turn such a settled form of music in a new direction.
In the 2000s, a culture of participatory demonstration took hold, from the candlelight vigils for the two middle-school girls killed by a U.S. Army armoured vehicle to the demonstrations against the import of U.S. beef during the mad cow disease scare. In this period songs without the old solemn atmosphere, such as "Fucking USA" and "The First Korean Constitution", were written, but their influence did not spread widely and remained confined to the protest scene.

Representative artists
Kim Min-ki
Born in Iksan, Jeollabuk-do, Kim Min-ki moved to Seoul before entering primary school. While at Seoul National University he formed a group named Dobidoo and began writing music. Around that time he met Yang Hee-eun, a primary-school friend, and gave her the song "Morning Dew" (아침 이슬), which was released in 1970. In 1972 he was arrested for singing songs such as "We Will Win" (우리 승리하리라), "Freedom Song" (해방가) and "The Child Blowing Flowers" (꽃피우는 아이), and all of his songs were banned from broadcast. After leaving the Korean army he wrote "Like the Pine Needles in the Wild Grassland" (거치른 들판의 푸르른 솔잎처럼) for Yang Hee-eun, as well as the song "The Light in the Factory" (공장의 불빛).

People Seeking Songs
People Seeking Songs (노래를 찾는 사람들, known as Nochatsa, 노찾사) was a group writing Korean protest songs in the 1980s and 1990s. There were many demonstrations against the Korean military dictatorship around Korean universities in the 1980s, and many protest songs were written by the students of those universities. Because Korean protest songs (민중가요, Minjung Gayo) reflected the realities of the period rather than the themes of typical love songs, no commercial success was expected of them; nevertheless, the group's albums sold well and have left their mark on Korean pop music history. Members came from university song clubs such as Meari (Seoul National University), Norea-erl (Korea University), Hansori (Ewha Womans University) and Sori-sarang (Sungkyunkwan University).

Taiwan
"Island's Sunrise" (島嶼天光) is the theme song of the 2014 Sunflower Student Movement in Taiwan. The theme song of the television drama Lan Ling Wang, "Into the Array Song" (入陣曲), sung by Mayday, also addressed the social and political controversies in Taiwan under the administration of President Ma Ying-jeou.

Thailand
In Thailand, protest songs are known as phleng phuea chiwit (lit. "songs for life"), a genre that originated in the 1970s with artists such as Caravan, Carabao, Pongthep Kradonchamnan and Pongsit Kamphee.

Europe
Belarus
The first famous Belarusian protest songs were created at the beginning of the 20th century, during the rise of the Belarusian People's Republic and the war for independence from the Russian Empire and Soviet Russia. This period produced such songs as "Advieku My Spali" ("We've Slept Enough", also known as the Belarusian Marseillaise) and "Vajaćki Marš" ("March of the Warriors"), which was an anthem of the Belarusian People's Republic. The next wave of protest songs came in the 1990s, with many written by bands such as NRM and Novaje Nieba, and led to an unspoken prohibition of these musicians. Lavon Volski, frontman of NRM, Mroja and Krambambulia, for example, had trouble with officials at most of his concerts because of his criticism of the Belarusian political system.
One of the most famous Belarusian bands, Lyapis Trubetskoy, was forbidden from performing in the country because of lyrics critical of Aleksandr Lukashenka. These prohibitions led most "forbidden" bands to organize concerts in Vilnius, which, though situated in modern Lithuania, is considered by many to be a Belarusian historical capital, since less than a hundred years ago most inhabitants of Vilnius (Vilnia, as it was called before it was transferred to Lithuania) were Belarusians. By the mid-2010s, however, the situation had begun to change somewhat, and many protest bands started to organize concerts in Belarus.

Britain and Ireland
Early British protest songs
English folk songs from the late medieval and early modern period reflect the social upheavals of their day. In 1944 the Marxist scholar A. L. Lloyd claimed that "The Cutty Wren" constituted a coded anthem against feudal oppression and dated back to the English Peasants' Revolt of 1381, which would make it the oldest extant European protest song. He offered no evidence for this assertion, however, and no trace of the song has been found before the 18th century. Despite Lloyd's dubious claim about its origins, "The Cutty Wren" was revived and used as a protest song in the 1950s folk revival. In contrast, the rhyme "When Adam delved and Eve span, who was then the gentleman?" is attested as authentically originating in the 1381 Peasants' Revolt, though no tune associated with it has survived. Ballads celebrating social bandits like Robin Hood, from the 14th century onwards, can be seen as expressions of a desire for social justice, but while social criticism is implied, there is no overt questioning of the status quo. The era of civil and religious wars in 17th-century Britain gave rise to the radical, communistic, millenarian Leveller and Digger movements and their associated ballads and hymns, for example the "Diggers' Song", with its incendiary verse:

But the Gentry must come down, and the poor shall wear the crown.
Stand up now, Diggers all!

The Digger movement was violently crushed, so it is not surprising that few overt protest songs associated with it have survived. From roughly the same period, however, songs protesting wars and the human suffering they inflict abound, though such songs do not generally condemn the wars themselves or the leaders who waged them; an example is "The Maunding Souldier, or The Fruits of Warre is Beggery", framed as a begging appeal from a crippled soldier of the Thirty Years' War. Such songs are, strictly speaking, songs of complaint rather than of protest, since they offer no solution or hint of rebellion against the status quo. The advent of industrialization in the 18th and early 19th centuries was accompanied by a series of protest movements and a corresponding increase in topical songs and ballads of social protest. An important example is "The Triumph of General Ludd", which built a fictional persona for the supposed leader of the early 19th-century anti-technological Luddite movement in the cloth industry of the north Midlands, and which made explicit reference to the Robin Hood tradition.
A surprising English folk hero immortalized in song is Napoleon Bonaparte, the military figure most often the subject of popular ballads, many of which treat him as the champion of the common working man, as in the "Bonny Bunch of Roses" and "Napoleon's Dream". As labour became more organized, songs were used as anthems and propaganda: for miners, songs such as "The Blackleg Miner", and for factory workers, songs such as "The Factory Bell". These industrial protest songs were largely ignored during the first English folk revival of the later 19th and early 20th century, which focused on songs collected in rural areas where they were still being sung, and on music education. They were revived in the 1960s and performed by figures such as A. L. Lloyd on his album The Iron Muse (1963). In the 1980s the anarchist rock band Chumbawamba recorded several traditional English protest songs on English Rebel Songs 1381–1914.

20th century
Colin Irwin, a journalist for The Guardian, believes the modern British protest movement started in 1958, when the Campaign for Nuclear Disarmament organized a 53-mile march from Trafalgar Square to Aldermaston to protest Britain's participation in the arms race and its recent testing of the H-bomb. The protest "fired up young musicians to write campaigning new songs to argue the case against the bomb and whip up support along the way. Suddenly many of those in skiffle groups playing American songs were changing course and writing fierce topical songs to back direct action." A song composed for the march, "The H-Bomb's Thunder", set the words of a poem by the novelist John Brunner to the tune of "Miner's Lifeguard":

Men and women, stand together
Do not heed the men of war
Make your minds up now or never
Ban the bomb for evermore.

Folk singer Ewan MacColl was for some time one of the principal musical figures of the British nuclear disarmament movement. A former agitprop actor and playwright, MacColl was a prolific songwriter and committed leftist who some years earlier had penned "The Ballad of Ho Chi Minh" (1953), issued as a single on Topic Records, and "The Ballad of Stalin" (1954), commemorating the death of that leader; neither record has ever been reissued. According to Irwin, MacColl, interviewed in the Daily Worker in 1958, declared: "There are now more new songs being written than at any other time in the past eighty years—young people are finding out for themselves that folk songs are tailor-made for expressing their thoughts and comments on contemporary topics, dreams, and worries."

In 1965, folk-rock singer Donovan's cover of Buffy Sainte-Marie's "Universal Soldier" was a chart hit, and his anti-Vietnam War song "The War Drags On" appeared the same year. This was a common trend in the popular music of the 1960s and 1970s, as the romantic lyrics of 1950s pop songs gave way to words of protest. As their fame and prestige increased in the late 1960s, The Beatles—and John Lennon in particular—added their voices to the anti-war movement. In the documentary The US Versus John Lennon, Tariq Ali attributes the Beatles' activism to the fact that, in his opinion, "The whole culture had been radicalized: [Lennon] was engaged with the world, and the world was changing him." "Revolution" (1968) commemorated the worldwide student uprisings. In 1969, when Lennon and Yoko Ono were married, they staged a week-long "bed-in for peace" in the Amsterdam Hilton, attracting worldwide media coverage.
At the second "Bed-in" in Montreal, in June 1969, they recorded "Give Peace a Chance" in their hotel room. The song was sung by over half a million demonstrators in Washington, DC, at the second Vietnam Moratorium Day, on October 15, 1969. In 1972 Lennon's most controversial protest song LP was released, Some Time in New York City, the title of whose lead single "Woman Is the Nigger of the World", a phrase coined by Ono in the late 1960s to protest sexism, set off a storm of controversy, and in consequence received little airplay and much banning. The Lennons went to great lengths (including a press conference attended by staff from Jet and Ebony magazines) to explain that they had used the word nigger in a symbolic sense and not as an affront to African Americans. The album also included "Attica State", about the Attica Prison riots of September 9, 1971; "Sunday Bloody Sunday" and "The Luck Of The Irish", about the massacre of demonstrators in Northern Ireland and "Angela", in support of black activist Angela Davis. Lennon also performed at the "Free John Sinclair" benefit concert in Ann Arbor, Michigan, on December 10, 1971. on behalf of the imprisoned antiwar activist and poet who was serving 10 years in state prison for selling two joints of marijuana to an undercover cop. On this occasion Lennon and Ono appeared on stage with among others singers Phil Ochs and Stevie Wonder, plus antiwar activists Jerry Rubin and Bobby Seale of the Black Panthers party. Lennon's song "John Sinclair" (which can be heard on his Some Time in New York City album), calls on the authorities to "Let him be, set him free, let him be like you and me". The benefit was attended by some 20,000 people, and three days later the State of Michigan released Sinclair from prison. The 1970s saw a number of notable songs by British acts that protested against war, including "Peace Train" by Cat Stevens (1971), and "War Pigs" by Black Sabbath (1970). Sabbath also protested environmental destruction, describing people leaving a ruined Earth ("Into the Void" including, "Iron Man"). Renaissance added political repression as a protest theme with "Mother Russia" being based on One Day in the Life of Ivan Denisovich and being joined on the second side of their 1974 album Turn of the Cards by two other protest songs in "Cold Is Being" (about ecological destruction) and "Black Flame" (about the Vietnam War). As the 1970s progressed, the louder, more aggressive Punk movement became the strongest voice of protest, particularly in the UK, featuring anti-war, anti-state, and anti-capitalist themes. The punk culture, in stark contrast with the 1960s' sense of power through union, concerned itself with individual freedom, often incorporating concepts of individualism, free thought and even anarchism. According to Search and Destroy founder V. Vale, "Punk was a total cultural revolt. It was a hardcore confrontation with the black side of history and culture, right-wing imagery, sexual taboos, a delving into it that had never been done before by any generation in such a thorough way." The most significant protest songs of the movement included "God Save the Queen" (1977) by the Sex Pistols, "If the Kids are United" by Sham 69, "Career Opportunities" (1977) (protesting the political and economic situation in England at the time, especially the lack of jobs available to the youth), and "White Riot" (1977) (about class economics and race issues) by The Clash, and "Right to Work" by Chelsea. See also Punk ideology. 
War remained a prevalent theme of British protest songs in the 1980s, as in Kate Bush's "Army Dreamers" (1980), which deals with the trauma of a mother whose son dies while away at war. Indeed, the early 1980s were a remarkable period for anti-nuclear and anti-war UK political pop, much of it inspired directly or indirectly by the punk movement: 1980 saw '22 such Top 75 hits, by 18 different artists. For almost th[at] entire year ... (47 weeks), the UK singles charts contained at least one hit song that spoke of antiwar or antinuclear concerns, and usually more than one.' George McKay further argues that 'it really is quite extraordinary to note that one-third of the year 1984 (17 weeks) had some kind of political pop song at the top of the British charts. Viewed from that lofty perspective, 1984 must be seen as a peak protest music time in Britain, most of it in the context of antiwar and antinuclear sentiment.' As the 1980s progressed, however, it was the British prime minister Margaret Thatcher who came under the greatest criticism from protest singers, mostly for her hard line against the trade unions, and especially for her handling of the UK miners' strike of 1984–1985. The leading voice of protest in Thatcherite Britain was Billy Bragg, whose style of protest song and grass-roots political activism was reminiscent of Woody Guthrie's, though with themes relevant to the contemporary Briton. He summarized his stance in "Between the Wars" (1985), in which he sings: "I'll give my consent to any government that does not deny a man a living wage."

Also in the 1980s, Frankie Goes to Hollywood released the political pop protest song "Two Tribes", a relentless, bass-driven track depicting the futility and starkness of nuclear weapons and the Cold War. The video for the song depicted a wrestling match between then US President Ronald Reagan and then Soviet leader Konstantin Chernenko, staged for the benefit of the group's members and an eagerly belligerent assembly of representatives of the world's nations, and ultimately degenerating into complete global destruction. The video was played several times at the 1984 Democratic National Convention. Because of some violent scenes ("Reagan" biting "Chernenko"'s ear, and so on), the unedited video could not be shown on MTV, and an edited version was substituted. The single quickly hit the number one spot in the United Kingdom. Several mixes of the track feature the actor Patrick Allen, who recreated his narration from the Protect and Survive public information films for certain 12-inch mixes (the original Protect and Survive soundtracks were sampled for the 7-inch mixes).

Irish rebel songs
Irish rebel music is a subgenre of Irish folk music, played on typically Irish instruments (such as the fiddle, tin whistle, uilleann pipes, accordion and bodhrán) and acoustic guitars. The lyrics deal with the fight for Irish independence, the people involved in liberation movements, the persecution and violence of Northern Ireland's Troubles, and the history of Ireland's numerous rebellions. Among the many examples of the genre, some of the most famous are "A Nation Once Again", "Come Out Ye Black and Tans", "Erin go Bragh", "The Fields of Athenry", "The Men Behind the Wire" and the Republic of Ireland's national anthem, "Amhrán na bhFiann" ("The Soldier's Song").
Music of this genre has often courted controversy, and some of the more outwardly anti-British songs have been effectively banned from the airwaves in both England and the Republic of Ireland. Paul McCartney also contributed to the genre with his 1972 single "Give Ireland Back to the Irish", written as a reaction to Bloody Sunday in Northern Ireland on January 30, 1972. The song faced an outright ban in the UK and has never been re-released or included on any Paul McCartney or Wings compilation. The same year, McCartney's former colleague John Lennon released two protest songs about the hardships of war-torn Northern Ireland: "Sunday Bloody Sunday", written shortly after the 1972 massacre of Irish civil rights activists (and which, unlike U2's 1983 song of the same title, directly supports the Irish Republican cause rather than calling for peace), and "The Luck of the Irish", both from his album Some Time in New York City (1972).

The Wolfe Tones have become legendary in Ireland for their contribution to the Irish rebel genre. The band has been recording since 1963 and has attracted worldwide attention through its renditions of traditional Irish songs and originals dealing with the former conflict in Northern Ireland. In 2002 the Wolfe Tones' version of "A Nation Once Again", a nationalist song from the 19th century, was voted the greatest song in the world in a poll conducted by the BBC World Service. U2, an alternative rock/post-punk band from Dublin, broke with the rebel tradition when they wrote "Sunday Bloody Sunday" in 1983. The song refers to two separate massacres of civilians by British forces in Irish history – Bloody Sunday (1920) and Bloody Sunday (1972) – but, unlike other songs dealing with those events, its lyrics call for peace rather than revenge. The Cranberries' hit "Zombie", written during their English tour in 1993, is in memory of two boys, Jonathan Ball and Tim Parry, who were killed in an IRA bombing in Warrington.

Estonia
Many of the songs performed at the Estonian Laulupidu are protest songs, particularly those written during the Singing Revolution. Because of the official position of the Soviet Union at the time, the lyrics are frequently allusive rather than explicitly anti-Soviet, as in Tõnis Mägi's song "Koit". In contrast, "Eestlane olen ja eestlaseks jään", sung by Ivo Linna and the group In Spe, is explicitly in favour of an Estonian identity.

Finland
Finland has a tradition of socialist and communist protest songs going back to the Finnish Civil War, most of which were imported and translated from Soviet Russia. In the 21st century the socialist protest song tradition is carried on to some extent by left-wing rap artists and, to a lesser degree and in a more traditional Taistoist form, by the KOM-theatre choir.

France
"The Internationale" ("L'Internationale" in French) is a socialist, anarchist, communist and social-democratic anthem ("The International Anarchist Congress, Amsterdam, 1907" (PDF), www.fdca.it, retrieved June 4, 2019) that became the anthem of international socialism. Its original French refrain is C'est la lutte finale / Groupons-nous et demain / L'Internationale / Sera le genre humain (freely translated: "This is the final struggle / Let us join together and tomorrow / The Internationale / Will be the human race"). It has been translated into most of the world's languages, and traditionally it is sung with the hand raised in a clenched fist salute.
"The Internationale" is sung not only by communists but also (in many countries) by socialists or social democrats. The Chinese version was also a rallying song of the students and workers at the Tiananmen Square protests of 1989. There is not so much a protest song trend in France, but rather of a permanent background of criticism and contestation, and individuals who personify it. World War II and its horrors forced French singers to think more critically about war in general, forcing them to question their governments and the powers who ruled their society. Jazz
currently residing in Atlanta. He was a member of the hip hop group Public Enemy, serving as the group's Minister of Information. During his time with Public Enemy he was an adherent of the ideas espoused by Nation of Islam leader Louis Farrakhan, which informed both Griffin's and Public Enemy's ideological views. Having served in the U.S. Army and cultivated an interest in martial arts, he trained the S1W security team, which toured with Public Enemy dressed in military uniforms, performing choreographed military step drills on stage.

Controversy and departure from Public Enemy
Before the release of It Takes a Nation of Millions to Hold Us Back, Professor Griff, in his role as Minister of Information, gave interviews to UK magazines on behalf of Public Enemy in which he made homophobic and anti-Semitic remarks. In a 1988 issue of Melody Maker he stated, "There's no place for gays. When God destroyed Sodom and Gomorrah, it was for that sort of behaviour" and "If the Palestinians took up arms, went into Israel and killed all the Jews, it'd be all right." However, there was little controversy until May 22, 1989, when Griffin was interviewed by The Washington Times. At the time, Public Enemy enjoyed unprecedented
mainstream attention with the single "Fight the Power" from the soundtrack of Spike Lee's film Do the Right Thing. During the interview with David Mills, Griffin made numerous statements such as "Jews are responsible for the majority of the wickedness in the world". When the interview was published, a media firestorm emerged, and the band found itself under intense scrutiny. In a series of press conferences, Griffin was either fired, quit, or never left. Def Jam co-founder Rick Rubin had already left the label by then; taking his place alongside Russell Simmons was Lyor Cohen, the son of Israeli immigrants who had run Rush Artist Management since 1985. Before the dust settled, Cohen claims to have arranged for
this response to the blockers problem on the basis that, since the non-physical properties of w1 are not instantiated at a world in which there is a blocker, they are not positive properties in Chalmers' (1996) sense, and so statement 3 will count w1 as a world at which physicalism is true after all. A further problem for supervenience-based formulations of physicalism is the so-called "necessary beings problem". A necessary being in this context is a non-physical being that exists in all possible worlds (for example, what theists refer to as God). A necessary being is compatible with all the definitions provided, because it supervenes on everything; yet it is usually taken to contradict the notion that everything is physical. So any supervenience-based formulation of physicalism will at best state a necessary but not sufficient condition for the truth of physicalism. Additional objections have been raised to the definitions of supervenience physicalism given above: one could imagine an alternative world that differs only by the presence of a single ammonium molecule (or physical property), and yet, according to statement 1, such a world might be completely different in terms of its distribution of mental properties. Furthermore, there is disagreement about the modal status of physicalism: whether it is a necessary truth, or is only true in a world that conforms to certain conditions (i.e. those of physicalism).

Realisation physicalism
Closely related to supervenience physicalism is realisation physicalism, the thesis that every instantiated property is either physical or realised by a physical property.

Token physicalism
Token physicalism is the proposition that "for every actual particular (object, event or process) x, there is some physical particular y such that x = y". It is intended to capture the idea of "physical mechanisms". Token physicalism is compatible with property dualism, in which all substances are "physical" but physical objects may have mental as well as physical properties. Token physicalism is not, however, equivalent to supervenience physicalism. First, token physicalism does not imply supervenience physicalism, because the former does not rule out the possibility of non-supervenient properties (provided that they are associated only with physical particulars). Second, supervenience physicalism does not imply token physicalism, because the former allows supervenient objects (such as a "nation" or a "soul") that are not identical to any physical object.

Reductionism and emergentism
Reductionism
There are multiple versions of reductionism. In the context of physicalism, the reductions referred to are of a "linguistic" nature, allowing discussions of, say, mental phenomena to be translated into discussions of physics. In one formulation, every concept is analysed in terms of a physical concept. One counter-argument to this supposes that there may be an additional class of expressions which is non-physical but which increases the expressive power of a theory. Another version of reductionism is based on the requirement that one theory (mental or physical) be logically derivable from a second. The combination of reductionism and physicalism is usually called reductive physicalism in the philosophy of mind; the opposite view is non-reductive physicalism. Reductive physicalism is the view that mental states are both nothing over and above physical states and reducible to physical states.
One version of reductive physicalism is type physicalism or mind–body identity theory. Type physicalism asserts that "for every actually instantiated property F, there is some physical property G such that F = G". Unlike token physicalism, type physicalism entails supervenience physicalism. Reductive versions of physicalism have become increasingly unpopular on the ground that they do not account for mental lives. On this view the brain, as a physical substance, has only physical attributes, such as a particular volume, mass, density, location and shape; it does not have any mental attributes. The brain is not overjoyed or unhappy, and the brain is not in pain. When a person's back aches and he or she is in pain, it is not the brain that is suffering, even though the brain is associated with the neural circuitry that provides the experience of pain. Reductive physicalism, the objection runs, therefore cannot explain mental lives. In the case of fear, for example, there is doubtless neural activity that corresponds to the experience of fear, but the brain itself is not fearful, and fear cannot be reduced to a physical brain state even though it corresponds to neural activity in the brain. For this reason, reductive physicalism is argued to be indefensible, since it cannot be reconciled with mental experience. Another common argument against type physicalism is multiple realizability: the possibility that a psychological process (say) could be instantiated by many different neurological processes, or even by non-neurological processes in the case of machine or alien intelligence. In that case, the neurological terms translating a psychological term would have to be disjunctions over the possible instantiations, and it is argued that no physical law can use such disjunctions as terms. Type physicalism was the original target of the multiple realizability argument, and it is not clear that token physicalism is susceptible to objections from multiple realizability.

Emergentism
There are two versions of emergentism, a strong version and a weak version. Supervenience physicalism has been seen as a strong version of emergentism, in which the subject's psychological experience is considered genuinely novel. Non-reductive physicalism, on the other hand, is a weak version of emergentism, because it does not require that the subject's psychological experience be novel. The strong version of emergentism is incompatible with physicalism: since there are novel mental states, mental states are not nothing over and above physical states. The weak version of emergentism, however, is compatible with physicalism. Emergentism is thus a very broad view: some forms of emergentism appear either incompatible with physicalism or equivalent to it (e.g. a posteriori physicalism), while others appear to merge dualism and supervenience. Emergentism compatible with dualism claims that mental states and physical states are metaphysically distinct while maintaining the supervenience of mental states on physical states; this proposition, however, contradicts supervenience physicalism, which entails a denial of dualism.

A priori versus a posteriori physicalism
Physicalists hold that physicalism is true.
A natural question for physicalists, then, is whether the truth of physicalism is deducible a priori from the nature of the physical world (i.e., the inference is justified independently of experience, even though the nature of the physical world can itself only be determined through experience) or can only be deduced a posteriori (i.e., the justification of the inference itself is dependent upon experience). So-called "a priori physicalists" hold that from knowledge of the conjunction of all physical truths, a totality or that's-all truth (to rule out non-physical epiphenomena, and enforce the closure of the physical world), and some primitive indexical truths such as "I am A" and "now is B", the truth of physicalism is knowable a priori. Let "P" stand for the conjunction of all physical truths and laws, "T" for a that's-all truth, "I" for the
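The definition above breaks off, but on the standard presentation of this debate (for example in Chalmers and Jackson's work on a priori entailment), "I" abbreviates the conjunction of such primitive indexical truths. On that reading, the a priori physicalist's claim can be sketched schematically as follows; this is a reconstruction for clarity, not a quotation of the text above:

\[
\text{A priori physicalism:}\qquad \text{for every truth } S,\ \text{the conditional } (P \wedge T \wedge I) \rightarrow S \ \text{is knowable a priori.}
\]

A posteriori physicalists typically accept that such conditionals are necessarily true while denying that they are knowable a priori.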
biological properties supervene on physical properties, it follows that two hypothetical worlds cannot be identical in their physical properties but differ in their mental, social or biological properties. Two common approaches to defining "physicalism" are the theory-based and object-based approaches. The theory-based conception of physicalism proposes that "a property is physical if and only if it either is the sort of property that physical theory tells us about or else is a property which metaphysically (or logically) supervenes on the sort of property that physical theory tells us about". Likewise, the object-based conception claims that "a property is physical if and only if: it either is the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents or else is a property which metaphysically (or logically) supervenes on the sort of property required by a complete account of the intrinsic nature of paradigmatic physical objects and their constituents". Physicalists have traditionally opted for a "theory-based" characterization of the physical, either in terms of current physics or in terms of a future (ideal) physics. These two theory-based conceptions of the physical represent the two horns of Hempel's dilemma (named after the late philosopher of science and logical empiricist Carl Gustav Hempel), an argument against theory-based understandings of the physical. Very roughly, Hempel's dilemma is that if we define the physical by reference to current physics, then physicalism is very likely to be false, as it is very likely (by pessimistic meta-induction) that much of current physics is false. But if we instead define the physical in terms of a future (ideal) or completed physics, then physicalism is hopelessly vague or indeterminate. While the force of Hempel's dilemma against theory-based conceptions of the physical remains contested, alternative "non-theory-based" conceptions of the physical have also been proposed. Frank Jackson (1998), for example, has argued in favour of the aforementioned "object-based" conception of the physical. An objection to this proposal, which Jackson himself noted in 1998, is that if it turns out that panpsychism or panprotopsychism is true, then such a non-materialist understanding of the physical gives the counterintuitive result that physicalism is nevertheless also true, since such properties will figure in a complete account of paradigmatic examples of the physical. David Papineau and Barbara Montero have advanced, and subsequently defended, a "via negativa" characterization of the physical. The gist of the via negativa strategy is to understand the physical in terms of what it is not: the mental. In other words, the via negativa strategy understands the physical as "the non-mental". An objection to the via negativa conception of the physical is that (like the object-based conception) it lacks the resources to distinguish neutral monism (or panprotopsychism) from physicalism. Further, Restrepo (2012) argues that this conception of the physical makes core non-physical entities of non-physicalist metaphysics, such as God, Cartesian souls and abstract numbers, physical, and thus either false or trivially true: "God is non-mentally-and-non-biologically identifiable as the thing that created the universe. Supposing emergentism is true, non-physical emergent properties are non-mentally-and-non-biologically identifiable as non-linear effects of certain arrangements of matter.
The immaterial Cartesian soul is non-mentally-and-non-biologically identifiable as one of the things that interact causally with certain particles (coincident with the pineal gland). The Platonic number eight is non-mentally-and-non-biologically identifiable as the number of planets orbiting the Sun". Supervenience-based definitions of physicalism Adopting a supervenience-based account of the physical, the definition of physicalism as "all properties are physical" can be unraveled to: 1) Physicalism is true at a possible world w if and only if any world that is a physical duplicate of w is also a duplicate of w simpliciter. Applied to the actual world (our world), statement 1 above is the claim that physicalism is true at the actual world if and only if at every possible world in which the physical properties and laws of the actual world are instantiated, the non-physical (in the ordinary sense of the word) properties of the actual world are instantiated as well. To borrow a metaphor from Saul Kripke (1972), the truth of physicalism at the actual world entails that once God has instantiated or "fixed" the physical properties and laws of our world, then God's work is done; the rest comes "automatically". Unfortunately, statement 1 fails to capture even a necessary condition for physicalism to be true at a world w. To see this, imagine a world in which there are only physical properties—if physicalism is true at any world it is true at this one. But one can conceive physical duplicates of such a world that are not also duplicates simpliciter of it: worlds that have the same physical properties as our imagined one, but with some additional property or properties. A world might contain "epiphenomenal ectoplasm", some additional pure experience that does not interact with the physical components of the world and is not necessitated by them (does not supervene on them). To handle the epiphenomenal ectoplasm problem, statement 1 can be modified to include a "that's-all" or "totality" clause or be restricted to "positive" properties. Adopting the former suggestion here, we can reformulate statement 1 as follows: 2) Physicalism is true at a possible world w if and only if any world that is a minimal physical duplicate of w is a duplicate of w simpliciter. Applied in the same way, statement 2 is the claim that physicalism is true at a possible world w if and only if any world that is a physical duplicate of w (without any further changes) is a duplicate of w without qualification. This allows a world in which there are only physical properties to be counted as one at which physicalism is true, since worlds in which there is some extra stuff are not "minimal" physical duplicates of such a world, nor are they minimal physical duplicates of worlds that contain some non-physical properties that are metaphysically necessitated by the physical. But while statement 2 overcomes the problem of worlds at which there is some extra stuff (sometimes referred to as the "epiphenomenal ectoplasm problem") it faces a different challenge: the so-called "blockers problem". Imagine a world where the relation between the physical and non-physical properties at this world (call the world w1) is slightly weaker than metaphysical necessitation, such that a certain kind of non-physical intervener—"a blocker"—could, were it to exist at w1, prevent the non-physical properties in w1 from being instantiated by the instantiation of
in adding colour. The resulting single image that subjects report as their experience is called a 'percept'. Studies involving rapidly changing scenes show the percept derives from numerous processes that involve time delays. Recent fMRI studies show that dreams, imaginings and perceptions of things such as faces are accompanied by activity in many of the same areas of the brain as are involved with physical sight. Imagery that originates from the senses and internally generated imagery may have a shared ontology at higher levels of cortical processing. Sound is analyzed in terms of pressure waves sensed by the cochlea in the ear. Data from the eyes and ears is combined to form a 'bound' percept. The problem of how this is produced is known as the binding problem. Perception is analyzed as a cognitive process in which information processing is used to transfer information into the mind where it is related to other information. Some psychologists propose that this processing gives rise to particular mental states (cognitivism) whilst others envisage a direct path back into the external world in the form of action (radical behaviourism). Behaviourists such as John B. Watson and B.F. Skinner have proposed that perception acts largely as a process between a stimulus and a response but have noted that Gilbert Ryle's "ghost in the machine of the brain" still seems to exist. "The objection to inner states is not that they do not exist, but that they are not relevant in a functional analysis". This view, in which experience is thought to be an incidental by-product of information processing, is known as epiphenomenalism. Contrary to the behaviourist approach to understanding the elements of cognitive processes, gestalt psychology sought to understand their organization as a whole, studying perception as a process of figure and ground. Philosophical accounts of perception Important philosophical problems derive from the epistemology of perception—how we can gain knowledge via perception—such as the question of the nature of qualia. Within the biological study of perception, naive realism is unusable. However, outside biology modified forms of naive realism are defended. Thomas Reid, the eighteenth-century founder of the Scottish School of Common Sense, formulated the idea that sensation was composed of a set of data transfers but also declared that there is still a direct connection between perception and the world. This idea, called direct realism, has again become popular in recent years with the rise of postmodernism. The succession of data transfers involved in perception suggests that sense data are somehow available to a perceiving subject that is the substrate of the percept. Indirect realism, the view held by John Locke and Nicolas Malebranche, proposes that we can only be aware of mental representations of objects. However, this may imply an infinite regress (a perceiver within a perceiver within a perceiver...), though a finite regress is perfectly possible. It also assumes that perception is entirely due to data transfer and information processing, an argument that can be avoided by proposing that the percept does not depend wholly upon the transfer and rearrangement of data. This still involves basic ontological issues of the sort raised by Leibniz, Locke, Hume, Whitehead and others, which remain outstanding particularly in relation to the binding problem, the question of how different perceptions (e.g. 
color and contour in vision) are "bound" to the same object when they are processed by separate areas of the brain. Indirect realism (representational views) provides an account of issues such as perceptual contents, qualia, dreams, imaginings, hallucinations, illusions, the resolution of binocular rivalry, the resolution of multistable perception, the modelling of motion that allows us to watch TV, the sensations that result from direct brain stimulation, the update of the mental image by saccades of the eyes and the referral of events backwards in time. Direct realists must either argue that these experiences do not occur or else refuse to define them as perceptions. Idealism holds that reality is limited to mental qualities while skepticism challenges our ability to know anything outside our minds. One of the most influential proponents of idealism was George Berkeley who maintained that everything was mind or dependent upon mind. Berkeley's idealism has two main strands, phenomenalism in which physical events are viewed as a special kind of mental event and subjective idealism. David Hume is probably the most influential proponent of skepticism. A fourth theory of perception in opposition to naive realism, enactivism, attempts to find a middle path between direct realist and indirect realist theories, positing that cognition is a process of dynamic interplay between an organism's sensory-motor capabilities and the environment it brings forth. Instead of seeing perception as a passive process determined entirely by the features of an independently existing world, enactivism suggests that organism and environment are structurally coupled and co-determining. The theory was first formalized by Francisco Varela, Evan Thompson, and Eleanor Rosch in "The Embodied Mind". Spatial
representation An aspect of perception that is common to both realists and anti-realists is the idea of mental or perceptual space. David Hume concluded that things appear extended because they have attributes of colour and solidity. A popular modern philosophical view is that the brain cannot contain images so our sense of space must be due to the actual space occupied by physical things. However, as René Descartes noticed, perceptual space has a projective geometry, things within it appear as if they are viewed from a point. 
The phenomenon of perspective was closely studied by artists and architects in the Renaissance, who relied mainly on the 11th century polymath, Alhazen (Ibn al-Haytham), who affirmed the visibility of perceptual space in geometric structuring projections. Mathematicians now know of many types of projective geometry such as complex Minkowski space that might describe the layout of things in perception (see Peters (2000)) and it has also emerged that parts of the brain contain patterns of electrical activity that correspond closely to the layout of the retinal image (this is known as retinotopy). How or whether these become conscious experience is still unknown (see McGinn (1995)). Beyond spatial representation Traditionally, the philosophical investigation of perception has focused on the sense of vision as the paradigm of sensory perception. However, studies on
Cicero is Roman, it is unclear what semantic content the proper name Cicero provides to the proposition. One may intuitively assume that the name refers to a person who may or may not be Roman, and that the truth value depends on whether or not that is the case. But from the point of view of a theory of meaning the question is how the word Cicero establishes its referent. Another problem, known as "Frege's puzzle", asks why it can be the case that two names can refer to the same referent, yet not necessarily be considered entirely synonymous. Frege's example is that the proposition "Hesperus is Hesperus" (Hesperus being the Greek name of the evening star) is tautological and vacuous while the proposition "Hesperus is Phosphorus" (Phosphorus being the Greek name of the morning star) conveys information. This puzzle suggests that there is something more to the meaning of the proper name than simply pointing out its referent. Theories Many theories have been proposed about proper names, each attempting to solve the problems of reference and identity inherent in the concept. Millian theory John Stuart Mill distinguished between connotative and denotative meaning, and argued that proper names included no other semantic content to a proposition than identifying the referent of the name and were hence purely denotative. Some contemporary proponents of a Millian theory of proper names argue that the process through which something becomes a proper name is exactly the gradual loss of connotation for pure denotation such as the process that turned the descriptive phrase "long island" into the proper name Long Island. Sense-based theory of names Gottlob Frege argued that one had to distinguish between the sense (Sinn) and the reference of the name, and that different names for the same entity might identify the same referent without being formally synonymous. For example, although the morning star and the evening star are the same astronomical object, the proposition "the morning star is the evening star" is not a tautology, but provides actual information to someone who did not know this. Hence, to Frege, the two names for the object must have a different sense. Philosophers such as John McDowell have elaborated on Frege's theory of proper names. Descriptive theory The descriptive theory of proper names is the view that the meaning of a given use of a proper name is a set of properties that can be expressed as a description that picks out an object that satisfies the description. Bertrand Russell espoused such a view arguing that the name refers to a description, and that description, like a definition, picks out the bearer of the name. The name then functions as an abbreviation or a truncated form of the description. The distinction between the embedded description and the bearer itself is similar to that between the extension and the intension (Frege's terms) of a general term, or between connotation and denotation (Mill's terms). John Searle elaborated Russell's theory, suggesting that the proper name refers to a cluster of propositions that in combination pick out a unique referent. This was meant to deal with the objection by some critics of Russell's theory that a descriptive theory of meaning would make the referent of a name dependent on the knowledge that the person saying the name has about the referent. 
In 1973, Tyler Burge proposed a metalinguistic descriptivist theory of proper names which holds that names have the meaning that corresponds to the description of the individual entities to whom the name is applied. This, however, opens up the possibility that names are not proper, when, for example, more than one person shares the same name. This leads Burge to argue that plural usages of names, such as "all the Alfreds I know have red hair", support this view. Causal theory of names The causal-historical theory originated by Saul Kripke in Naming and Necessity, building on work by, among others, Keith Donnellan, combines the referential view with the idea that a name's referent is fixed by a baptismal act, whereupon the name becomes a rigid designator of the referent. Kripke did not emphasize causality, but rather the historical relation between the
called the query. Logically, the Prolog engine tries to find a resolution refutation of the negated query. The resolution method used by Prolog is called SLD resolution. If the negated query can be refuted, it follows that the query, with the appropriate variable bindings in place, is a logical consequence of the program. In that case, all generated variable bindings are reported to the user, and the query is said to have succeeded. Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative. If any goal fails in the course of executing the program, all variable bindings that were made since the most recent choice-point was created are undone, and execution continues with the next alternative of that choice-point. This execution strategy is called chronological backtracking. For example:

mother_child(trude, sally).

father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).

sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).

parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).

This results in the following query being evaluated as true:

?- sibling(sally, erica).
Yes

This is obtained as follows: Initially, the only matching clause-head for the query sibling(sally, erica) is the first one, so proving the query is equivalent to proving the body of that clause with the appropriate variable bindings in place, i.e., the conjunction (parent_child(Z,sally), parent_child(Z,erica)). The next goal to be proved is the leftmost one of this conjunction, i.e., parent_child(Z, sally). Two clause heads match this goal. The system creates a choice-point and tries the first alternative, whose body is father_child(Z, sally). This goal can be proved using the fact father_child(tom, sally), so the binding Z = tom is generated, and the next goal to be proved is the second part of the above conjunction: parent_child(tom, erica). Again, this can be proved by the corresponding fact. Since all goals could be proved, the query succeeds. Since the query contained no variables, no bindings are reported to the user. A query with variables, like:

?- father_child(Father, Child).

enumerates all valid answers on backtracking. Notice that with the code as stated above, the query ?- sibling(sally, sally). also succeeds. One would insert additional goals to describe the relevant restrictions, if desired. Loops and recursion Iterative algorithms can be implemented by means of recursive predicates; a small sketch is given after the discussion of negation below. Negation The built-in Prolog predicate \+/1 provides negation as failure, which allows for non-monotonic reasoning. The goal \+ illegal(X) in the rule

legal(X) :- \+ illegal(X).

is evaluated as follows: Prolog attempts to prove illegal(X). If a proof for that goal can be found, the original goal (i.e., \+ illegal(X)) fails. If no proof can be found, the original goal succeeds. Therefore, the \+/1 prefix operator is called the "not provable" operator, since the query ?- \+ Goal. succeeds if Goal is not provable. This kind of negation is sound if its argument is "ground" (i.e. contains no variables). Soundness is lost if the argument contains variables and the proof procedure is complete. In particular, the query ?- legal(X). now cannot be used to enumerate all things that are legal. 
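To illustrate the "Loops and recursion" point above, a list-length predicate can be written recursively. This is a minimal sketch; the name list_length is chosen here purely for illustration (many systems already provide a built-in length/2):

% list_length(+List, -Length): Length is the number of elements in List.
list_length([], 0).
list_length([_|Tail], Length) :-
    list_length(Tail, TailLength),
    Length is TailLength + 1.

A query such as ?- list_length([a,b,c], N). then succeeds with N = 3.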
Programming in Prolog In Prolog, loading code is referred to as consulting. Prolog can be used interactively by entering queries at the Prolog prompt ?-. If there is no solution, Prolog writes no. If a solution exists then it is printed. If there are multiple solutions to the query, then these can be requested by entering a semi-colon ;. There are guidelines on good programming practice to improve code efficiency, readability and maintainability. Here follow some example programs written in Prolog. Hello World An example of a query:

?- write('Hello World!'), nl.
Hello World!
true.

?-

Compiler optimization Any computation can be expressed declaratively as a sequence of state transitions. As an example, an optimizing compiler with three optimization passes could be implemented as a relation between an initial program and its optimized form:

program_optimized(Prog0, Prog) :-
    optimization_pass_1(Prog0, Prog1),
    optimization_pass_2(Prog1, Prog2),
    optimization_pass_3(Prog2, Prog).

or equivalently using DCG notation:

program_optimized --> optimization_pass_1, optimization_pass_2, optimization_pass_3.

Quicksort The quicksort sorting algorithm, relating a list to its sorted version:

partition([], _, [], []).
partition([X|Xs], Pivot, Smalls, Bigs) :-
    (   X @< Pivot ->
        Smalls = [X|Rest],
        partition(Xs, Pivot, Rest, Bigs)
    ;   Bigs = [X|Rest],
        partition(Xs, Pivot, Smalls, Rest)
    ).

quicksort([]) --> [].
quicksort([X|Xs]) -->
    { partition(Xs, X, Smaller, Bigger) },
    quicksort(Smaller), [X], quicksort(Bigger).

Design patterns of Prolog A design pattern is a general reusable solution to a commonly occurring problem in software design. Some design patterns in Prolog are skeletons, techniques, cliches, program schemata, logic description schemata, and higher order programming. Higher-order programming A higher-order predicate is a predicate that takes one or more other predicates as arguments. Although support for higher-order programming takes Prolog outside the domain of first-order logic, which does not allow quantification over predicates, ISO Prolog now has some built-in higher-order predicates such as call/1, call/2, call/3, findall/3, setof/3, and bagof/3. Furthermore, since arbitrary Prolog goals can be constructed and evaluated at run-time, it is easy to write higher-order predicates like maplist/2, which applies an arbitrary predicate to each member of a given list, and sublist/3, which filters elements that satisfy a given predicate, also allowing for currying. To convert solutions from temporal representation (answer substitutions on backtracking) to spatial representation (terms), Prolog has various all-solutions predicates that collect all answer substitutions of a given query in a list. This can be used for list comprehension. For example, perfect numbers equal the sum of their proper divisors:

perfect(N) :-
    between(1, inf, N), U is N // 2,
    findall(D, (between(1,U,D), N mod D =:= 0), Ds),
    sumlist(Ds, N).

This can be used to enumerate perfect numbers, and also to check whether a number is perfect. As another example, the predicate maplist applies a predicate P to all corresponding positions in a pair of lists:

maplist(_, [], []).
maplist(P, [X|Xs], [Y|Ys]) :-
    call(P, X, Y),
    maplist(P, Xs, Ys).

When P is a predicate that for all X, P(X,Y) unifies Y with a single unique value, maplist(P, Xs, Ys) is equivalent to applying the map function in functional programming as Ys = map(Function, Xs). Higher-order programming style in Prolog was pioneered in HiLog and λProlog. 
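As a small usage sketch of the maplist/3 predicate shown above, one could double every element of a list; the helper predicate double/2 is a name invented for this example:

% double(+X, -Y): Y is twice X.
double(X, Y) :- Y is 2 * X.

The query ?- maplist(double, [1,2,3], Ys). then yields Ys = [2, 4, 6].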
Modules For programming in the large, Prolog provides a module system. The module system is standardised by ISO. However, not all Prolog compilers support modules, and there are compatibility problems between the module systems of the major Prolog compilers. Consequently, modules written on one Prolog compiler will not necessarily work on others. Parsing There is a special notation called definite clause grammars (DCGs). A rule defined via -->/2 instead of :-/2 is expanded by the preprocessor (expand_term/2, a facility analogous to macros in other languages) according to a few straightforward rewriting rules, resulting in ordinary Prolog clauses. Most notably, the rewriting equips the predicate with two additional arguments, which can be used to implicitly thread state around, analogous to monads in other languages. DCGs are often used to write parsers or list generators, as they also provide a convenient interface to difference lists. Meta-interpreters and reflection Prolog is a homoiconic language and provides many facilities for reflection. Its implicit execution strategy makes it possible to write a concise meta-circular evaluator (also called meta-interpreter) for pure Prolog code:

solve(true).
solve((Subgoal1,Subgoal2)) :-
    solve(Subgoal1),
    solve(Subgoal2).
solve(Head) :-
    clause(Head, Body),
    solve(Body).

where true represents an empty conjunction, and clause(Head, Body) unifies with clauses in the database of the form Head :- Body. Since Prolog programs are themselves sequences of Prolog terms (:-/2 is an infix operator) that are easily read and inspected using built-in mechanisms (like read/1), it is possible to write customized interpreters that augment Prolog with domain-specific features. For example, Sterling and Shapiro present a meta-interpreter that performs reasoning with uncertainty, reproduced here with slight modifications:

solve(true, 1) :- !.
solve((Subgoal1,Subgoal2), Certainty) :-
    !,
    solve(Subgoal1, Certainty1),
    solve(Subgoal2, Certainty2),
    Certainty is min(Certainty1, Certainty2).
solve(Goal, 1) :-
    builtin(Goal), !,
    Goal.
solve(Head, Certainty) :-
    clause_cf(Head, Body, Certainty1),
    solve(Body, Certainty2),
    Certainty is Certainty1 * Certainty2.

This interpreter uses a table of built-in Prolog predicates of the form

builtin(A is B).
builtin(read(X)).
% etc.

and clauses represented as clause_cf(Head, Body, Certainty). Given those, it can be called as solve(Goal, Certainty) to execute Goal and obtain a measure of certainty about the result. Turing completeness Pure Prolog is based on a subset of first-order predicate logic, Horn clauses, which is Turing-complete. Turing completeness of Prolog can be shown by using it to simulate a Turing machine:

turing(Tape0, Tape) :-
    perform(q0, [], Ls, Tape0, Rs),
    reverse(Ls, Ls1),
    append(Ls1, Rs, Tape).

perform(qf, Ls, Ls, Rs, Rs) :- !.
perform(Q0, Ls0, Ls, Rs0, Rs) :-
    symbol(Rs0, Sym, RsRest),
    once(rule(Q0, Sym, Q1, NewSym, Action)),
    action(Action, Ls0, Ls1, [NewSym|RsRest], Rs1),
    perform(Q1, Ls1, Ls, Rs1, Rs).

symbol([], b, []).
symbol([Sym|Rs], Sym, Rs).

action(left, Ls0, Ls, Rs0, Rs) :- left(Ls0, Ls, Rs0, Rs).
action(stay, Ls, Ls, Rs, Rs).
action(right, Ls0, [Sym|Ls0], [Sym|Rs], Rs).

left([], [], Rs0, [b|Rs0]).
left([L|Ls], Ls, Rs, [L|Rs]).

A simple example Turing machine is specified by the facts:

rule(q0, 1, q0, 1, right).
rule(q0, b, qf, 1, stay).

This machine performs incrementation by one of a number in unary encoding: It loops over any number of "1" cells and appends an additional "1" at the end. 
Example query and result:

?- turing([1,1,1], Ts).
Ts = [1, 1, 1, 1] ;

This illustrates how any computation can be expressed declaratively as a sequence of state transitions, implemented in Prolog as a relation between successive states of interest. Implementation ISO Prolog The ISO Prolog standard consists of two parts. ISO/IEC 13211-1, published in 1995, aims to standardize the existing practices of the many implementations of the core elements of Prolog. It has clarified aspects of the language that were previously ambiguous and leads to portable programs. There are three corrigenda: Cor.1:2007, Cor.2:2012, and Cor.3:2017. ISO/IEC 13211-2, published in 2000, adds support for modules to the standard. The standard is maintained by the ISO/IEC JTC1/SC22/WG17 working group. ANSI X3J17 is the US Technical Advisory Group for the standard. Compilation For efficiency, Prolog code is typically compiled to abstract machine code, often influenced by the register-based Warren Abstract Machine (WAM) instruction set. Some implementations employ abstract interpretation to derive type and mode information of predicates at compile time, or compile to real machine code for high performance. Devising efficient implementation methods for Prolog code is a field of active research in the logic programming community, and various other execution methods are employed in some implementations. These include clause binarization and stack-based virtual machines. Tail recursion Prolog systems typically implement a well-known optimization method called tail call optimization (TCO) for deterministic predicates exhibiting tail recursion or, more generally, tail calls: a clause's stack frame is discarded before performing a call in a tail position. Therefore, deterministic tail-recursive predicates are executed with constant stack space, like loops in other languages. Term indexing Finding clauses that are unifiable with a term in a query is linear in the number of clauses. Term indexing uses a data structure that enables sub-linear-time lookups. Indexing only affects program performance; it does not affect semantics. Most Prologs only use indexing on the first term, as indexing on all terms is expensive, but techniques based on field-encoded words or superimposed codewords provide fast indexing across the full query and head. Hashing Some Prolog systems, such as WIN-PROLOG and SWI-Prolog, now implement hashing to help handle large datasets more efficiently. This tends to yield very large performance gains when working with large corpora such as WordNet. Tabling Some Prolog systems (B-Prolog, XSB, SWI-Prolog, YAP, and Ciao) implement a memoization method called tabling, which frees the user from manually storing intermediate results. Tabling is a space–time tradeoff; execution time can be reduced by using more memory to store intermediate results: subgoals encountered in a query evaluation are maintained in a table, along with answers to these subgoals. If a subgoal is re-encountered, the evaluation reuses information from the table rather than re-performing resolution against program clauses. Tabling can be extended in various directions. It can support recursive predicates through SLG-resolution or linear tabling. In a multi-threaded Prolog system, tabling results could be kept private to a thread or shared among all threads. And in incremental tabling, tabling might react to changes. 
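As a sketch of how tabling is typically requested (the directive syntax below follows the convention of systems such as XSB and SWI-Prolog; edge/2 and path/2 are example names), consider graph reachability over a cyclic edge relation, which would keep producing derivations forever under plain depth-first resolution but terminates when tabled:

:- table path/2.

% path(?X, ?Y): Y is reachable from X via one or more edges.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

edge(a, b).
edge(b, c).
edge(c, a).   % this cycle causes non-termination without tabling

With tabling enabled, a query such as ?- path(a, X). enumerates the reachable nodes (b, c and a, in some order) and then terminates.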
Implementation in hardware During the Fifth Generation Computer Systems project, there were attempts to implement Prolog in hardware with the aim of achieving faster execution with dedicated architectures. Furthermore, Prolog has a number of properties that may allow speed-up through parallel execution. A more recent approach has been to compile restricted Prolog programs to a field programmable gate array. However, rapid progress in general-purpose hardware has consistently overtaken more specialised architectures. Sega implemented Prolog for use with the Sega AI Computer, released for the Japanese market in 1986. Prolog was used for reading natural language inputs, in the Japanese language, via a touch pad. Limitations Although Prolog is widely used in research and education, Prolog and other logic programming languages have not had a significant impact on the computer industry in general. Most applications are small by industrial standards, with few exceeding 100,000 lines of code. Programming in the large is considered to be complicated because not all Prolog compilers support modules, and there are compatibility problems between the module systems of the major Prolog compilers. Portability of Prolog code across implementations has also been a problem, but developments since 2007 have meant: "the portability within the family of Edinburgh/Quintus derived Prolog implementations is good enough to allow for maintaining portable real-world applications." Software developed in Prolog has been criticised for having a high performance penalty compared to conventional programming languages. In particular, Prolog's non-deterministic evaluation strategy can be problematic when programming deterministic computations, or even when using "don't care non-determinism" (where a single choice is made instead of backtracking over all possibilities). Cuts and other language constructs may have to be used to achieve desirable performance, destroying one of Prolog's main attractions, the ability to run programs "backwards and forwards". Prolog is not purely declarative: because of constructs like the cut operator, a procedural reading of a Prolog program is needed to understand it. The order of clauses in a Prolog program is significant, as the execution strategy of the language depends on it. Other logic programming languages, such as Datalog, are truly declarative but restrict the language. As a result, many practical Prolog programs are written to conform to Prolog's depth-first search order, rather than as purely declarative logic programs. Extensions Various implementations have been developed from Prolog to extend logic programming capabilities in numerous directions. These include types, modes, constraint logic programming (CLP), object-oriented logic programming (OOLP), concurrency, linear logic (LLP), functional and higher-order logic programming capabilities, plus interoperability with knowledge bases: Types Prolog is an untyped language. Attempts to introduce types date back to the 1980s, and as of 2008 there are still attempts to extend Prolog with types. Type information is useful not only for type safety but also for reasoning about Prolog programs. Modes The syntax of Prolog does not specify which arguments of a predicate are inputs and which are outputs. However, this information is significant and it is recommended that it be included in the comments (one common commenting convention is sketched after this section). Modes provide valuable information when reasoning about Prolog programs and can also be used to accelerate execution. 
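One widely used (though non-standard) way of recording modes is in structured comments, with "+" marking an argument intended as input, "-" as output, and "?" as either; the predicate and facts below are invented for this sketch:

%% ancestor_of(+Person, -Ancestor) is nondet.
%  True when Ancestor is a parent of Person, or a parent of an ancestor.
ancestor_of(Person, Ancestor) :-
    parent_of(Person, Ancestor).
ancestor_of(Person, Ancestor) :-
    parent_of(Person, Parent),
    ancestor_of(Parent, Ancestor).

parent_of(sally, tom).
parent_of(tom, mike).

Comments in this style document the intended calling pattern without changing the program's meaning; some documentation tools (for example SWI-Prolog's PlDoc) can parse them.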
Constraints Constraint logic programming extends
Prolog to include concepts from constraint satisfaction. A constraint logic program allows constraints in the body of clauses, such as: A(X,Y) :- X+Y>0. It is suited to large-scale combinatorial optimisation problems and is thus useful for applications in industrial settings, such as automated time-tabling and production scheduling. Most Prolog systems ship with at least one constraint solver for finite domains, and often also with solvers for other domains like rational numbers. Object-orientation Flora-2 is an object-oriented knowledge representation and reasoning system based on F-logic and incorporates HiLog, Transaction logic, and defeasible reasoning. Logtalk is an object-oriented logic programming language that can use most Prolog implementations as a back-end compiler. As a multi-paradigm language, it includes support for both prototypes and classes. Oblog is a small, portable, object-oriented extension to Prolog by Margaret McDougall of EdCAAD, University of Edinburgh. Objlog was a frame-based language combining objects and Prolog II from CNRS, Marseille, France. Prolog++ was developed by Logic Programming Associates and first released in 1989 for MS-DOS PCs. Support for other platforms was added, and a second version was released in 1995. A book about Prolog++ by Chris Moss was published by Addison-Wesley in 1994. Visual Prolog is a multi-paradigm language with interfaces, classes, implementations and object expressions. Graphics Prolog systems that provide a graphics library are SWI-Prolog, Visual Prolog, WIN-PROLOG, and B-Prolog. Concurrency Prolog-MPI is an open-source SWI-Prolog extension for distributed computing over the Message Passing Interface. Also there are various concurrent Prolog programming languages. Web programming Some Prolog implementations, notably Visual Prolog, SWI-Prolog and Ciao, support server-side web programming with support for web protocols, HTML and XML. There are also extensions to support semantic web formats such as RDF and OWL. 
Prolog has also been suggested as a client-side language. In addition Visual Prolog supports JSON-RPC and Websockets. Adobe Flash Cedar is a free and basic Prolog interpreter. From version 4 and above Cedar has a FCA (Flash Cedar App)
of Zimmermann for violations of U.S. export restrictions as a result of the international spread of PGP's use. After the government dropped its case without indictment in early 1996, Zimmermann founded PGP Inc. and released an updated version of PGP and some additional related products. That company was acquired by Network Associates (NAI) in December 1997, and Zimmermann stayed on for three years as a Senior Fellow. NAI decided to drop the product line and in 2002, PGP was acquired from NAI by a new company called PGP Corporation. Zimmermann served as a special advisor and consultant to that firm until Symantec acquired PGP Corporation in 2010. Zimmermann is also a fellow at the Stanford Law School's Center for Internet and Society. He was a principal designer of the cryptographic key agreement protocol (the "association model") for the Wireless USB standard. Silent Circle Along with Mike Janke and Jon Callas, in 2012 he co-founded Silent Circle, a secure hardware and subscription based software security company. Dark Mail Alliance In October 2013, Zimmermann, along with other key employees from Silent Circle, teamed up with Lavabit founder Ladar Levison to create the Dark Mail Alliance. The goal of the organization is to work on a new protocol to replace PGP that will encrypt email metadata, among other things that PGP is not capable of. Okuna Zimmermann is also involved in the social network Okuna, formerly Openbook, which aims to be an ethical and privacy-friendly alternative to existing social networks, especially Facebook. He sees today's established social media platforms as a threat to democracy and privacy, because of their profit-oriented revenue models that "are all about exploiting our personal information" and "[deepen] the political divides in our culture", and Okuna as the solution to these problems. Zimmermann's Law In 2013, an article on "Zimmermann's Law" quoted Phil Zimmermann as saying "The natural flow of technology tends to move in the direction of making surveillance easier", and "the ability of computers to track us doubles every eighteen months", in reference to Moore's law. Awards and other recognition Zimmermann has received numerous technical and humanitarian awards for his pioneering work in cryptography: In 2018, Zimmermann was inducted into Information Systems Security Association (ISSA) hall of fame by the ISSA International Organization on October 16, 2018. In 2012, Zimmermann was inducted into the Internet Hall of Fame by the Internet Society. In 2008, PC World named Zimmermann one of the "Top 50 Tech Visionaries" of the last 50 years. In 2006, eWeek ranked PGP 9th in the 25 Most Influential and Innovative Products introduced since the invention of the PC in 1981. In 2003, Reason named him a "Hero of Freedom" In 2001, Zimmermann was inducted into the CRN Industry Hall of Fame. In 2000, InfoWorld named him one of the "Top 10 Innovators in E-business". In 1999, he received the Louis Brandeis Award from Privacy International. In 1998, he received a Lifetime Achievement Award from Secure Computing Magazine. In 1996, he received the Norbert Wiener Award for Social and Professional Responsibility for promoting the responsible use of technology.
In 1996, he received the Thomas S. Szasz Award for Outstanding Contributions to the Cause of Civil Liberties
political research Eysenck's political views related to his research: Eysenck was an outspoken opponent of what he perceived as the authoritarian abuses of the left and right and accordingly he believed that with this T axis he had found the link between Nazism and communism. According to Eysenck, members of both ideologies were tough-minded. Central to Eysenck's thesis was the claim that tender-minded ideologies were democratic and friendly to human freedoms, while tough-minded ideologies were aggressive and authoritarian, a claim that is open to political criticism. In this context, Eysenck carried out studies on Nazi and communist groups, claiming to find members of both groups to be more "dominant" and more "aggressive" than control groups. Eysenck left Nazi Germany to live in Britain and was not shy in attacking Stalinism, noting the anti-Semitic prejudices of the Russian government, the luxurious lifestyles of the Soviet Union leadership and the Orwellian "doublethink" of East Germany's naming itself the German Democratic Republic despite being "one of the most undemocratic regimes in the world today". While Eysenck was an opponent of Nazism, his relationship with fascist organizations was more complex. Eysenck himself lent theoretical support to the English National Party (which also opposed "Hitlerite" Nazism) and was interviewed in the first issue of their journal The Beacon in relation to his controversial views on relative intelligence between different races. At one point during the interview, Eysenck was asked whether or not he was of Jewish origin before the interviewer proceeded. His political allegiances were called into question by other researchers, notably Steven Rose, who alleged that his scientific research was used for political purposes. Subsequent criticism of Eysenck's research Eysenck's conception of tough-mindedness has been criticized for a number of reasons. Virtually no values were found to load only on the tough/tender dimension. The interpretation of tough-mindedness as a manifestation of "authoritarian" versus tender-minded "democratic" values was incompatible with the Frankfurt School's single-axis model, which conceptualized authoritarianism as being a fundamental manifestation of conservatism and many researchers took issue with the idea of "left-wing authoritarianism". The theory which Eysenck developed to explain individual variation in the observed dimensions, relating tough-mindedness to extroversion and psychoticism, returned ambiguous research results. Eysenck's finding that Nazis and communists were more tough-minded than members of mainstream political movements was criticised on technical grounds by Milton Rokeach. Eysenck's method of analysis involves the finding of an abstract dimension (a factor) that explains the spread of a given set of data (in this case, scores on a political survey). This abstract dimension may or may not correspond to a real material phenomenon and obvious problems arise when it is applied to human psychology. The second factor in such an analysis (such as Eysenck's T-factor) is the second best explanation for the spread of the data, which is by definition drawn at right angles to the first factor. While the first factor, which describes the bulk of the variation in a set of data, is more likely to represent something objectively real, subsequent factors become more and more abstract. 
Thus one would expect to find a factor that roughly corresponds to "left" and "right", as this is the dominant framing for politics in our society, but the basis of Eysenck's "tough/tender-minded" thesis (the second, T-factor) may well represent nothing beyond an abstract mathematical construct. Such a construct would be expected to appear in factor analysis whether or not it corresponded to something real, thus rendering Eysenck's thesis unfalsifiable through factor analysis. Milton Rokeach Dissatisfied with Hans J. Eysenck's work, Milton Rokeach developed his own two-axis model of political values in 1973, basing this on the ideas of freedom and equality, which he described in his book, The Nature of Human Values. Rokeach claimed that the defining difference between the left and right was that the left stressed the importance of equality more than the right. Despite his criticisms of Eysenck's tough–tender axis, Rokeach also postulated a basic similarity between communism and Nazism, claiming that these groups would not value freedom as greatly as more conventional social democrats, democratic socialists and capitalists would and he wrote that "the two value model presented here most resembles Eysenck's hypothesis". To test this model, Rokeach and his colleagues used content analysis on works exemplifying Nazism (written by Adolf Hitler), communism (written by Vladimir Lenin), capitalism (by Barry Goldwater) and socialism (written by various authors). This method has been criticized for its reliance on the experimenter's familiarity with the content under analysis and its dependence on the researcher's particular political outlooks. Multiple raters made frequency counts of sentences containing synonyms for a number of values identified by Rokeach—including freedom and equality—and Rokeach analyzed these results by comparing the relative frequency rankings of all the values for each of the four texts:
Socialists (socialism): freedom ranked 1st, equality ranked 2nd
Hitler (Nazism): freedom ranked 16th, equality ranked 17th
Goldwater (capitalism): freedom ranked 1st, equality ranked 16th
Lenin (communism): freedom ranked 17th, equality ranked 1st
Later studies using samples of American ideologues and American presidential inaugural addresses attempted to apply this model. Later research In further research, Eysenck refined his methodology to include more questions on economic issues. Doing this, he revealed a split in the left–right axis between social policy and economic policy, with a previously undiscovered dimension of socialism-capitalism (S-factor). While factorially distinct from Eysenck's previous R factor, the S-factor did positively correlate with the R-factor, indicating that a basic left–right or right–left tendency underlies both social values and economic values, although S tapped more into items discussing economic inequality and big business, while R relates more to the treatment of criminals and to sexual issues and military issues. Most research and political theory since this time has replicated the factors shown above. Another replication came from Ronald Inglehart's research into national opinions based on the World Values Survey, although Inglehart's research described the values of countries rather than individuals or groups of individuals within nations.
Inglehart's two-factor solution took the form of Ferguson's original religionism and humanitarianism dimensions; Inglehart labelled them "secularism–traditionalism", which covered issues of tradition and religion, like patriotism, abortion, euthanasia and the importance of obeying the law and authority figures, and "survivalism – self expression", which measured issues like everyday conduct and dress, acceptance of diversity (including foreigners) and innovation and attitudes towards people with specific controversial lifestyles such as homosexuality and vegetarianism, as well as willingness to engage in political activism. See for Inglehart's national chart. Though not directly related to Eysenck's research, evidence suggests there may be as many as 6 dimensions of political opinions in the United States and 10 dimensions in the United Kingdom. This conclusion was based on two large datasets and uses a Bayesian approach rather than the traditional factor analysis method. Other double-axis models Greenberg and Jonas: left–right, ideological rigidity In a 2003 Psychological Bulletin paper, Jeff Greenberg and Eva Jonas posit a model comprising the standard left–right axis and an axis representing ideological rigidity. For Greenberg and Jonas, ideological rigidity has "much in common with the related concepts of dogmatism and authoritarianism" and is characterized by "believing in strong leaders and submission, preferring one’s own in-group, ethnocentrism and nationalism, aggression against dissidents, and control with the help of police and military". Greenberg and Jonas posit that high ideological rigidity can be motivated by "particularly strong needs to reduce fear and uncertainty" and is a primary shared characteristic of "people who subscribe to any extreme government or ideology, whether it is right-wing or left-wing". Inglehart: traditionalist–secular and self expressionist–survivalist In its 4 January 2003 issue, The Economist discussed a chart, proposed by Ronald Inglehart and supported by the World Values Survey (associated with the University of Michigan), to plot cultural ideology onto two dimensions. On the y-axis it covered issues of tradition and religion, like patriotism, abortion, euthanasia and the importance of obeying the law and authority figures. At the bottom of the chart is the traditionalist position on issues like these (with loyalty to country and family and respect for life considered important), while at the top is the secular position. The x-axis deals with self-expression, issues like everyday conduct and dress, acceptance of diversity (including foreigners) and innovation, and attitudes towards people with specific controversial lifestyles such as vegetarianism, as well as willingness to engage in political activism. At the right of the chart is the open self-expressionist position, while at the left is its opposite position, which Inglehart calls survivalist. This chart not only has the power to map the values of individuals, but also to compare the values of people in different countries. Placed on this chart, European Union countries in continental Europe come out on the top right, Anglophone countries on the middle right, Latin American countries on the bottom right, African, Middle Eastern and South Asian countries on the bottom left and ex-Communist countries on the top left. Pournelle: liberty–control, irrationalism–rationalism This very distinct two-axis model was created by Jerry Pournelle in 1963 for his doctoral dissertation in political science. 
The Pournelle chart has liberty on one axis, with those on the left seeking freedom from control or protections for social deviance and those on the right emphasizing state authority or protections for norm enforcement (farthest right being state worship, farthest left being the idea of a state as the "ultimate evil"). The other axis
syncretic politics, although the label tends to mischaracterize positions that have a logical location on a two-axis spectrum because they seem randomly brought together on a one-axis left–right spectrum. Political scientists have frequently noted that a single left–right axis is too simplistic and insufficient for describing the existing variation in political beliefs and included other axes. Although the descriptive words at polar opposites may vary, the axes of popular biaxial spectra are usually split between economic issues (on a left–right dimension) and socio-cultural issues (on an authority–liberty dimension). Historical origin of the terms The terms right and left refer to political affiliations originating early in the French Revolutionary era of 1789–1799 and referred originally to the seating arrangements in the various legislative bodies of France. As seen from the Speaker's seat at the front of the Assembly, the aristocracy sat on the right (traditionally the seat of honor) and the commoners sat on the left, hence the terms right-wing politics and left-wing politics. Originally, the defining point on the ideological spectrum was the Ancien Régime ("old order"). "The Right" thus implied support for aristocratic or royal interests and the church, while "The Left" implied support for republicanism, secularism and civil liberties. Because the political franchise at the start of the revolution was relatively narrow, the original "Left" represented mainly the interests of the bourgeoisie, the rising capitalist class (with notable exceptions such as the proto-communist Gracchus Babeuf). Support for laissez-faire commerce and free markets were expressed by politicians sitting on the left because these represented policies favorable to capitalists rather than to the aristocracy, but outside parliamentary politics these views are often characterized as being on the Right. The reason for this apparent contradiction lies in the fact that those "to the left" of the parliamentary left, outside official parliamentary structures (such as the sans-culottes of the French Revolution), typically represent much of the working class, poor peasantry and the unemployed. Their political interests in the French Revolution lay with opposition to the aristocracy and so they found themselves allied with the early capitalists. However, this did not mean that their economic interests lay with the laissez-faire policies of those representing them politically. As capitalist economies developed, the aristocracy became less relevant and were mostly replaced by capitalist representatives. The size of the working class increased as capitalism expanded and began to find expression partly through trade unionist, socialist, anarchist and communist politics rather than being confined to the capitalist policies expressed by the original "left". This evolution has often pulled parliamentary politicians away from laissez-faire economic policies, although this has happened to different degrees in different countries, especially those with a history of issues with more authoritarian-left countries, such as the Soviet Union or China under Mao Zedong. Thus, the word "Left" in American political parlance may refer to "liberalism" and be identified with the Democratic Party, whereas in a country such as France these positions would be regarded as relatively more right-wing, or centrist overall, and "left" is more likely to refer to "socialist" or "social-democratic" positioned rather than "liberal" ones. 
Academic investigation For almost a century, social scientists have considered the problem of how to best describe political variation. Leonard W. Ferguson In 1950, Leonard W. Ferguson analyzed political values using ten scales measuring attitudes toward: birth control, capital punishment, censorship, communism, evolution, law, patriotism, theism, treatment of criminals and war. Submitting the results to factor analysis, he was able to identify three factors, which he named religionism, humanitarianism and nationalism. He defined religionism as belief in God and negative attitudes toward evolution and birth control; humanitarianism as being related to attitudes opposing war, capital punishment and harsh treatment of criminals; and nationalism as describing variation in opinions on censorship, law, patriotism and communism. This system was derived empirically, as rather than devising a political model on purely theoretical grounds and testing it, Ferguson's research was exploratory. As a result of this method, care must be taken in the interpretation of Ferguson's three factors, as factor analysis will output an abstract factor whether an objectively real factor exists or not. Although replication of the nationalism factor was inconsistent, the finding of religionism and humanitarianism had a number of replications by Ferguson and others. Hans Eysenck Shortly afterward, Hans Eysenck began researching political attitudes in the United Kingdom. He believed that there was something essentially similar about the National Socialists (Nazis) on the one hand and the communists on the other, despite their opposite positions on the left–right axis. As Hans Eysenck described in his 1956 book Sense and Nonsense in Psychology, Eysenck compiled a list of political statements found in newspapers and political tracts and asked subjects to rate their agreement or disagreement with each. Submitting this value questionnaire to the same process of factor analysis used by Ferguson, Eysenck drew out two factors, which he named "Radicalism" (R-factor) and "Tender-Mindedness" (T-factor). Such analysis produces a factor whether or not it corresponds to a real-world phenomenon and so caution must be exercised in its interpretation. While Eysenck's R-factor is easily identified as the classical "left–right" dimension, the T-factor (representing a factor drawn at right angles to the R-factor) is less intuitive, as high-scorers favored pacifism, racial equality, religious education and restrictions on abortion, while low-scorers had attitudes more friendly to militarism, harsh punishment, easier divorce laws and companionate marriage. According to social scientist Bojan Todosijevic, radicalism was defined as positively viewing evolution theory, strikes, welfare state, mixed marriages, student protests, law reform, women's liberation, United Nations, nudist camps, pop-music, modern art, immigration, abolishing private property, and rejection of patriotism. Conservatism was defined as positively viewing white superiority, birching, death penalty, anti-Semitism, opposition to nationalization of property, and birth control. Tender-mindedness was defined by moral training, inborn conscience, Bible truth, chastity, self-denial, pacifism, anti-discrimination, being against the death penalty, and harsh treatment of criminals. 
Tough-mindedness was defined by compulsory sterilization, euthanasia, easier divorce laws, racism, anti-Semitism, compulsory military training, wife swapping, casual living, death penalty, and harsh treatment of criminals. Despite the difference in methodology, location and theory, the results attained by Eysenck and Ferguson matched. Simply rotating Eysenck's two factors 45 degrees renders the same factors of religionism and humanitarianism identified by Ferguson in America. Eysenck's dimensions of R and T were found by factor analyses of values in Germany and Sweden, France and Japan. One interesting result Eysenck noted in his 1956 work was that in the United States and the United Kingdom, most of the political variance was subsumed by the left/right axis, while in France the T-axis was larger and in the Middle East the only dimension to be found was the T-axis: "Among mid-Eastern Arabs it has been found that while the tough-minded/tender-minded dimension is still clearly expressed in the relationships observed between different attitudes, there is nothing that corresponds to the radical-conservative continuum".
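To illustrate the 45-degree rotation mentioned above, the following sketch rotates a pair of orthogonal factor axes; the item names and loading values are invented for illustration and are not the published loadings of either researcher.

    import numpy as np

    theta = np.deg2rad(45)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    # Hypothetical loadings of four attitude items on Eysenck-style R and T axes.
    items = ["religious education", "evolution theory", "pacifism", "harsh punishment"]
    loadings_rt = np.array([
        [-0.6,  0.6],   # religious education: conservative, tender-minded
        [ 0.6, -0.6],   # evolution theory: radical, tough-minded
        [ 0.6,  0.6],   # pacifism: radical, tender-minded
        [-0.6, -0.6],   # harsh punishment: conservative, tough-minded
    ])

    # Re-expressing the same items on axes rotated by 45 degrees pulls the
    # religion-flavoured items onto one axis and the humane-treatment items
    # onto the other, roughly analogous to religionism and humanitarianism.
    rotated = loadings_rt @ rotation.T
    for name, row in zip(items, np.round(rotated, 2)):
        print(name, row)

The same loadings are simply described in a different coordinate system, which is why the two sets of factors can be reconciled.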
fetus, the ectoderm, mesoderm, and endoderm, develop. The narrow line of cells begin to form the endoderm and mesoderm. The ectoderm begins to grow rapidly as a result of chemicals being produced by the mesoderm. These three layers give rise to all the various types of tissue in the body. The endoderm later forms the lining of the tongue, digestive tract, lungs, bladder and several glands. The mesoderm forms muscle, bone, and lymph tissue, as well as the interior of the lungs, heart, and reproductive and excretory systems. It also gives rise to the spleen, and produces blood cells. The ectoderm forms the skin, nails, hair, cornea, lining of the internal and external ear, nose, sinuses, mouth, anus, teeth, pituitary gland, mammary glands, eyes, and all parts of the nervous system. Approximately 18 days after fertilization, the embryo has divided to form much of the tissue it will need. It is shaped like a pear, where the head region is larger than the tail. The embryo's nervous system is one of the first organic systems to grow. It begins growing in a concave area known as the neural groove. The blood system continues to grow networks which allow the blood to flow around the embryo. Blood cells are already being produced and are flowing through these developing networks. Secondary blood vessels also begin to develop around the placenta, to supply it with more nutrients. Blood cells begin to form on the sac in the center of the embryo, as well as cells which begin to differentiate into blood vessels. Endocardial cells begin to form the myocardium. At about 24 days past fertilization, there is a primitive S-shaped tubule heart which begins beating. The flow of fluids throughout the embryo begins at this stage. Gestation periods For mammals the gestation period is the time in which a fetus develops, beginning with fertilization and ending at birth. The duration of this period varies between species. For most species, the amount a fetus grows before birth determines the length of the gestation period. Smaller species normally have a shorter gestation period than larger animals. For example, a cat's gestation normally takes 58–65 days while an elephant's takes nearly 2 years (21 months). However, growth does not necessarily determine the length of gestation for all species, especially for those with a breeding season. Species that use a breeding season usually give birth during a specific time of year when food is available. Various other factors can come into play in determining the duration of gestation. For humans, male fetuses normally gestate several days longer than females and multiple pregnancies gestate for a shorter period. Ethnicity in humans is also a factor that may lengthen or shorten gestation. In
(PNH). It has also been noted as a symptom of gratification disorder in children. The word paroxysm means "sudden attack, outburst", and comes from the Greek παροξυσμός (paroxusmos), "irritation, exasperation". Paroxysmal attacks in various disorders have been reported extensively and ephaptic coupling of demyelinated nerves has been presumed as one of the underlying mechanisms of this phenomenon. This is supported by the presence of these attacks in multiple sclerosis and tabes dorsalis, which both involve demyelination of spinal cord neurons. Exercise, tactile stimuli, hot water, anxiety and neck flexion may provoke paroxysmal attacks. Most reported
paroxysmal attacks are painful tonic spasms, dysarthria and ataxia, numbness and hemiparesis. They are typically different from other transient symptoms by their brevity (lasting no more than 2 minutes), frequency (from 1–2 times/day up to a few hundred times/day), stereotyped fashion and excellent response to drugs (usually carbamazepine). Withdrawal of
cut to include at least one or two eyes, or cuttings, a practice used in greenhouses for the production of healthy seed tubers. Plants propagated from tubers are clones of the parent, whereas those propagated from seed produce a range of different varieties. Genetics There are about 5,000 potato varieties worldwide. Three thousand of them are found in the Andes alone, mainly in Peru, Bolivia, Ecuador, Chile, and Colombia. They belong to eight or nine species, depending on the taxonomic school. Apart from the 5,000 cultivated varieties, there are about 200 wild species and subspecies, many of which can be cross-bred with cultivated varieties. Cross-breeding has been done repeatedly to transfer resistances to certain pests and diseases from the gene pool of wild species to the gene pool of cultivated potato species. The major species grown worldwide is Solanum tuberosum (a tetraploid with 48 chromosomes), and modern varieties of this species are the most widely cultivated. There are also four diploid species (with 24 chromosomes): S. stenotomum, S. phureja, S. goniocalyx, and S. ajanhuiri. There are two triploid species (with 36 chromosomes): S. chaucha and S. juzepczukii. There is one pentaploid cultivated species (with 60 chromosomes): S. curtilobum. There are two major subspecies of Solanum tuberosum: andigena, or Andean; and tuberosum, or Chilean. The Andean potato is adapted to the short-day conditions prevalent in the mountainous equatorial and tropical regions where it originated; the Chilean potato, however, native to the Chiloé Archipelago, is adapted to the long-day conditions prevalent in the higher latitude region of southern Chile. The International Potato Center, based in Lima, Peru, holds 4,870 types of potato germplasm, most of which are traditional landrace cultivars. The international Potato Genome Sequencing Consortium announced in 2009 that they had achieved a draft sequence of the potato genome, containing 12 chromosomes and 860 million base pairs, making it a medium-sized plant genome. More than 99 percent of all current varieties of potatoes currently grown are direct descendants of a subspecies that once grew in the lowlands of south-central Chile. Nonetheless, genetic testing of the wide variety of cultivars and wild species affirms that all potato subspecies derive from a single origin in the area of present-day southern Peru and extreme Northwestern Bolivia (from a species in the Solanum brevicaule complex). Most modern potatoes grown in North America arrived through European settlement and not independently from the South American sources, although at least one wild potato species, Solanum fendleri, naturally ranges from Peru into Texas, where it is used in breeding for resistance to a nematode species that attacks cultivated potatoes. A secondary center of genetic variability of the potato is Mexico, where important wild species that have been used extensively in modern breeding are found, such as the hexaploid Solanum demissum, as a source of resistance to the devastating late blight disease. Another relative native to this region, Solanum bulbocastanum, has been used to genetically engineer the potato to resist potato blight. Varieties There are close to 4,000 varieties of potato each of which has specific agricultural or culinary attributes. Around 80 varieties are commercially available in the UK. 
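The chromosome counts listed above are all multiples of the potato's base chromosome number of 12, the same figure quoted for the draft genome; the short snippet below only spells out that arithmetic, with the species and ploidies taken from the paragraph above.

    # Base (monoploid) chromosome number of potato, matching the 12-chromosome
    # draft genome mentioned above.
    BASE = 12

    ploidy = {
        "Solanum tuberosum (tetraploid)": 4,
        "S. phureja (diploid)": 2,
        "S. chaucha (triploid)": 3,
        "S. curtilobum (pentaploid)": 5,
    }

    for species, sets in ploidy.items():
        print(f"{species}: {sets} x {BASE} = {sets * BASE} chromosomes")
    # e.g. Solanum tuberosum: 4 x 12 = 48, as quoted in the text.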
In general, varieties are categorized into a few main groups based on common characteristics, such as russet potatoes (rough brown skin), red potatoes, white potatoes, yellow potatoes (also called Yukon potatoes) and purple potatoes. For culinary purposes, varieties are often differentiated by their waxiness: floury or mealy baking potatoes have more starch (20–22%) than waxy boiling potatoes (16–18%). The distinction may also arise from variation in the comparative ratio of two different potato starch compounds: amylose and amylopectin. Amylose, a long-chain molecule, diffuses from the starch granule when cooked in water, and lends itself to dishes where the potato is mashed. Varieties that contain a slightly higher amylopectin content, which is a highly branched molecule, help the potato retain its shape after being boiled in water. Potatoes that are good for making potato chips or potato crisps are sometimes called "chipping potatoes", which means they meet the basic requirements of similar varietal characteristics, being firm, fairly clean, and fairly well-shaped. Immature potatoes may be sold fresh from the field as "creamer" or "new" potatoes and are particularly valued for their taste. They are typically small in size and tender, with a loose skin, and flesh containing a lower level of starch than other potatoes. In the USA they are generally either a Yukon Gold potato or a red potato, called gold creamers or red creamers respectively. In the UK, the Jersey Royal is a famous type of new potato. They are distinct from "baby", "salad" or "fingerling" potatoes, which are small and tend to have waxy flesh, but are grown to maturity and can be stored for months before being sold. The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions that is updated and maintained by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR)—which is run by the International Plant Genetic Resources Institute (IPGRI). Pigmentation Dozens of potato cultivars have been selectively bred specifically for their skin or, more commonly, flesh color, including gold, red, and blue varieties that contain varying amounts of phytochemicals, including carotenoids for gold/yellow or polyphenols for red or blue cultivars. Carotenoid compounds include provitamin A alpha-carotene and beta-carotene, which are converted to the essential nutrient, vitamin A, during digestion. Anthocyanins mainly responsible for red or blue pigmentation in potato cultivars do not have nutritional significance, but are used for visual variety and consumer appeal. In 2010, potatoes were bioengineered specifically for these pigmentation traits. Genetically engineered potatoes Genetic research has produced several genetically modified varieties. 'New Leaf', owned by Monsanto Company, incorporates genes from Bacillus thuringiensis, which confers resistance to the Colorado potato beetle; 'New Leaf Plus' and 'New Leaf Y', approved by US regulatory agencies during the 1990s, also include resistance to viruses. McDonald's, Burger King, Frito-Lay, and Procter & Gamble announced they would not use genetically modified potatoes, and Monsanto published its intent to discontinue the line in March 2001. Waxy potato varieties produce two main kinds of potato starch, amylose and amylopectin, the latter of which is most industrially useful. 
BASF developed the Amflora potato, which was modified to express antisense RNA to inactivate the gene for granule bound starch synthase, an enzyme which catalyzes the formation of amylose. Amflora potatoes therefore produce starch consisting almost entirely of amylopectin, and are thus more useful for the starch industry. In 2010, the European Commission cleared the way for 'Amflora' to be grown in the European Union for industrial purposes only—not for food. Nevertheless, under EU rules, individual countries have the right to decide whether they will allow this potato to be grown on their territory. Commercial planting of 'Amflora' was expected in the Czech Republic and Germany in the spring of 2010, and Sweden and the Netherlands in subsequent years. Another GM potato variety developed by BASF is 'Fortuna' which was made resistant to late blight by adding two resistance genes, blb1 and blb2, which originate from the Mexican wild potato Solanum bulbocastanum. In October 2011 BASF requested cultivation and marketing approval as a feed and food from the EFSA. In 2012, GMO development in Europe was stopped by BASF. In November 2014, the USDA approved a genetically modified potato developed by J.R. Simplot Company, which contains genetic modifications that prevent bruising and produce less acrylamide when fried than conventional potatoes; the modifications do not cause new proteins to be made, but rather prevent proteins from being made via RNA interference. Genetically modified varieties have met public resistance in the United States and in the European Union. Biosynthesis of starch Sucrose is a product of photosynthesis. Ferreira et al. (2010) found that the genes for starch biosynthesis start to be transcribed at the same time as sucrose synthase activity begins. This transcription - including starch synthase - also shows a diurnal rhythm, correlating with the sucrose supply arriving from the leaves. History The potato was first domesticated in the region of modern-day southern Peru and northwestern Bolivia by pre-Columbian farmers, around Lake Titicaca. It has since spread around the world and become a staple crop in many countries. The earliest archaeologically verified potato tuber remains have been found at the coastal site of Ancon (central Peru), dating to 2500 BC. The most widely cultivated variety, Solanum tuberosum tuberosum, is indigenous to the Chiloé Archipelago, and has been cultivated by the local indigenous people since before the Spanish conquest. According to conservative estimates, the introduction of the potato was responsible for a quarter of the growth in Old World population and urbanization between 1700 and 1900. In the Altiplano, potatoes provided the principal energy source for the Inca civilization, its predecessors, and its Spanish successor. Following the Spanish conquest of the Inca Empire, the Spanish introduced the potato to Europe in the second half of the 16th century, part of the Columbian exchange. The staple was subsequently conveyed by European (possibly including Russian) mariners to territories and ports throughout the world, especially their colonies. The potato was slow to be adopted by European and colonial farmers, but after 1750 it became an important food staple and field crop and played a major role in the European 19th century population boom. However, lack of genetic diversity, due to the very limited number of varieties initially introduced, left the crop vulnerable to disease. 
In 1845, a plant disease known as late blight, caused by the fungus-like oomycete Phytophthora infestans, spread rapidly through the poorer communities of western Ireland as well as parts of the Scottish Highlands, resulting in the crop failures that led to the Great Irish Famine. Thousands of varieties still persist in the Andes however, where over 100 cultivars might be found in a single valley, and a dozen or more might be maintained by a single agricultural household. Production In 2020, world production of potatoes was 359 million tonnes, led by China with 22% of the total (table). Other major producers were India, Russia, Ukraine and the United States. It remains an essential crop in Europe (especially northern and eastern Europe), where per capita production is still the highest in the world, but the most rapid expansion over the past few decades has occurred in southern and eastern Asia. Nutrition According to the United States Department of Agriculture, a typical raw potato is 79% water, 17% carbohydrates (88% is starch), 2% protein, and contains negligible fat (see table). In a portion, raw potato provides of food energy and is a rich source of vitamin B6 and vitamin C (23% and 24% of the Daily Value, respectively), with no other vitamins or minerals in significant amount (see table). The potato is rarely eaten raw because raw potato starch is poorly digested by humans. When a potato is baked, its contents of vitamin B6 and vitamin C decline notably, while there is little significant change in the amount of other nutrients. Potatoes are often broadly classified as having a high glycemic index (GI) and so are often excluded from the diets of individuals trying to follow a low-GI diet. The GI of potatoes can vary considerably depending on the cultivar, growing conditions and storage, preparation methods (by cooking method, whether it is eaten hot or cold, whether it is mashed or cubed or consumed whole), and accompanying foods consumed (especially the addition of various high-fat or high-protein toppings). Consuming reheated or pre-cooked and cooled potatoes may yield a lower GI effect due to the formation of resistant starch. In the UK, potatoes are not considered by the National Health Service (NHS) as counting or contributing towards the recommended daily five portions of fruit and vegetables, the 5-A-Day program. Comparison
although this tendency has been minimized in commercial varieties. After flowering, potato plants produce small green fruits that resemble green cherry tomatoes, each containing about 300 seeds. Like all parts of the plant except the tubers, the fruit contain the toxic alkaloid solanine and are therefore unsuitable for consumption. All new potato varieties are grown from seeds, also called "true potato seed", "TPS" or "botanical seed" to distinguish it from seed tubers. New varieties grown from seed can be propagated vegetatively by planting tubers, pieces of tubers cut to include at least one or two eyes, or cuttings, a practice used in greenhouses for the production of healthy seed tubers. Plants propagated from tubers are clones of the parent, whereas those propagated from seed produce a range of different varieties.
Comparison to other staple foods This table shows the nutrient content of potatoes next to other major staple foods, each one measured in its respective raw state on a dry weight basis to account for their different water contents, even though staple foods are not commonly eaten raw and are usually sprouted or cooked before eating. In sprouted and cooked form, the relative nutritional and anti-nutritional contents of each of these grains (or other foods) may be different from the values in this table.
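As a concrete illustration of the dry-weight normalisation just described, the sketch below re-expresses the USDA figures for raw potato quoted earlier (79% water, 17% carbohydrate, 2% protein) per 100 g of dry matter; the helper function is ours and only illustrates the arithmetic.

    def to_dry_basis(nutrient_pct, water_pct):
        """Re-express a fresh-weight percentage as a percentage of dry matter."""
        return 100.0 * nutrient_pct / (100.0 - water_pct)

    raw_potato = {"water": 79.0, "carbohydrate": 17.0, "protein": 2.0}

    for nutrient in ("carbohydrate", "protein"):
        dry = to_dry_basis(raw_potato[nutrient], raw_potato["water"])
        print(f"{nutrient}: {raw_potato[nutrient]}% fresh -> {dry:.0f}% of dry matter")
    # carbohydrate: 17.0% fresh -> 81% of dry matter
    # protein: 2.0% fresh -> 10% of dry matter

Normalising this way is what allows a watery tuber to be compared line by line with much drier staples such as grains.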
Each nutrient (every row) has the highest number highlighted to show the staple food with the greatest amount in a dry 100 gram portion. Toxicity Potatoes contain toxic compounds known as glycoalkaloids, of which the most prevalent are solanine and chaconine. Solanine is found in other plants in the same family, Solanaceae, which includes such plants as deadly nightshade (Atropa belladonna), henbane (Hyoscyamus niger) and tobacco (Nicotiana spp.), as well as the food plants eggplant and tomato. These compounds, which protect the potato plant from its predators, are generally concentrated in its leaves, flowers, sprouts, and fruits (in contrast to the tubers). In a summary of several studies, the glycoalkaloid content was highest in the flowers and sprouts and lowest in the tuber flesh. (The glycoalkaloid content was, in order from highest to lowest: flowers, sprouts, leaves, tuber skin, roots, berries, peel [skin plus outer cortex of tuber flesh], stems, and tuber flesh). Exposure to light, physical damage, and age increase glycoalkaloid content within the tuber. Cooking at high temperatures—over —partly destroys these compounds. The concentration of glycoalkaloids in wild potatoes is sufficient to produce toxic effects in humans. Glycoalkaloid poisoning may cause headaches, diarrhea, cramps, and, in severe cases, coma and death. However, poisoning from cultivated potato varieties is very rare. Light exposure causes greening from chlorophyll synthesis, giving a visual clue as to which areas of the tuber may have become more toxic. However, this does not provide a definitive guide, as greening and glycoalkaloid accumulation can occur independently of each other. Different potato varieties contain different levels of glycoalkaloids. The Lenape variety was released in 1967 but was withdrawn in 1970 as it contained high levels of glycoalkaloids. Since then, breeders developing new varieties test for this, and sometimes have to discard an otherwise promising cultivar. Breeders try to keep glycoalkaloid levels below 200 mg/kg (200 ppmw). However, when these commercial varieties turn green, they can still approach solanine concentrations of 1000 mg/kg (1000 ppmw). In normal potatoes, analysis has shown solanine levels may be as little as 3.5% of the breeders' maximum, with 7–187 mg/kg being found. While a normal potato tuber has 12–20 mg/kg of glycoalkaloid content, a green potato tuber contains 250–280 mg/kg and its skin has 1500–2200 mg/kg. Growth and cultivation Seed potatoes Potatoes are generally grown from seed potatoes, tubers specifically grown to be free from disease and to provide consistent and healthy plants. To be disease free, the areas where seed potatoes are grown are selected with care. In the US, this restricts production of seed potatoes to only 15 states out of all 50 states where potatoes are grown. These locations are selected for their cold, hard winters that kill pests and summers with long sunshine hours for optimum growth. In the UK, most seed potatoes originate in Scotland, in areas where westerly winds reduce aphid attack and the spread of potato virus pathogens. Phases of growth Potato growth can be divided into five phases. During the first phase, sprouts emerge from the seed potatoes and root growth begins. During the second, photosynthesis begins as the plant develops leaves and branches above-ground and stolons develop from lower leaf axils on the below-ground stem. 
In the third phase, the tips of the stolons swell to form new tubers; the shoots continue to grow, and flowers typically develop soon after. Tuber bulking occurs during the fourth phase, when the plant begins investing the majority of its resources in its newly formed tubers. At this stage, several factors are critical to a good yield: optimal soil moisture and temperature, soil nutrient availability and balance, and resistance to pest attacks. The fifth phase is the maturation of the tubers: the leaves and stems senesce and the tuber skins harden. Challenges New tubers
The Timbers joined MLS in 2011 and have sold out every home match since joining the league, a streak that has now reached 70+ matches. The Timbers season ticket waiting list has reached 10,000+, the longest waiting list in MLS. In 2015, they became the first team in the Northwest to win the MLS Cup. Player Diego Valeri set a new record for the fastest goal in MLS Cup history, 27 seconds into the game. The annual Cambia Portland Classic women's golf tournament in September, now in its 50th year, is the longest-running non-major tournament on the LPGA Tour; it is played in the southern suburb of West Linn. Two rival universities exist within Portland city limits: the University of Portland Pilots and the Portland State University Vikings, both of whom field teams in popular spectator sports including soccer, baseball, and basketball. Portland State also has a football team. Additionally, the University of Oregon Ducks and the Oregon State University Beavers both receive substantial attention and support from many Portland residents, despite their campuses being 110 and 84 miles from the city, respectively. Running is a popular activity in Portland, and every year the city hosts the Portland Marathon as well as parts of the Hood to Coast Relay, the world's largest long-distance relay race (by number of participants). Portland served as the center of an elite running group, the Nike Oregon Project, until its 2019 disbandment following coach Alberto Salazar's ban for doping violations, and is the residence of elite runners including Galen Rupp, the American record holder at 10,000 m. Historic Erv Lind Stadium is located in Normandale Park. It has been home to professional and college softball. Portland also hosts numerous cycling events and has become an elite bicycle racing destination. The Oregon Bicycle Racing Association supports hundreds of official bicycling events every year. Weekly events at Alpenrose Velodrome and Portland International Raceway allow for racing nearly every night of the week from March through September. Cyclocross races, such as the Cross Crusade, can attract over 1,000 riders and spectators. On December 4, 2019, the Vancouver Riptide of the American Ultimate Disc League announced that they had ceased team operations in Vancouver in 2017 and were relocating to Portland, Oregon, for the 2020 AUDL season. Parks and recreation Parks and greenspace planning date back to John Charles Olmsted's 1903 Report to the Portland Park Board. In 1995, voters in the Portland metropolitan region passed a regional bond measure to acquire valuable natural areas for fish, wildlife, and people. Ten years later, a substantial area of ecologically valuable natural land had been purchased and permanently protected from development. Portland is one of only four cities in the U.S. with extinct volcanoes within their boundaries (along with Pilot Butte in Bend, Oregon, Jackson Volcano in Jackson, Mississippi, and Diamond Head in Honolulu, Hawaii). Mount Tabor Park is known for its scenic views and historic reservoirs. Forest Park is the largest wilderness park within city limits in the United States. Portland is also home to Mill Ends Park, the world's smallest park (a two-foot-diameter circle, its area is only about 0.3 m²). Washington Park is just west of downtown and is home to the Oregon Zoo, Hoyt Arboretum, the Portland Japanese Garden, and the International Rose Test Garden. Portland is also home to Lan Su Chinese Garden (formerly the Portland Classical Chinese Garden), an authentic representation of a Suzhou-style walled garden. 
Portland's east side has several formal public gardens: the historic Peninsula Park Rose Garden, the rose gardens of Ladd's Addition, the Crystal Springs Rhododendron Garden, the Leach Botanical Garden, and The Grotto. Portland's downtown features two groups of contiguous city blocks dedicated to park space: the North and South Park Blocks. The Tom McCall Waterfront Park was built in 1974 along the length of the downtown waterfront after Harbor Drive was removed; it now hosts large events throughout the year. The nearby historically significant Burnside Skatepark and five indoor skateparks give Portland a reputation as possibly "the most skateboard-friendly town in America." Tryon Creek State Natural Area is one of three Oregon State Parks in Portland and the most popular; its creek has a run of steelhead. The other two State Parks are Willamette Stone State Heritage Site, in the West Hills, and the Government Island State Recreation Area in the Columbia River near Portland International Airport. Portland's city park system has been proclaimed one of the best in America. In its 2013 ParkScore ranking, the Trust for Public Land reported Portland had the seventh-best park system among the 50 most populous U.S. cities. In February 2015, the City Council approved a total ban on smoking in all city parks and natural areas and the ban has been in force since July 1, 2015. The ban includes cigarettes, vaping, as well as marijuana. Government The city of Portland is governed by the Portland City Council, which includes a mayor, four commissioners, and an auditor. Each is elected citywide to serve a four-year term. Each commissioner oversees one or more bureaus responsible for the day-to-day operation of the city. The mayor serves as chairman of the council and is principally responsible for allocating department assignments to the other commissioners. The auditor provides checks and balances in the commission form of government and accountability for the use of public resources. In addition, the auditor provides access to information and reports on various matters of city government. Portland is the only large city left in the United States with the commission form of government. The city's Office of Community & Civic Life (formerly the Office of Neighborhood Involvement) serves as a conduit between city government and Portland's 95 officially recognized neighborhoods. Each neighborhood is represented by a volunteer-based neighborhood association which serves as a liaison between residents of the neighborhood and the city government. The city provides funding to neighborhood associations through seven district coalitions, each of which is a geographical grouping of several neighborhood associations. Most (but not all) neighborhood associations belong to one of these district coalitions. Portland and its surrounding metropolitan area are served by Metro, the United States' only directly elected metropolitan planning organization. Metro's charter gives it responsibility for land use and transportation planning, solid waste management, and map development. Metro also owns and operates the Oregon Convention Center, Oregon Zoo, Portland Center for the Performing Arts, and Portland Metropolitan Exposition Center. The Multnomah County government provides many services to the Portland area, as do Washington and Clackamas counties to the west and south. Law enforcement is provided by the Portland Police Bureau. Fire and emergency services are provided by Portland Fire & Rescue. 
Politics Portland is a territorial charter city, and strongly favors the Democratic Party. All city offices are non-partisan. However, a Republican has not been elected as mayor since Fred L. Peterson in 1952, and has not served as mayor even on an interim basis since Connie McCready held the post from 1979 to 1980. Portland's delegation to the Oregon Legislative Assembly is entirely Democratic. In the 76th Oregon Legislative Assembly, which first convened in 2011, four state Senators represent Portland in the state Senate: Diane Rosenbaum (District 21), Chip Shields (District 22), Jackie Dingfelder (District 23), and Rod Monroe (District 24). Portland sends six Representatives to the state House of Representatives: Rob Nosse (District 42), Tawna Sanchez (District 43), Tina Kotek (District 44), Barbara Smith Warner (District 45), Alissa Keny-Guyer (District 46), and Diego Hernandez (District 47). Portland is split among three U.S. congressional districts. Most of the city is in the 3rd District, represented by Earl Blumenauer, who served on the city council from 1986 until his election to Congress in 1996. Most of the city west of the Willamette River is part of the 1st District, represented by Suzanne Bonamici. A small portion of southwestern Portland is in the 5th District, represented by Kurt Schrader. All three are Democrats; a Republican has not represented a significant portion of Portland in the U.S. House of Representatives since 1975. Both of Oregon's senators, Ron Wyden and Jeff Merkley, are from Portland and are also both Democrats. In the 2008 presidential election, Democratic candidate Barack Obama easily carried Portland, winning 245,464 votes from city residents to 50,614 for his Republican rival, John McCain. In the 2012 presidential election, Democratic candidate Barack Obama again easily carried Portland, winning 256,925 votes from Multnomah County residents to 70,958 for his Republican rival, Mitt Romney. Sam Adams, the former mayor of Portland, became the city's first openly gay mayor in 2009. In 2004, 59.7 percent of Multnomah County voters cast ballots against Oregon Ballot Measure 36, which amended the Oregon Constitution to prohibit recognition of same-sex marriages. The measure passed with 56.6% of the statewide vote. Multnomah County is one of two counties where a majority voted against the initiative; the other is Benton County, which includes Corvallis, home of Oregon State University. On April 28, 2005, Portland became the only city in the nation to withdraw from a Joint Terrorism Task Force. On February 19, 2015, the Portland city council approved permanently staffing the JTTF with two of the city's police officers. Planning and development The city consulted with urban planners as far back as 1904, resulting in the development of Washington Park and the 40-Mile Loop greenway, which interconnects many of the city's parks. Portland is often cited as an example of a city with strong land use planning controls. This is largely the result of statewide land conservation policies adopted in 1973 under Governor Tom McCall, in particular the requirement for an urban growth boundary (UGB) for every city and metropolitan area. The opposite extreme, a city with few or no controls, is typically illustrated by Houston. Portland's urban growth boundary, adopted in 1979, separates urban areas (where high-density development is encouraged and focused) from traditional farm land (where restrictions on non-agricultural development are very strict). 
This was atypical in an era when automobile use led many areas to neglect their core cities in favor of development along interstate highways, in suburbs, and satellite cities. The original state rules included a provision for expanding urban growth boundaries, but critics felt this was not being accomplished. In 1995, the State passed a law requiring cities to expand UGBs to provide enough undeveloped land for a 20-year supply of future housing at projected growth levels. Oregon's 1973 "urban growth boundary" law limits the boundaries for large-scale development in each metropolitan area in the state. Development outside the boundary has limited access to utilities such as sewage, water and telecommunications, as well as to coverage by fire, police and schools. Originally this law mandated that the city maintain enough land within the boundary to provide an estimated 20 years of growth; however, in 2007 the legislature changed the law to require the maintenance of an estimated 50 years of growth within the boundary, as well as the protection of accompanying farm and rural lands. The growth boundary, along with efforts of the Portland Development Commission to create economic development zones, has led to the development of a large portion of downtown, a large number of mid- and high-rise developments, and an overall increase in housing and business density. Prosper Portland (formerly the Portland Development Commission) is a semi-public agency that plays a major role in downtown development; city voters created it in 1958 to serve as the city's urban renewal agency. It provides housing and economic development programs within the city and works behind the scenes with major local developers to create large projects. In the early 1960s, the Portland Development Commission led the razing of a large Italian-Jewish neighborhood downtown, bounded roughly by I-405, the Willamette River, 4th Avenue and Market Street. Mayor Neil Goldschmidt took office in 1972 as a proponent of bringing housing and the associated vitality back to the downtown area, which was seen as emptying out after 5 pm. The effort has had dramatic effects in the 30 years since, with many thousands of new housing units clustered in three areas: north of Portland State University (between I-405, SW Broadway, and SW Taylor St.); the RiverPlace development along the waterfront under the Marquam (I-5) bridge; and most notably in the Pearl District (between I-405, Burnside St., NW Northrup St., and NW 9th Ave.). Historically, environmental consciousness has weighed significantly in the city's planning and development efforts. Portland was one of the first cities in the United States to promote and integrate alternative forms of transportation, such as the MAX Light Rail and extensive bike paths. The Urban Greenspaces Institute, housed in Portland State University Geography Department's Center for Mapping Research, promotes better integration of the built and natural environments. The institute works on urban park, trail, and natural areas planning issues, both at the local and regional levels. In October 2009, the Portland City Council unanimously adopted a climate action plan intended to cut the city's greenhouse gas emissions to 80% below 1990 levels by 2050. The city's longstanding efforts were recognized in a 2010 Reuters report, which named Portland the second-most environmentally conscious or "green" city in the world after Reykjavík, Iceland. 
As of 2012, Portland was the largest city in the United States that did not add fluoride to its public water supply, and fluoridation has historically been a subject of controversy in the city. Portland voters have four times voted against fluoridation, in 1956, 1962, 1980 (repealing a 1978 vote in favor), and 2013. In 2012 the city council, responding to advocacy from public health organizations and others, voted unanimously to begin fluoridation by 2014. Fluoridation opponents forced a public vote on the issue, and on May 21, 2013, city voters again rejected fluoridation. Education Primary and secondary education Nine public school districts and many private schools serve Portland. Portland Public Schools is the largest school district, operating 85 public schools. David Douglas High School, in the Powellhurst neighborhood, has the largest enrollment of any public high school in the city. Other high schools include Benson, Cleveland, Franklin, Grant, Jefferson, Madison, Parkrose, Roosevelt, and Ida B. Wells-Barnett (formerly Woodrow Wilson), as well as several suburban high schools that serve the city's outer areas. Established in 1869, Lincoln High School (formerly Portland High School) is the city's oldest public education institution and one of the two oldest high schools west of the Mississippi River (after San Francisco's Lowell High School). Former public schools in the city included Washington High School, which operated from 1906 until 1981, as well as Adams and Jackson, which also closed the same year. The area's private schools include The Northwest Academy, Portland Jewish Academy, Rosemary Anderson High School, Portland Adventist Academy, Portland Lutheran School, Trinity Academy, Catlin Gabel School, and Oregon Episcopal School. The city and surrounding metropolitan area are also home to a large number of Roman Catholic-affiliated private schools, including St. Mary's Academy, an all-girls school; De La Salle North Catholic High School; the co-educational Jesuit High School; La Salle High School; and Central Catholic High School, the only archdiocesan high school in the Roman Catholic Archdiocese of Portland. Higher education Portland State University has the second-largest enrollment of any university in the state (after Oregon State University), with a student body of nearly 30,000. It has been named among the top fifteen percent of American regional universities by The Princeton Review for undergraduate education, and has been internationally recognized for its degrees in Master of Business Administration and urban planning. The city is also home to the Oregon Health & Science University, as well as Portland Community College. Notable private universities include the University of Portland, a Roman Catholic university affiliated with the Congregation of Holy Cross; Reed College, a liberal arts college; and Lewis & Clark College. Other institutions of higher learning are also located within the city. Media The Oregonian is the only daily general-interest newspaper serving Portland. It also circulates throughout the state and in Clark County, Washington. Smaller local newspapers, distributed free of charge in newspaper boxes and at venues around the city, include the Portland Tribune (general-interest paper published on Tuesdays and Thursdays), Willamette Week (general-interest alternative weekly published on Wednesdays), and The Portland Mercury (another alt-weekly, targeted at younger urban readers and published every other Thursday). 
The Portland area also has newspapers that are published for specific communities, including The Asian Reporter (a weekly covering Asian news, both international and local) and The Skanner (a weekly African-American newspaper covering both local and national news). The Portland Business Journal covers business-related news on a weekly basis, as does The Daily Journal of Commerce, its main competitor. Portland Monthly is a monthly news and culture magazine. The Bee, over 105 years old, is another neighborhood newspaper serving the inner southeast neighborhoods. Infrastructure Healthcare Legacy Health, a non-profit healthcare system in Portland, operates multiple facilities in the city and surrounding suburbs. These include Legacy Emanuel, founded in 1912, in Northeast Portland; and Legacy Good Samaritan, founded in 1875, in Northwest Portland. Randall Children's Hospital operates at the Legacy Emanuel campus. Good Samaritan has centers for breast health, cancer, and stroke, and is home to the Legacy Devers Eye Institute, the Legacy Obesity and Diabetes Institute, the Legacy Diabetes and Endocrinology Center, the Legacy Rehabilitation Clinic of Oregon, and the Linfield-Good Samaritan School of Nursing. The Catholic-affiliated Providence Health & Services operates Providence Portland Medical Center in the North Tabor neighborhood of the city. Oregon Health & Science University is a university hospital formed in 1974. The Veterans Affairs Medical Center operates next to the Oregon Health & Science University main campus. Adventist Medical Center also serves the city. Shriners Hospital for Children is a small children's hospital established in 1923. Transportation The Portland metropolitan area has transportation services common to major U.S. cities, though Oregon's emphasis on proactive land-use planning and transit-oriented development within the urban growth boundary means commuters have multiple well-developed options. In 2014, Travel + Leisure magazine rated Portland as the No. 1 most pedestrian- and transit-friendly city in the United States. A 2011 study by Walk Score ranked Portland the 12th most walkable of the fifty largest U.S. cities. In 2008, 12.6% of all commutes in Portland were on public transit. TriMet operates most of the region's buses and the MAX (short for Metropolitan Area Express) light rail system, which connects the city and suburbs. The MAX system, which opened in 1986, has expanded to five lines, the latest being the Orange Line to Milwaukie, in service since September 2015. WES Commuter Rail opened in February 2009 in Portland's western suburbs, linking Beaverton and Wilsonville. The city-owned Portland Streetcar serves two routes in the Central City – downtown and adjacent districts. The first line, which opened in 2001 and was extended in 2005–07, operates from the South
banned African American settlement in 1849. In the 19th century, certain laws allowed the immigration of Chinese laborers but prohibited them from owning property or bringing their families. The early 1920s saw the rapid growth of the Ku Klux Klan, which became very influential in Oregon politics, culminating in the election of Walter M. Pierce as governor. The largest influxes of minority populations occurred during World War II, as the African American population grew by a factor of 10 for wartime work. After World War II, the Vanport flood in 1948 displaced many African Americans. As they resettled, redlining directed the displaced workers from the wartime settlement to neighboring Albina. There and elsewhere in Portland, they experienced police hostility, lack of employment, and mortgage discrimination, leading to half the black population leaving after the war. In the 1980s and 1990s, radical skinhead groups flourished in Portland. In 1988, Mulugeta Seraw, an Ethiopian immigrant, was killed by three skinheads. The response to his murder involved a community-driven series of rallies, campaigns, nonprofits and events designed to address Portland's racial history, and the city is now considered significantly more tolerant than it was at the time of Seraw's death in 1988. Households As of the 2010 census, there were 583,776 people living in the city, organized into 235,508 households. The population density was 4,375.2 people per square mile. There were 265,439 housing units at an average density of 1,989.4 per square mile (768.1/km²). Portland's population increased 10.3% between 2000 and 2010. Population growth in the Portland metropolitan area has outpaced the national average during the last decade, and this is expected to continue over the next 50 years. Out of 223,737 households, 24.5% had children under the age of 18 living with them, 38.1% were married couples living together, 10.8% had a female householder with no husband present, and 47.1% were non-families. 34.6% of all households were made up of individuals, and 9% had someone living alone who was 65 years of age or older. The average household size was 2.3 and the average family size was 3. The age distribution was 21.1% under the age of 18, 10.3% from 18 to 24, 34.7% from 25 to 44, 22.4% from 45 to 64, and 11.6% who were 65 years of age or older. The median age was 35 years. For every 100 females, there were 97.8 males. For every 100 females age 18 and over, there were 95.9 males. The median income for a household in the city was $40,146, and the median income for a family was $50,271. Males had a reported median income of $35,279 versus $29,344 reported for females. The per capita income for the city was $22,643. 13.1% of the population and 8.5% of families were below the poverty line. Out of the total population, 15.7% of those under the age of 18 and 10.4% of those 65 and older were living below the poverty line. Figures delineating income levels based on race are not available at this time. According to the Modern Language Association, in 2010 80.9% (539,885) of Multnomah County residents ages 5 and over spoke English as their primary language at home. 8.1% of the population spoke Spanish (54,036), with Vietnamese speakers making up 1.9%, and Russian 1.5%. Social The Portland metropolitan area has historically had a significant LGBT population throughout the late 20th and early 21st century. 
In 2015, the city metro had the second highest percentage of LGBT residents in the United States, with 5.4% of residents identifying as gay, lesbian, bisexual, or transgender, second only to San Francisco. In 2006, it was reported to have the seventh highest LGBT population in the country, with 8.8% of residents identifying as gay, lesbian, or bisexual, and the metro ranking fourth in the nation at 6.1%. The city held its first pride festival in 1975 on the Portland State University campus. As recently as 2012, Portland was cited as the least religious city in the United States, with over 42% of residents identifying as religiously "unaffiliated", according to the nonpartisan and nonprofit Public Religion Research Institute's American Values Atlas. Homelessness A 2019 survey by the city's budget office showed that homelessness is perceived as the top challenge facing Portland, and was cited as a reason people move and do not participate in park programs. Calls to 911 concerning "unwanted persons" increased significantly between 2013 and 2018, and police increasingly deal with homeless and mentally ill people. This has taken a toll on the sense of safety among visitors and residents, and business owners have been adversely affected. Even though homeless services and shelter beds have increased, as of 2020 homelessness is considered an intractable problem in Portland. Crime According to the Federal Bureau of Investigation's Uniform Crime Report in 2009, Portland ranked 53rd in violent crime out of the top 75 U.S. cities with a population greater than 250,000. The murder rate in Portland in 2013 averaged 2.3 murders per 100,000 people per year, which was lower than the national average. In October 2009, Forbes magazine rated Portland as the third safest city in America. In 2011, 72% of arrested male subjects tested positive for illegal drugs, and the city was dubbed the "deadliest drug market in the Pacific Northwest" due to drug-related deaths. In 2010, ABC's Nightline reported that Portland is one of the largest hubs for child sex trafficking. In 2017, in the Portland metropolitan statistical area (comprising Clackamas, Columbia, Multnomah, Washington, and Yamhill counties in Oregon and Clark and Skamania counties in Washington), the murder rate was 2.6 and the violent crime rate 283.2 per 100,000 people per year. In 2017, the population within the city of Portland was 649,408 and there were 24 murders and 3,349 violent crimes. In the first quarter of 2021, Portland recorded the largest increase in homicides of any American city during that time period: there were 21 homicides, an increase of 950 percent over the same quarter the year before. Economy Portland's location is beneficial for several industries. Relatively low energy costs, accessible resources, north–south and east–west Interstates, international air terminals, large marine shipping facilities, and both west coast intercontinental railroads are all economic advantages. The city's marine terminals alone handle over 13 million tons of cargo per year, and the port is home to one of the largest commercial dry docks in the country. The Port of Portland is the third-largest export tonnage port on the west coast of the U.S. and, lying well upriver, is the largest freshwater port. The scrap steel industry's history in Portland predates World War II. 
By the 1950s, the scrap steel industry had become the city's number one industry for employment. The scrap steel industry thrives in the region, with Schnitzer Steel Industries, a prominent scrap steel company, shipping a record 1.15 million tons of scrap metal to Asia during 2003. Other heavy industry companies include ESCO Corporation and Oregon Steel Mills. Technology is a major component of the city's economy, with more than 1,200 technology companies existing within the metro area. This high density of technology companies has led to the nickname Silicon Forest being used to describe the Portland area, a reference to the abundance of trees in the region and to the Silicon Valley region in Northern California. The area also hosts facilities for software companies and online startup companies, some supported by local seed funding organizations and business incubators. Computer components manufacturer Intel is the Portland area's largest employer, providing jobs for more than 15,000 people, with several campuses to the west of central Portland in the city of Hillsboro. The Portland metro area has become a business cluster for the headquarters of athletic and outdoor gear and footwear manufacturers, although shoes are not manufactured in Portland itself. The area is home to the global, North American or U.S. headquarters of Nike, Adidas, Columbia Sportswear, LaCrosse Footwear, Dr. Martens, Li-Ning, Keen, and Hi-Tec Sports. While headquartered elsewhere, Merrell, Amer Sports and Under Armour have design studios and local offices in the Portland area. Portland-based Precision Castparts is one of two Fortune 500 companies headquartered in Oregon, the other being Nike. Other notable Portland-based companies include film animation studio Laika; commercial vehicle manufacturer Daimler Trucks North America; advertising firm Wieden+Kennedy; bankers Umpqua Holdings; and retailers Fred Meyer, New Seasons Market, KinderCare Learning Centers and Storables. Breweries are another major industry in Portland, which is home to 139 breweries/microbreweries, the 7th most in the nation, as of December 2018. Additionally, the city boasts a robust coffee culture that now rivals Seattle's and hosts over 20 coffee roasters. Housing In 2016, home prices in Portland grew faster than in any other city in the United States. Average apartment rents reported in November 2019 were $1,337 for a two-bedroom and $1,133 for a one-bedroom. In 2017, developers projected an additional 6,500 apartments to be built in the Portland metro area over the following year. However, as of December 2019, the number of homes available for rent or purchase in Portland continued to shrink. Over the preceding year, housing prices in Portland rose 2.5%. Housing prices in Portland continue to rise, with the median price rising from $391,400 in November 2018 to $415,000 in November 2019. There has been a rise in people from out of state moving to Portland, which affects housing availability. Because of the demand for affordable housing and the influx of new residents, more Portlanders in their 20s and 30s are still living in their parents' homes. Arts and culture Music, film, and performing arts Portland is home to a range of classical performing arts institutions, including the Portland Opera, the Oregon Symphony, and the Portland Youth Philharmonic; the latter, established in 1924, was the first youth orchestra established in the United States. 
The city is also home to several theaters and performing arts institutions, including the Oregon Ballet Theatre, Northwest Children's Theatre, Portland Center Stage, Artists Repertory Theatre, Miracle Theatre, and Tears of Joy Theatre. In 2013, the Guardian named the city's music scene as one of the "most vibrant" in the United States. Portland is home to famous bands such as the Kingsmen and Paul Revere & the Raiders, both known for their association with the song "Louie Louie" (1963). Other widely known musical groups include the Dandy Warhols, Quarterflash, Everclear, Pink Martini, Sleater-Kinney, Blitzen Trapper, the Decemberists, and the late Elliott Smith. More recently, Portugal. the Man, Modest Mouse, and the Shins have made their home in Portland as well. In the 1980s, the city was home to a burgeoning punk scene, which included bands such as the Wipers and Dead Moon. The city's now-demolished Satyricon nightclub was a punk venue notorious for being the place where Nirvana frontman Kurt Cobain first encountered future wife and Hole frontwoman Courtney Love in 1990. Love was then a resident of Portland and started several bands there with Kat Bjelland, later of Babes in Toyland. Multi-Grammy award-winning jazz artist Esperanza Spalding is from Portland and performed with the Chamber Music Society of Oregon at a young age. A wide range of films have been shot in Portland, from various independent features to major big-budget productions. Director Gus Van Sant has notably set and shot many of his films in the city. The city has also been featured in various television programs, notably the IFC sketch comedy series Portlandia. The series, which ran for eight seasons from 2011 to 2018, was shot on location in Portland, and satirized the city as a hub of liberal politics, organic food, alternative lifestyles, and anti-establishment attitudes. MTV's long-running reality show The Real World was also shot in Portland for the show's 29th season: The Real World: Portland premiered on MTV in 2013. Other television series shot in the city include Leverage, The Librarians, Under Suspicion, Grimm, and Nowhere Man. An unusual feature of Portland entertainment is the large number of movie theaters serving beer, often with second-run or revival films. Notable examples of these "brew and view" theaters include the Bagdad Theater and Pub, a former vaudeville theater built in 1927 by Universal Studios; Cinema 21; and the Laurelhurst Theater, in operation since 1923. Portland hosts the world's longest-running H. P. Lovecraft Film Festival at the Hollywood Theatre. Museums and recreation Portland is home to numerous museums and educational institutions, ranging from art museums to institutions devoted to science and wildlife. Among the science-oriented institutions are the Oregon Museum of Science and Industry (OMSI), which consists of five main halls and other ticketed attractions, such as the submarine, the ultra-large-screen Empirical Theater (which replaced an OMNIMAX theater in 2013), and the Kendall Planetarium. The World Forestry Center Discovery Museum, located in the city's Washington Park area, offers educational exhibits on forests and forest-related subjects. Also located in Washington Park are the Hoyt Arboretum, the International Rose Test Garden, the Japanese Garden, and the Oregon Zoo. 
The Portland Art Museum owns the city's largest art collection and presents a variety of touring exhibitions each year; with the addition of the Modern and Contemporary Art wing, it became one of the United States' 25 largest museums. Other museums include the Portland Children's Museum, a museum specifically geared for early childhood development, and the Oregon Historical Society Museum, founded in 1898, which has a variety of books, film, pictures, artifacts, and maps dating back throughout Oregon's history. It houses permanent and temporary exhibits about Oregon history, and hosts traveling exhibits about the history of the United States. Oaks Amusement Park, in the Sellwood district of Southeast Portland, is the city's only amusement park and is also one of the country's longest-running amusement parks. It has operated since 1905 and was known as the "Coney Island of the Northwest" upon its opening. Cuisine and breweries Portland has been named the best city in the world for street food by several publications and news outlets, including U.S. News & World Report and CNN. Food carts are extremely popular within the city, with over 600 licensed carts, giving Portland one of the most robust street-food scenes in North America. In 2014, the Washington Post called Portland the fourth best city for food in the United States. Portland is also known as a leader in specialty coffee. The city is home to Stumptown Coffee Roasters as well as dozens of other micro-roasteries and cafes. It is frequently claimed that Portland has the most breweries and independent microbreweries of any city in the world, with 58 active breweries within city limits and 70+ within the surrounding metro area. However, data compiled by the Brewers Association ranks Portland seventh in the United States as of 2018. Portland hosts a number of festivals throughout the year that celebrate beer and brewing, including the Oregon Brewers Festival, held in Tom McCall Waterfront Park. Held each summer during the last full weekend of July, it is the largest outdoor craft beer festival in North America, with over 70,000 attendees in 2008. Other major beer festivals throughout the calendar year include the Spring Beer and Wine Festival in April, the North American Organic Brewers Festival in June, the Portland International Beerfest in July, and the Holiday Ale Festival in December. Sustainability Popular Science awarded Portland the title of the Greenest City in America in 2008, and Grist magazine listed it in 2007 as the second greenest city in the world. Ten years later, WalletHub rated the city as the 10th greenest. The city became a pioneer of state-directed metropolitan planning, a program instituted statewide in 1969 to keep urban growth compact within defined boundaries. Portland was the first city to enact a comprehensive plan to reduce carbon dioxide emissions. Free speech The strong free speech protections of the Oregon Constitution, upheld by the Oregon Supreme Court in State v. Henry, mean that full nudity and lap dances in strip clubs are protected speech. Portland has the highest number of strip clubs per capita of any city in the United States, and Oregon ranks as the highest state for strip clubs per capita. In November 2008, a Multnomah County judge dismissed charges against a nude bicyclist arrested on June 26, 2008. 
The judge stated that the city's annual World Naked Bike Ride, held each year in June since 2004, has created a "well-established tradition" in Portland where cyclists may ride naked as a form of protest against cars and fossil fuel dependence. The defendant was not riding in the official World Naked Bike Ride at the time of his arrest, as it had occurred 12 days earlier that year, on June 14. From November 10 to 12, 2016, protests in Portland turned into a riot, when a group of anarchists broke off from a larger group of peaceful protesters who were opposed to the election of Donald Trump as president of the United States. Sports Portland is home to three major league sports franchises: the Portland Trail Blazers of the NBA, the Portland Timbers of Major League Soccer, and the Portland Thorns FC of the National Women's Soccer League. In 2015, the Timbers won the MLS Cup, the first male professional sports championship for a team from Portland since the Trail Blazers won the NBA championship in 1977. Despite being the 19th most populated metro area in the United States, Portland contains only one franchise from the NFL, NBA, NHL, or MLB, making it the United States' second most populated metro area with that distinction, behind San Antonio. The city has often been rumored to be in line for an additional franchise, although efforts to acquire a team have failed due to stadium funding issues. An organization known as the Portland Diamond Project (PDP) has worked with MLB and local government, and there are plans to have an MLB stadium constructed in the industrial district of Portland; the PDP has not yet received the funding for this project. Portland sports fans are characterized by their passionate support. The Trail Blazers sold out every home game between 1977 and 1995, a span of 814 consecutive games, the second-longest streak in American sports history. 
or DVD are often known as fullscreen. However, it also has several drawbacks. Some visual information is necessarily cropped out. It can also change a shot in which the camera was originally stationary to one in which it is frequently panning, or change a single continuous shot into one with frequent cuts. In a shot which was originally panned to show something new, or one in which something enters the shot from off-camera, it changes the timing of these appearances to the audience. As an example, in the film Oliver!, made in Panavision, the criminal Bill Sikes commits a murder. The murder takes place mostly offscreen, behind a staircase wall, and Oliver is a witness to it. As Sikes steps back from behind the wall, we see Oliver from the back watching him in terror. In the pan-and-scan version of the film, we see Oliver's reaction as the murder is being committed, but not when Sikes steps backward from the wall having done it. Often in a pan-and-scan telecast, a character will seem to be speaking offscreen, when what has really happened is that the pan-and-scan technique has cut his image out of the frame. Shoot and protect As television screenings of feature films became more common and more financially important, cinematographers began to work for compositions that would keep the vital information within the "TV safe area" of the frame. For example, the BBC suggested that programme makers recording in 16:9 frame their shots in a 14:9 aspect ratio, which was then broadcast on analogue services with small black bars at the top and bottom of the picture, while owners of widescreen TV sets receiving digital broadcasts would see the full 16:9 picture (this is known as shoot and protect). Reframing One modern alternative to pan and scan is to directly adjust the source material. This is very rare: the only known uses are computer-generated features, such as those produced by Pixar, and video games such as BioShock. They call their approach to full-screen versions reframing: some shots are pan and scan, while others (notably Warner Bros.' The Lego Movie) are transferred open matte (a full widescreen image extended with added image above and below; though for The Lego Movie, the transferred open matte used a widescreen image cropped to 16:9 with added image above and below to create a 1.37:1-framed Academy ratio image; this version was created for theaters that do not have anamorphic lens projection equipment). Another method is to keep the camera angle as tight as a pan shot, but move the location of characters, objects, or the camera, so that the subjects fit in the frame. The advent of DVDs and their use of anamorphic presentation, coupled with the increasing popularity of widescreen televisions and computer monitors, have rendered pan and scan less important. Fullscreen versions of films originally produced in widescreen are still available in the United States. Open matte Film makers may also create an original image that includes visual information that extends above and below the widescreen theatrical image; this is called "open matte". This may still be pan-and-scanned, but it gives the compositor the freedom to "zoom out" or "uncrop" the image to include not only the full width of the wide-format image, but additional visual content at the top and/or bottom of the screen, not included in the widescreen version. 
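To make the geometry behind pan and scan and open matte concrete, the short Python sketch below works through the aspect-ratio arithmetic using ratios mentioned in this article (4:3 television, 1.85:1 theatrical, 2.39:1 anamorphic, 1.37:1 full frame); the function names and the particular ratios chosen are illustrative assumptions, not a standard formula from any particular transfer process.

# Sketch of the aspect-ratio arithmetic behind pan and scan and open matte.
# All names and chosen ratios are illustrative.

def pan_and_scan_width_kept(source_ratio: float, target_ratio: float) -> float:
    """Fraction of the source frame's width that remains visible when the full
    height is kept and the sides are cropped (classic pan and scan)."""
    return min(1.0, target_ratio / source_ratio)

def open_matte_height_gain(full_frame_ratio: float, matted_ratio: float) -> float:
    """Extra vertical picture revealed by an open-matte transfer of the full
    frame, as a fraction of the matted (theatrical) height."""
    return matted_ratio / full_frame_ratio - 1.0

TV_4_3 = 4 / 3        # ~1.33:1 television frame
FLAT = 1.85           # common "flat" theatrical ratio
SCOPE = 2.39          # anamorphic "scope" ratio
FULL_FRAME = 1.37     # Academy-style full aperture used for open matte

print(f"1.85:1 to 4:3 pan and scan keeps {pan_and_scan_width_kept(FLAT, TV_4_3):.0%} of the width")
print(f"2.39:1 to 4:3 pan and scan keeps {pan_and_scan_width_kept(SCOPE, TV_4_3):.0%} of the width")
print(f"Open matte (1.37:1 full frame vs 1.85:1 matte) shows {open_matte_height_gain(FULL_FRAME, FLAT):.0%} more height")

Under those assumptions, roughly 28% of a 1.85:1 frame and more than 40% of a 2.39:1 frame is cropped away by a straight 4:3 pan and scan, while an open-matte transfer of a 1.37:1 full frame matted to 1.85:1 exposes about a third more vertical picture.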
As a general rule (prior to the adoption of DVD), special effects would be done within the theatrical aspect ratio, but not the full frame thereof; also, the expanded image area can sometimes include extraneous objects (such as cables, microphone booms, jet vapor trails, or overhead telephone wires) not intended to be included in the frame, depending upon the nature of the shot and how well the full frame was protected. A more unusual use of the technique is present in the 17 original Dragon Ball Z movies, released from 1986 to 1996. The films were displayed in 1.85:1 during their theatrical release, but this was in fact cut down from 1.37:1 animation, a choice made so that the VHS releases would be nearly uncropped. Adjusting cinematography to account for aspect ratios Changes in screen angle (panning) may be necessary to prevent closeups between two
sustainability. It monitors the effects of ocean acidity on corals and shellfish and reports the results to the UK government. It also cultivates algae that could be used to make biofuels or in the treatment of wastewater, using technology such as photo-bioreactors. It works alongside the Boots Group to investigate the use of algae in skincare products, taking advantage of the chemicals the algae produce to protect themselves from the sun. A scheme operated over summer 2018 to provide meals during the summer holidays for children whose parents are on a low income and cannot afford to provide their children with healthy meals. UPSU, also known as the University of Plymouth Student Union, is based underground near the library. Every student at the University of Plymouth is a member of UPSU. The Union employs students across the University, from bar staff to events technicians. Every year the students at the University have an opportunity to vote on which sabbatical officers represent them. In 2019 over 4,000 students voted in the UPSU elections. Demography From the 2011 Census, the Office for National Statistics reported that Plymouth's unitary authority area population was 256,384, which was 15,664 more people than at the 2001 Census, when Plymouth had a population of 240,720. The Plymouth urban area (the urban sprawl which extends outside the authority's boundaries) had a population of 260,203 in 2011. The city's average household size was 2.3 persons. At the time of the 2011 UK census, the ethnic composition of Plymouth's population was 96.2% White (of which 92.9% was White British), with the largest minority ethnic group being Chinese at 0.5%. The White Irish ethnic group saw the largest decline in its share of the population since the 2001 Census (−24%), while the Other Asian and Black African groups had the largest increases (360% and 351% respectively). This excludes the two new ethnic groups added to the 2011 census, Gypsy or Irish Traveller and Arab. The population rose rapidly during the second half of the 19th century, but declined by over 1.6% from 1931 to 1951. Plymouth's gross value added (a measure of the size of its economy) was £5,169 million in 2013, making up 25% of Devon's GVA. Its GVA per person was £19,943, which was £3,812 lower than the national average of £23,755. Plymouth's unemployment rate was 7.0% in 2014, which was 2.0 points higher than the South West average and 0.8 points higher than the average for Great Britain (England, Wales and Scotland). A 2014 profile by the National Health Service showed Plymouth had higher than average levels of poverty and deprivation (26.2% of the population among the poorest 20.4% nationally). Life expectancy, at 78.3 years for men and 82.1 for women, was the lowest of any region in the South West of England. Economy Because of its coastal location, the economy of Plymouth has traditionally been maritime, in particular the defence sector, with over 12,000 people employed and approximately 7,500 in the armed forces. The Plymouth Gin Distillery has been producing Plymouth Gin since 1793, which was exported around the world by the Royal Navy. During the 1930s, it was the most widely distributed gin, and it had a controlled term of origin until 2015. Since the 1980s, employment in the defence sector has decreased substantially and the public sector is now prominent, particularly in administration, health, education, medicine and engineering. 
Devonport Dockyard is the UK's only naval base that refits nuclear submarines and the Navy estimates that the Dockyard generates about 10% of Plymouth's income. Plymouth has the largest cluster of marine and maritime businesses in the south west with 270 firms operating within the sector. Other substantial employers include the university with almost 3,000 staff, the national retail chain The Range at their Estover headquarters, as well as the Plymouth Science Park employing 500 people in 50 companies. Plymouth has a post-war shopping area in the city centre with substantial pedestrianisation. At the west end of the zone inside a grade II listed building is the Pannier Market that was completed in 1959 – pannier meaning "basket" from French, so it translates as "basket market". In terms of retail floorspace, Plymouth is ranked in the top five in the South West, and 29th nationally. Plymouth was one of the first ten British cities to trial the new Business improvement district initiative. The Tinside Pool is situated at the foot of the Hoe and became a grade II listed building in 1998 before being restored to its 1930s look for £3.4 million. Plymouth 2020 Since 2003, Plymouth Council has been undertaking a project of urban redevelopment called the "Vision for Plymouth" launched by the architect David Mackay and backed by both Plymouth City Council and the Plymouth Chamber of Commerce (PCC). Its projects range from shopping centres, a cruise terminal and a boulevard to increasing the population to 300,000 and building 33,000 dwellings. In 2004 the old Drake Circus shopping centre and Charles Cross car park were demolished and replaced by the new Drake Circus Shopping Centre, which opened in October 2006. It received negative feedback before opening when David Mackay said it was already "ten years out of date". It was awarded the first ever Carbuncle Cup, given to Britain's ugliest building, in 2006. In contrast, the Theatre Royal's production and education centre, TR2, which was built on wasteland at Cattedown, was a runner-up for the RIBA Stirling Prize for Architecture in 2003. There is a project involving the future relocation of Plymouth City Council's headquarters, the civic centre, to the current location of the Bretonside bus station; it would involve both the bus station and civic centre being demolished and rebuilt together at that location, with the land from the civic centre being sold off. Other suggestions include the demolition of the Plymouth Pavilions entertainment arena to create a canal "boulevard" linking Millbay to the city centre. Millbay is being regenerated with mixed residential, retail and office space alongside the ferry port. Transport The A38 dual-carriageway runs from east to west across the north of the city. Within the city it is known as 'The Parkway' and represents the boundary between the older parts of the city and more recently developed suburban areas. Heading east, it connects Plymouth to the M5 motorway about away near Exeter; and heading west it connects Devon with Cornwall via the Tamar Bridge. Bus services are mainly provided by Plymouth Citybus and Stagecoach South West, but a few routes are served by smaller local operators. Long distance intercity bus services terminate at Plymouth coach station. There are three Park and ride services at Milehouse, Coypool (Plympton) and George Junction (Plymouth City Airport), which are operated by Stagecoach South West.
A regular international ferry service provided by Brittany Ferries operates from Millbay taking cars and foot passengers directly to France (Roscoff) and Spain (Santander) on the three ferries, MV Armorique, MV Bretagne and MV Pont-Aven. The Cremyll Ferry is a passenger ferry between Stonehouse and the Cornish hamlet of Cremyll, which is believed to have operated continuously since 1204. There is also a pedestrian ferry from the Mayflower Steps to Mount Batten, and an alternative to using the Tamar Bridge via the Torpoint Ferry (vehicle and pedestrian) across the River Tamar. The city's airport was Plymouth City Airport about north of the city centre. The airport was home to the local airline Air Southwest, which operated flights across the United Kingdom and Ireland. In June 2003, a report by the South West RDA was published looking at the future of aviation in the south-west and the possible closure of airports. It concluded that the best option for the south-west was to close Plymouth City Airport and expand Exeter International Airport and Newquay Cornwall Airport, although it did conclude that this was not the best option for Plymouth. In April 2011, it was announced that the airport would close, which it did on 23 December. A local company, FlyPlymouth, put forward plans in 2015 to reopen the airport by 2018, providing daily services to various destinations including London, but as of now, these projects have stalled. Plymouth railway station, which opened on its present site in 1877, is managed by Great Western Railway and is also served by trains on the CrossCountry network. The station was previously named Plymouth North Road, when there were other main line stations in the city at Millbay and Friary. These have now closed. Smaller stations in the suburban area west of the city centre are served by trains on the Tamar Valley Line to Gunnislake and local services on the Cornish Main Line, which crosses the Tamar on the Royal Albert Bridge. This was designed by Brunel and opened in 1859. The parallel road bridge was completed in 1961. There have been proposals to reopen the Exeter to Plymouth railway of the LSWR which would connect Cornwall and Plymouth to Exeter using the former Southern Railway main line from Plymouth to Exeter via Okehampton, because the main line through South Devon is vulnerable to damage from rough seas at Dawlish, where some of the cliffs are also fragile. There are related proposals to reopen part of the old main line from Bere Alston on the Plymouth-Gunnislake line as far as Tavistock to serve a new housing development, but although the idea has been discussed since 2008 at least progress has been slow. Plymouth is at the southern end of the long Devon Coast to Coast Cycle Route (National Cycle Route 27). The route runs mostly traffic-free on off-road sections between Ilfracombe and Plymouth. The route uses former railway lines, though there are some stretches on public roads. Religion Plymouth has about 150 churches and its Roman Catholic cathedral (1858) is in Stonehouse. The city's oldest church is Plymouth Minster, also known as St Andrew's Church, (Anglican) located at the top of Royal Parade—it is the largest parish church in Devon and has been a site of gathering since AD 800. The city also includes five Baptist churches, over twenty Methodist chapels, and thirteen Roman Catholic churches. 
In 1831 the first Brethren assembly in England, a movement of conservative non-denominational Evangelical Christians, was established in the city, so that Brethren are often called Plymouth Brethren, although the movement did not begin locally. Plymouth has the first known reference to Jews in the South West from Sir Francis Drake's voyages in 1577 to 1580, as his log mentioned "Moses the Jew" – a man from Plymouth. The Plymouth Synagogue is a Grade II* listed building, built in 1762, and is the oldest Ashkenazi synagogue in the English-speaking world. There are also places of worship for Islam, Baháʼí, Buddhism, Unitarianism, Chinese beliefs and Humanism. 58.1% of the population described themselves in the 2011 census return as being at least nominally Christian and 0.8% as Muslim, with all other religions represented by less than 0.5% each. The proportion of people without a religion is 32.9%, above the national average of 24.7%; 7.1% did not state their religious belief. Since the 2001 Census, the number of Christians and Jews has decreased (−16% and −7% respectively), while all other religions have increased and non-religious people have almost doubled in number. Culture Built in 1815, Union Street was at the heart of Plymouth's historical culture. It became known as the servicemen's playground, as it was where sailors from the Royal Navy would seek entertainment of all kinds. During the 1930s, there were 30 pubs and it attracted such performers as Charlie Chaplin to the New Palace Theatre. It was described in 2008 as the late-night hub of Plymouth's entertainment strip. Outdoor events and festivals are held including the annual British Firework Championships in August, which attracts tens of thousands of people across the waterfront. In August 2006 the world record for the most simultaneous fireworks was surpassed by Roy Lowry of the University of Plymouth, over Plymouth Sound. Since 2014 MTV Crashes Plymouth has taken place every July on Plymouth Hoe, hosting big-name acts such as The 1975, Little Mix, Tinie Tempah and Busted. Between 1992 and 2012 the Music of the Night celebration was performed in the Royal Citadel by the 29 Commando Regiment and local performers to raise money for local and military charities. A number of other smaller cultural events take place annually, including Plymouth Art Weekender, Plymouth Fringe Festival and Illuminate Festival. The city's main theatre is Theatre Royal Plymouth, presenting large-scale West End shows and smaller works as well as an extensive education and outreach programme. The main building is located in the city centre and contains three performance spaces – The Lyric (1,315 capacity), Drum Theatre (200 capacity), and The Lab (60 capacity) – and the theatre also runs its own specialised production and creative learning centre called TR2, based in Cattedown. Plymouth Pavilions has multiple uses for the city, staging music concerts, basketball matches and stand-up comedy. There are also three cinemas: Reel Cinema at Derrys Cross, Plymouth Arts Centre at Looe Street and a Vue cinema at the Barbican Leisure Park. Barbican Theatre, Plymouth delivers a theatre and dance programme of performances and workshops focused on young people and emerging artists; it contains a main auditorium (110–140 capacity) and a rehearsal studio, and also hosts the B-Bar (80 capacity), which offers a programme of music, comedy and spoken word performance.
The Plymouth Athenaeum, which includes a local interest library, is a society dedicated to the promotion of learning in the fields of science, technology, literature and art. In 2017 its auditorium (340 capacity) returned to use as a theatre, having been out of service since 2009. The Plymouth City Museum and Art Gallery is operated by Plymouth City Council with free admission; it has six galleries. Plymouth is the regional television centre of BBC South West. A team of journalists is headquartered at Plymouth for the ITV West Country regional station, after a merger with ITV West forced ITV Westcountry to close on 16 February 2009. The main local newspapers serving Plymouth are The Herald and Western Morning News, with Radio Plymouth, BBC Radio Devon, Heart South West, and Pirate FM being the local radio stations with the most listeners. Sport Plymouth is home to Plymouth Argyle F.C., who play in the third tier of English football, known as Football League One. The team's home ground is called Home Park and is located in Central Park. It links itself with the group of English non-conformists that left Plymouth for the New World in 1620: its nickname is "The Pilgrims". The city also has three non-League football clubs: Plymouth Parkway, who play at Bolitho Park; Elburton Villa, who play at Haye Road; and Plymstock United, who play at Dean Cross. Plymouth Parkway were promoted from the South West Peninsula League to the Western League and then, after two Covid-19-interrupted seasons, to the Southern Football League in 2021, whilst Elburton Villa and Plymstock United continue to compete in the South West Peninsula League. Other sports clubs include Plymouth Albion, Plymouth City Patriots and Plymouth Gladiators. Plymouth Albion Rugby Football Club is a rugby union club that was founded in 1875 and is currently competing in National League 1, the third tier of professional English rugby. They play at the Brickfields. Plymouth Raiders, founded in 1983, played in the British Basketball League – the top tier of British basketball. Since 2021 the Raiders have been replaced by the Plymouth City Patriots. Both teams have been based in the Plymouth Pavilions entertainment arena. Plymouth Gladiators are a speedway team, currently competing in the British National League, with home meetings taking place at the Plymouth Coliseum. Plymouth Cricket Club was formed in 1843; the current 1st XI play in the Devon Premier League. Plymouth is also home to Plymouth Marjons Hockey Club, with their 1st XI playing in the National League last season. Plymouth Mariners baseball club play in the South West Baseball League, with home games at Wilson Field in Central Park. Plymouth was home to an American football club, the Plymouth Admirals, until 2010. Plymouth Leander is the most successful swimming club in Great Britain, along with Plymouth Diving Club. Plymouth is an important centre for watersports, especially scuba diving and sailing. The Port of Plymouth Regatta is one of the oldest regattas in the world, and has been held regularly since 1823. In September 2011, Plymouth hosted the America's Cup World Series for nine days. Public services Since 1973 Plymouth has been supplied with water by South West Water. Prior to the 1973 takeover it was supplied by Plymouth County Borough Corporation. Before the 19th century two leats were built to provide drinking water for the town. They carried water from Dartmoor to Plymouth.
A watercourse, known as Plymouth or Drake's Leat, was opened on 24 April 1591 to tap the River Meavy. The Devonport Leat was constructed to carry fresh drinking water to the expanding town of Devonport and its ever-growing dockyard. It was fed by three Dartmoor rivers: the West Dart, Cowsic and Blackabrook. It seems to have been carrying water since 1797, but it was officially completed in 1801. It was originally designed to carry water to Devonport town but has since been shortened and now carries water to Burrator Reservoir, which feeds most of the water supply of Plymouth. Burrator Reservoir is located about north of the city and was constructed in 1898 and expanded in 1928. Plymouth City Council is responsible for waste management throughout the city and South West Water is responsible for sewerage. Plymouth's electricity is supplied from the National Grid and distributed to Plymouth via Western Power Distribution. On the outskirts of Plympton is a combined cycle gas-powered station, the Langage Power Station, which started to produce electricity for Plymouth at the end of 2009. Her Majesty's Courts Service provides a magistrates' court and a Combined Crown and County Court centre in the city. The Plymouth Borough Police, formed in 1836, eventually became part of Devon and Cornwall Constabulary. There are police stations at Charles Cross and Crownhill (the Divisional HQ) and smaller stations at Plympton and Plymstock. The city has one of the Devon and Cornwall Area Crown Prosecution Service Divisional offices. Plymouth has five fire stations, located in Camel's Head, Crownhill, Greenbank, Plympton and Plymstock, which are part of Devon and Somerset Fire and Rescue Service. The Royal National Lifeboat Institution has an Atlantic 85 class lifeboat and a Severn class lifeboat stationed at Millbay Docks. Plymouth is served by Plymouth Hospitals NHS Trust and the city's NHS hospital is Derriford Hospital north of the city centre. The Royal Eye Infirmary is located at Derriford Hospital. South Western Ambulance Service NHS Foundation Trust operates in Plymouth and the rest of the south west; its headquarters are in Exeter. The mid-19th-century burial ground at Ford Park Cemetery was reopened in 2007 by a successful trust, and the City Council operates two large early 20th-century cemeteries at Weston Mill and Efford, both with crematoria and chapels. There is also a privately owned cemetery on the outskirts of the city, Drake Memorial Park, which does not allow headstones to mark graves, only a brass plaque set into the ground. Landmarks and tourist attractions After the English Civil War the Royal Citadel was erected in 1666 towards the eastern section of Plymouth Hoe, to defend the port from naval attacks, suppress Plymothian Parliamentary leanings and to train the armed forces. Currently, guided tours are available in the summer months. Further west is Smeaton's Tower, a lighthouse constructed in 1759; it was dismantled in 1877 and the top two-thirds were reassembled on Plymouth Hoe. It is open to the public and has views over the Plymouth Sound and the city from the lantern room. Plymouth has 20 war memorials, of which nine are on The Hoe, including the Plymouth Naval Memorial, which remembers those killed in World Wars I and II, and the Armada Memorial, which commemorates the defeat of the Spanish Armada.
The early port settlement of Plymouth, called "Sutton", approximates to the area now referred to as the Barbican and has 100 listed buildings and the largest concentration of cobbled streets in Britain. The Pilgrim Fathers left for the New World in 1620 near the commemorative Mayflower Steps in Sutton Pool. Also on Sutton Pool is the National Marine Aquarium which displays 400 marine species and includes Britain's deepest aquarium tank. upstream on the opposite side of the River Plym is the Saltram estate, which has a Jacobean and Georgian mansion. On the northern outskirts of the city, Crownhill Fort is a well-restored example of a "Palmerston's Folly". It is owned by the Landmark Trust and is open to the public. To the west of the city is Devonport, one of Plymouth's historic quarters. As part of Devonport's millennium regeneration project, the Devonport Heritage Trail has been introduced, complete with over 70 waymarkers outlining the route. Plymouth is often used as a base by visitors to Dartmoor, the Tamar Valley and the beaches of south-east Cornwall. Kingsand, Cawsand and Whitsand Bay are popular. The Roland Levinsky building, the landmark building of the University of Plymouth, is located in the city's central quarter. Designed by leading architect Henning Larsen, the building was opened in 2008 and houses the University's Arts faculty. Beckley Point, at 78m / 20 floors, is Plymouth's tallest building and was completed on 8 February 2018. It was designed by Boyes Rees Architects and built by contractors Kier. Notable people People from Plymouth are known as Plymothians or less formally as Janners. Its meaning is described as a person from Devon, deriving from Cousin Jan (the Devon form of John), but more particularly in naval circles anyone from the Plymouth area. The Elizabethan navigator, Sir Francis Drake was born in the nearby town of Tavistock and was the mayor of Plymouth. He was the first Englishman to circumnavigate the world and was known by the Spanish as El Draco meaning "The Dragon" after he raided many of their ships. He died of dysentery in 1596 off the coast of Portobelo, Panama. In 2002 a mission to recover his body and bring it to Plymouth was allowed by the Ministry of Defence. His cousin and contemporary John Hawkins was a Plymouth man. Painter Sir Joshua Reynolds, founder and first president of the Royal Academy was born and educated in nearby Plympton, now part of Plymouth. William Cookworthy born in Kingsbridge set up his successful porcelain business in the city and was a close friend of John Smeaton designer of the Eddystone Lighthouse. On 26 January 1786, Benjamin Robert Haydon, an English painter who specialised in grand historical pictures, was born here. The naturalist Dr William Elford Leach FRS, who did much to pave the way in Britain for Charles Darwin, was born at Hoe Gate in 1791. Antarctic explorers Robert Falcon Scott who was born in Plymouth and Frank Bickerton both lived in the city. Artists include Beryl Cook whose paintings depict the culture of Plymouth and Robert Lenkiewicz, whose paintings investigated themes of vagrancy, sexual behaviour and suicide, lived in the city from the 1960s until his death in 2002. Illustrator and creator of children's series Mr Benn and King Rollo, David McKee, was born and brought up in South Devon and trained at Plymouth College of Art. Jazz musician John Surman, born in nearby Tavistock, has close connections to the area, evidenced by his 2012 album Saltash Bells. 
The avant-garde prepared guitarist Keith Rowe was born in the city before establishing the jazz free improvisation band AMM in London in 1965 and MIMEO in 1997. The musician and film director Cosmo Jarvis has lived in several towns in South Devon and has filmed videos in and around Plymouth. In addition, actors Sir Donald Sinden and Judi Trott were born in Plymouth. George Passmore of Turner Prize winning duo Gilbert and George was also born in the city, as was Labour politician Michael Foot, whose family reside at nearby Trematon Castle. Notable athletes include swimmer Sharron Davies, diver Tom Daley, dancer Wayne Sleep, and footballer Trevor Francis. Other past residents include journalist and newspaper editor William Henry Wills, composer Ron Goodwin, journalist Angela Rippon and comedian Dawn French. Canadian politician and legal scholar Chris Axworthy hails from Plymouth. America-based actor Donald Moffat, whose roles include American Vice President Lyndon B. Johnson in the film The Right Stuff, and fictional President Bennett in Clear and Present Danger, was born in Plymouth. Canadian actor Mark Holden was also born in Plymouth. Kevin Owen is an international TV news anchor who was born in Freedom Fields Hospital while his father served as a Royal Navy officer. Cambridge spy Guy Burgess was born at 2 Albemarle Villas, Stoke, whilst his father was a serving Royal Navy officer. Twin cities: Brest, France; Gdynia, Poland; Novorossiysk, Russia; Plymouth, United States; San Sebastián, Spain. Freedom of the City The following people and military units have received the Freedom of the City of Plymouth. Individuals Thomas Robert Daley: 13 September 2021. Mark Ormrod: 22 November 2021. Military Units 42 Commando, RM: 1955. The Merchant Navy: 22 March 2009. The Rifles: 25 September 2010. The Royal Naval Reserve.
which calls for revivification of the city centre with mixed-use and residential. In suburban areas, post-War prefabs had already begun to appear by 1946, and over 1,000 permanent council houses were built each year from 1951 to 1957 according to the Modernist zoned low-density garden city model advocated by Abercrombie. By 1964 over 20,000 new homes had been built, more than 13,500 of them permanent council homes and 853 built by the Admiralty. Plymouth is home to 28 parks with an average size of . Its largest park is Central Park, with other sizeable green spaces including Victoria Park, Freedom Fields Park, Alexandra Park, Devonport Park and the Hoe. Central Park is the home of Plymouth Argyle Football Club and a number of other leisure facilities. The Plymouth Plan 2019–2034 was published May 2019 and sets the direction for future development with a new spatial strategy which reinforces links with the wider region in west Devon and east Cornwall in its Joint Local Plan and identifies three development areas within the city: the City centre and waterfront; a 'northern corridor' including Derriford and the vacant airfield site at Roborough; and an 'eastern corridor' including major new settlements at Sherford and Langage. Climate Plymouth has a moderated temperate oceanic climate (Köppen Cfb) which is wetter and milder than the rest of England. This means a wide range of exotic plants, palm trees, and yuccas can be cultivated. The annual mean high temperature is approximately . Due to the moderating effect of the sea and the south-westerly location, the climate is among the mildest of British cities, and one of the warmest UK cities in winter. The coldest month of February is similarly moderate, having mild mean minimum temperatures between . Snow usually falls in small amounts but a noteworthy recent exception was the period of the European winter storms of 2009-10 which, in early January 2010, covered Plymouth in at least of snow; more on higher ground. Another notable event was the of snowfall between 17 and 19 December 2010 – though only would lie at any one time due to melting. Over the 1961–1990 period, annual snowfall accumulation averaged less than per year. South West England has a favoured location when the Azores High pressure area extends north-eastwards towards the UK, particularly in summer. Coastal areas have average annual sunshine totals over 1,600 hours. Owing to its geographic location, rainfall tends to be associated with Atlantic depressions or with convection and is more frequent and heavier than in London and southeast England. The Atlantic depressions are more vigorous in autumn and winter and most of the rain which falls in those seasons in the south-west is from this source. Average annual rainfall is around . November to March have the highest mean wind speeds, with June to August having the lightest winds. The predominant wind direction is from the south-west. Typically, the warmest day of the year (1971–2000) will achieve a temperature of , although in June 1976 the temperature reached , the site record. On average, 4.25 days of the year will report a maximum temperature of or above. During the winter half of the year, the coldest night will typically fall to although in January 1979 the temperature fell to . Typically, 18.6 nights of the year will register an air frost. Education The University of Plymouth enrolls 23,155 total students as of 2018/2019 ( largest in the UK out of ). It also employs 2,900 staff with an annual income of around £160 million. 
It was founded in 1992 from Polytechnic South West (formerly Plymouth Polytechnic) following the Further and Higher Education Act 1992. It has a wide range of courses including those in marine focused business, marine engineering, marine biology and Earth, ocean and environmental sciences, surf science, shipping and logistics. The university formed a joint venture with the fellow Devonian University of Exeter in 2000, establishing the Peninsula College of Medicine and Dentistry. The college is ranked 8th out of 30 universities in the UK in 2011 for medicine. Its dental school was established in 2006, which also provides free dental care in an attempt to improve access to dental care in the South West. The University of St Mark & St John (known as "Marjon" or "Marjons") specialises in teacher training, and offers training across the country and abroad. The city is also home to two large colleges. The City College Plymouth provides courses from the most basic to Foundation degrees for approximately 26,000 students. Plymouth College of Art offers a selection of courses including media. It was started 153 years ago and is now one of only four independent colleges of art and design in the UK. Plymouth also has 71 state primary phase schools, 13 state secondary schools, eight special schools and three selective state grammar schools, Devonport High School for Girls, Devonport High School for Boys and Plymouth High School for Girls. There is also an independent school Plymouth College. The city was also home to the Royal Naval Engineering College; opened in 1880 in Keyham, it trained engineering students for five years before they completed the remaining two years of the course at Greenwich. The college closed in 1910, but in 1940 a new college opened at Manadon. This was renamed Dockyard Technical College in 1959 before finally closing in 1994; training was transferred to the University of Southampton. Plymouth is home to the Marine Biological Association of the United Kingdom (MBA; founded 1884) which conducts research in all areas of the marine sciences. The Plymouth Marine Laboratory (PML; founded 1988) was formed in part from components of the MBA. Together with the National Marine Aquarium, the Sir Alister Hardy Foundation for Ocean Sciences, Plymouth University's Marine Institute and the Diving Diseases Research Centre, these marine-related organisations form the Plymouth Marine Sciences Partnership.
zero. Magic numbers are generated randomly at each end of the connection. Multilink - Provides load balancing across several interfaces used by PPP through Multilink PPP (see below). PPP frame Structure PPP frames are variants of HDLC frames: If both peers agree to Address field and Control field compression during LCP, then those fields are omitted. Likewise if both peers agree to Protocol field compression, then the 0x00 byte can be omitted. The Protocol field indicates the type of payload packet: 0xC021 for LCP, 0x80xy for various NCPs, 0x0021 for IP, 0x0029 for AppleTalk, 0x002B for IPX, 0x003D for Multilink, 0x003F for NetBIOS, 0x00FD for MPPC and MPPE, etc. PPP is limited, and cannot contain general Layer 3 data, unlike EtherType. The Information field contains the PPP payload; it has a variable length with a negotiated maximum called the Maximum Transmission Unit. By default, the maximum is 1500 octets. It might be padded on transmission; if the information for a particular protocol can be padded, that protocol must allow information to be distinguished from padding. Encapsulation PPP frames are encapsulated in a lower-layer protocol that provides framing and may provide other functions such as a checksum to detect transmission errors. PPP on serial links is usually encapsulated in a framing similar to HDLC, described by IETF RFC 1662. The Flag field is present when PPP with HDLC-like framing is used. The Address and Control fields always have the value hex FF (for "all stations") and hex 03 (for "unnumbered information"), and can be omitted whenever PPP LCP Address-and-Control-Field-Compression (ACFC) is negotiated. The frame check sequence (FCS) field is used for determining whether an individual frame has an error. It contains a checksum computed over the frame to provide basic protection against errors in transmission. This is a CRC code similar to the one used for other layer two protocol error protection schemes such as the one used in Ethernet. According to RFC 1662, it can be either 16 bits (2 bytes) or 32 bits (4 bytes) in size (the default is 16 bits, with polynomial x^16 + x^12 + x^5 + 1). The FCS is calculated over the Address, Control, Protocol, Information and Padding fields after the message has been encapsulated. Line activation and phases Link Dead This phase occurs when the link fails, or one side has been told to disconnect (e.g. a user has finished his or her dialup connection). Link Establishment Phase This phase is where Link Control Protocol negotiation is attempted. If successful, control goes either to the authentication phase or the Network-Layer Protocol phase, depending on whether authentication is desired. Authentication Phase This phase is optional. It allows the sides to authenticate each other before a connection is established. If successful, control goes to the network-layer protocol phase. Network-Layer Protocol Phase This phase is where each desired protocol's Network Control Protocol is invoked. For example, IPCP is used in establishing IP service over the line. Data transport for all protocols which are successfully started with their network control protocols also occurs in this phase. Closing down of network protocols also occurs in this phase. Link Termination Phase This phase closes down this connection. This can happen if there is an authentication failure, if there are so many checksum errors that the two parties decide to tear down the link automatically, if the link suddenly fails, or if the user decides to hang up a connection.
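To make the framing and FCS description above concrete, the following Python fragment is a minimal sketch of building one HDLC-like PPP frame in the style of RFC 1662: Address 0xFF, Control 0x03, a two-octet Protocol field, the payload, a 16-bit FCS sent least-significant octet first, and octet stuffing between 0x7E flags. The helper names and the sample LCP payload are invented for illustration; a real implementation would also honour the negotiated ACCM and the compression options described above.

# Minimal sketch of RFC 1662-style HDLC framing for PPP on an async serial link.
# Function names and the sample payload are illustrative, not from any library.

FLAG, ESC = 0x7E, 0x7D

def fcs16(data: bytes) -> int:
    """16-bit PPP FCS (polynomial x^16 + x^12 + x^5 + 1, bit-reflected, per RFC 1662)."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF  # ones' complement is what goes on the wire

def frame(protocol: int, payload: bytes) -> bytes:
    """Build one frame: Address FF, Control 03, Protocol, payload, FCS, with octet stuffing."""
    body = bytes([0xFF, 0x03]) + protocol.to_bytes(2, "big") + payload
    check = fcs16(body)
    body += bytes([check & 0xFF, (check >> 8) & 0xFF])  # FCS sent least-significant octet first
    stuffed = bytearray([FLAG])
    for byte in body:
        # escape the flag, the escape octet, and (default async control map) octets below 0x20
        if byte in (FLAG, ESC) or byte < 0x20:
            stuffed += bytes([ESC, byte ^ 0x20])
        else:
            stuffed.append(byte)
    stuffed.append(FLAG)
    return bytes(stuffed)

# Example: an LCP payload (Protocol 0xC021) wrapped for transmission.
print(frame(0xC021, b"\x01\x01\x00\x04").hex())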
Over several links Multilink PPP Multilink PPP (also referred to as MLPPP, MP, MPPP, MLP, or Multilink) provides a method for spreading traffic across multiple distinct PPP connections. It is defined in RFC 1990. It can be used, for example, to connect a home computer to an Internet Service Provider using two traditional 56k modems, or to connect a company through two leased lines. On a single PPP line frames cannot arrive out of order, but this is possible when the frames are divided among multiple PPP connections. Therefore, Multilink PPP must number the fragments so they can be put in the right order again when they arrive. Multilink PPP is an example of a link aggregation technology. Cisco IOS Release 11.1 and later supports Multilink PPP. Multiclass PPP With PPP, one cannot establish several simultaneous distinct PPP connections over a single link. That is not possible with Multilink PPP either. Multilink PPP uses contiguous numbers for all the fragments of a packet, and as a consequence it is not possible to suspend the sending of a sequence of fragments of one packet in order to send another packet. This prevents Multilink PPP from being run multiple times over the same links. Multiclass PPP is a kind of Multilink PPP where each "class" of traffic uses a separate sequence number space and reassembly buffer. Multiclass PPP is defined in RFC 2686. Tunnels Derived protocols PPTP (Point-to-Point Tunneling Protocol) is a form of PPP between two hosts via GRE using encryption (MPPE) and compression (MPPC). As a layer 2 protocol between both ends of a tunnel Many protocols can be used to tunnel data over IP networks. Some of them, like SSL, SSH, or L2TP create virtual network interfaces and give the impression of direct physical connections between the tunnel endpoints. On a Linux host for example, these interfaces would be called tun0 or ppp0. As there are only two endpoints on a tunnel, the tunnel is a point-to-point connection and PPP is a natural choice as a data link layer protocol between the virtual network interfaces. PPP can assign IP addresses to these virtual interfaces, and these IP addresses can be used, for example, to route between the networks on both sides of the tunnel. IPsec in tunneling mode does not create virtual interfaces at the end of the tunnel, since the tunnel is handled directly by the TCP/IP stack. L2TP can be used to provide these interfaces; this technique is called L2TP/IPsec. In this case too, PPP provides IP addresses to the extremities of the tunnel. IETF standards PPP is defined in RFC 1661 (The Point-to-Point Protocol, July 1994). RFC 1547 (Requirements for an Internet Standard Point-to-Point Protocol, December 1993) provides historical information about the need for PPP and its development. A series of related RFCs have been written to define how
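Multilink PPP's need to number and reorder fragments, mentioned above, can be sketched in a few lines. The layout below (a beginning flag, an ending flag and a sequence number that is contiguous across all fragments of the bundle) follows the idea in RFC 1990, but the Fragment class and reassemble function are simplified illustrations under those assumptions, not a conforming implementation; real MP fragments use 12- or 24-bit sequence numbers and must also detect lost fragments.

# Simplified illustration of Multilink PPP fragment reordering (after RFC 1990).
# The dataclass and reassembler are illustrative only.
from dataclasses import dataclass

@dataclass
class Fragment:
    seq: int      # sequence number, contiguous across all fragments of the bundle
    begin: bool   # (B)eginning-of-packet flag
    end: bool     # (E)nding-of-packet flag
    data: bytes

def reassemble(fragments):
    """Reorder fragments received over several member links and rebuild the packets."""
    packets, current = [], bytearray()
    for frag in sorted(fragments, key=lambda f: f.seq):  # restore transmit order
        if frag.begin:
            current = bytearray()
        current += frag.data
        if frag.end:
            packets.append(bytes(current))
    return packets

# Fragments of two packets arriving out of order over two member links:
rx = [Fragment(1, False, True, b"world"),
      Fragment(0, True, False, b"hello "),
      Fragment(2, True, True, b"second packet")]
print(reassemble(rx))  # [b'hello world', b'second packet']

Because the sequence space is shared by every fragment in the bundle, a sender cannot interleave fragments of two packets, which is exactly the limitation that Multiclass PPP removes by giving each class its own sequence number space and reassembly buffer.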
suitable (old enough) for the expedition." Heironimus stated that Chico (a middle-aged gelding) "wouldn't jump or buck ..." Immediate aftermath At approximately 6:30p.m., Patterson and Gimlin met up with Al Hodgson at his variety store in Willow Creek, approximately south by road, about by Bluff Creek Road from their camp to the 1967 roadhead by Bluff Creek, and down California State Route 96 to Willow Creek. Patterson intended to drive on to Eureka to ship his film. Either at that time, or when he arrived in the Eureka/Arcata area, he called Al DeAtley (his brother-in-law in Yakima) and told him to expect the film he was shipping. He requested Hodgson to call Donald Abbott, whom Grover Krantz described as "the only scientist of any stature to have demonstrated any serious interest in the [Bigfoot] subject," hoping he would help them search for the creature by bringing a tracking dog. Hodgson called, but Abbott declined. Krantz argued that this call the same day of the encounter is evidence against a hoax, at least on Patterson's part. After shipping the film, they headed back toward their camp, where they had left their horses. On their way they "stopped at the Lower Trinity Ranger Station, as planned, arriving about 9:00p.m. Here they met with Syl McCoy [another friend] and Al Hodgson." At this point Patterson called the daily Times-Standard newspaper in Eureka and related his story. They arrived back at their campsite at about midnight. At either 5 or 5:30 the next morning, after it started to rain heavily, Gimlin returned to the filmsite from the camp and covered the other prints with bark to protect them. The cardboard boxes he had been given by Al Hodgson for this purpose and had left outside were so soggy they were useless, so he left them. When he returned to the camp he and Patterson aborted their plan to remain looking for more evidence and departed for home, fearing the rain would wash out their exit. After attempting to go out along "the low road"—Bluff Creek Road—and finding it blocked by a mudslide, they went instead up the steep Onion Mountain Road, off whose shoulder their truck slipped; extracting it required the (unauthorized) borrowing of a nearby front-end loader. The drive home from their campsite covered about , the initial on a low-speed logging road, and then about on twisty Route 96. Driving a truck with three horses, and allowing for occasional stops, it would have taken 13 hours to get home Saturday evening, at an average speed of ; it would have taken 14.5 hours at a average speed. US Forest Service "Timber Management Assistant" Lyle Laverty said, "I [and his team of three, in a Jeep] passed the site on either Thursday the 19th or Friday the 20th" and noticed no tracks. After reading the news of Patterson's encounter on their weekend break, Laverty and his team returned to the site on Monday, the 23rd, and made six photos of the tracks. (Laverty later served as an Assistant Secretary of the Interior under George W. Bush.) Taxidermist and outdoorsman Robert Titmus went to the site with his sister and brother-in-law nine days later. Titmus made plaster casts of ten successive prints of the creature and, as best he could, plotted Patterson's and the creature's movements on a map. Long-term aftermath Film-related Grover Krantz writes that "Patterson had the film developed as soon as possible. At first he thought he had brought in proof of Bigfoot's existence and really expected the scientists to accept it. 
But only a few scientists were willing to even look at the film," usually at showings at scientific organizations. These were usually arranged at the behest of zoologist, author, and media figure Ivan Sanderson, a supporter of Patterson's film. Seven showings occurred, in Vancouver, Manhattan, The Bronx, Washington, D.C., Atlanta, and Washington, D.C. again (all by the end of 1968); then, later, in Beaverton, Oregon. Of those who were quoted, most expressed various reservations, although some were willing to say they were intrigued by it. Christopher Murphy wrote, "Dahinden traveled to Europe [with the film] in 1971. He visited England, Finland, Sweden, Switzerland and Russia. Although scientists in these countries were somewhat more open-minded than those in North America, their findings were basically the same . ... A real glimmer of hope, however, emerged [in Russia, where he met Bayanov, Bourtsev, and their associates]." Though there was little scientific interest in the film, Patterson was still able to capitalize on it. He made a deal with the BBC, allowing the use of his footage in a docudrama made in return for letting him tour with their docudrama, into which he melded material from his own documentary and additional material he and Al DeAtley filmed. This film was shown in local movie houses around the Pacific Northwest and Midwest. A technique commonly used for nature films called "four-walling" was employed, involving heavy local advertising, mostly on TV, of a few days of showings. It was a modest financial success. Al DeAtley estimated that his 50% of the film's profits amounted to $75,000. The film generated a fair amount of national publicity. Patterson appeared on a few popular TV talk shows to promote the film and belief in Bigfoot by showing excerpts from it: for instance, on the Joe Pyne Show in Los Angeles, in 1967, which covered most of the western US; on Merv Griffin's program, with Krantz offering his analysis of the film; on Joey Bishop's talk show, and also on Johnny Carson's Tonight Show. Articles on the film appeared in Argosy, National Wildlife Magazine, and Reader's Digest. One radio interview, with Gimlin, by Vancouver-based Jack Webster in November 1967, was partly recorded by John Green and reprinted in Loren Coleman's Bigfoot! Patterson also appeared on broadcast interviews on local stations near where his film would be shown during his four-walling tour in 1968. Patterson subsequently sold overlapping distribution rights for the film to several parties, which resulted in costly legal entanglements. After Patterson's death, Michael McLeod wrote, "With the consent of Al DeAtley and Patricia Patterson, the film distributor Ron Olson took over the operation of Northwest Research ... and changed its name to the North American Wildlife Research Association. ... He worked full-time compiling reports, soliciting volunteers to join the hunt, and organizing several small expeditions. A Bigfoot trap Olson and his crew built still survives. ... Olson ... continued to lobby the company [American National Enterprises] to produce a Bigfoot film. ... In 1974 ... ANE finally agreed. ... [It was released in 1975,] titled Bigfoot: Man or Beast. [H]e devised a storyline involving members of a Bigfoot research party. Olson spent several years exhibiting the film around the country. He planned to make millions with the film, but says it lost money." Olson is profiled in Barbara Wasson's Sasquatch Apparitions. On November 25, 1974, CBS broadcast Monsters! 
Mystery or Myth, a documentary about the Loch Ness Monster and Bigfoot. (It was co-produced by the Smithsonian Institution, which cancelled its contract with the producer the next year.) The show attracted fifty million viewers. In 1975, Sunn Classic Pictures released "Bigfoot: The Mysterious Monster", aka "The Mysterious Monsters", which remixed parts of "Monsters! Mystery or Myth" and another documentary called "Land Of The Yeti", and also included footage from the Patterson–Gimlin film. Filmmaker-related Patterson's expensive ($369) 16 mm camera had been rented on May 13 from photographer Harold Mattson at Sheppard's Camera Shop in Yakima, but he had kept it longer than the contract had stipulated, and an arrest warrant had been issued for him on October 17; he was arrested within weeks of his return from Bluff Creek. After Patterson returned the camera in working order, this charge was dismissed in 1969. While Patterson sought publicity, Gimlin was conspicuous by his absence. He only briefly helped to promote the film and avoided discussing his Bigfoot encounter publicly for many subsequent years; he turned down requests for interviews. He later reported that he had avoided publicity after Patterson and promoter Al DeAtley had broken their agreement to pay him a one-third share of any profits generated by the film. Another factor was that his wife objected to publicity. Daegling wrote, "Bigfoot advocates emphasize that Patterson remained an active Bigfoot hunter up until his death." For instance, in 1969, he hired a pair of brothers to travel around in a truck chasing down leads to Bigfoot witnesses and interviewing them. Later, in December of that year, he was one of those present in Bossburg, Washington, in the aftermath of the cripplefoot tracks found there. Krantz reports that "[a] few years after the film was made, Patterson received a letter from a man ["a US airman stationed in Thailand"] who assured him a Sasquatch was being held in a Buddhist monastery. Patterson spent most of his remaining money preparing an expedition to retrieve this creature" only to learn it was a hoax. He learned this only after having sent Dennis Jenson fruitlessly to Thailand (where he concluded that the airman was "mentally unbalanced") and then, after receiving a second untrue letter from the man, going himself to Thailand with Jenson. To obtain money to travel to Thailand, "Patterson called Ron, who had returned to ANE, and sold the company the theatrical rights to the clip for what Olson described as a pretty good sum of money." Patterson died of Hodgkin's lymphoma in 1972. According to Michael McLeod, Greg Long, and Bill Munns, "A few days before Roger died, he told [Bigfoot-book author Peter] Byrne that in retrospect, ... he [wished he] would have shot the thing and brought out a body instead of a reel of film." According to Grover Krantz and Robert Pyle, years later, Patterson and Gimlin both agreed they should have tried to shoot the creature, both for financial gain and to silence naysayers. In 1995, almost three decades after the Patterson–Gimlin filming, Greg Long, a technical writer for a technology firm who had a hobby of investigating and writing about Northwest mysteries, started years of interviewing people who knew Patterson, some of whom described him as a liar and a conman. "Marvin" (pseudonym), Jerry Lee Merritt, Pat Mason, Glen Koelling, and Bob Swanson suffered financially from their dealings with him, as did 21 small local creditors who sued Patterson via a collection agency.
Vilma Radford claimed Patterson never repaid a loan made to him for a Bigfoot movie he was planning. Radford had corroborative evidence: a $700 promissory note "for expenses in connection with filming of 'Bigfoot: America's Abominable Snowman.'" Patterson had agreed to repay her $850, plus 5 percent of any profits from the movie. In 1974, Bob Gimlin, with René Dahinden's financial assistance, sued DeAtley and Patterson's widow, Patricia, claiming he had not received his one-third share of the film's proceeds. He won his case in 1976. Legal status Greg Long reports that a 1978 legal "settlement gave Dahinden controlling rights—51 percent of the film footage, 51 percent of video cassette rights, and 100 percent of all 952 frames of the footage. Patty Patterson had 100 percent of all TV rights and 49 percent rights in the film footage. Dahinden had ... bought out Gimlin, who himself had received nothing from Patterson; and Mason and Radford, promised part of the profits by Patterson, had nothing to show for their investment or efforts." The film will enter the public domain on January 1, 2063, when all works published in 1967 enter the public domain in the United States. Ownership of the physical films First reel The whereabouts of the original are unknown, although there are several speculations as to what happened to it. Patterson had ceded ownership of the original to American National Enterprises, which went bankrupt a few years after his death in 1972. Thereafter, Greg Long writes, "Peregrine Entertainment bought the company. Then Peregrine was bought by Century Group of Los Angeles. When Century Group went bankrupt in 1996, Byrne rushed to Deerfield Beach, Florida, where an accountant was auctioning off the company's assets to pay creditors. The company's films were in storage in Los Angeles, but a search failed to turn up the Patterson footage." In 2008, Chris Murphy thought a Florida lawyer might have the film, not realizing until later that the lawyer had contacted the Los Angeles storage company that held it, and that it had responded that the film was not in the location the lawyer's records indicated. Bill Munns writes that it was "last seen by researchers René Dahinden and Bruce Bonney in 1980, when René convinced the film vault [in Southern California] holding it to release it to him". He made Cibachrome images from it. Sometime between then and 1996, the film went missing from its numbered location in the vault. At least seven copies were made of the original film. Bill Munns listed four other missing reels of derivative works that would be helpful to film analysts. Second reel The second reel, showing Patterson and Gimlin making and displaying plaster casts of some footprints, was not shown in conjunction with the first reel at Al DeAtley's house, according to those who were there. Chris Murphy wrote, "I believe the screening of this roll at the University of British Columbia on October 26, 1967, was the first and last major screening." It has subsequently been lost. John Green suspects that Al DeAtley has it. A ten-foot strip from that reel, or from a copy of that reel, from which still images were taken by Chris Murphy, survived separately, but it, too, has gone missing. Filming speed One factor that complicates discussion of the Patterson film is that Patterson said he normally filmed at 24 frames per second, but in his haste to capture the Bigfoot on film, he did not note the camera's setting.
His Cine-Kodak K-100 camera had markings on its continuously variable dial at 16, 24, 32, 48, and 64 frames per second, but no click-stops, and was capable of filming at any frame speed within this range. Grover Krantz wrote, "Patterson clearly told John Green that he found, after the filming, that the camera was set on 18 frames per second (fps). ... " It has been suggested that Patterson simply misread "16" as "18". "Dr. D.W. Grieve, an anatomist with expertise in human biomechanics ... evaluated the various possibilities" regarding film speed and did not come to a conclusion between them. He "confessed to being perplexed and unsettled" by "the tangible possibility that it [the film subject] was real". John Napier, a primatologist, claimed that "if the movie was filmed at 24 frame/s then the creature's walk cannot be distinguished from a normal human walk. If it was filmed at 16 or 18 frame/s, there are a number of important respects in which it is quite unlike man's gait." Napier, who published before Dahinden and Krantz, contended it was "likely that Patterson would have used 24 frame/s" because it "is best suited to TV transmission," while conceding that "this is entirely speculative." Krantz argued, on the basis of an analysis by Igor Bourtsev, that since Patterson's height is known (), a reasonable calculation can be made of his pace. This running pace can be synchronized with the regular bounces in the initial jumpy portions of the film that were caused by each fast step Patterson took to approach the creature. On the basis of this analysis, Krantz argued that a speed of 24 frames per second can be quickly dismissed and that "[we] may safely rule out 16 frames per second and accept the speed of 18." René Dahinden stated that "the footage of the horses prior to the Bigfoot film looks jerky and unnatural when projected at 24 frame/s." And Dahinden experimented at the film site by having people walk rapidly over the creature's path and reported: "None of us ... could walk that distance in 40 seconds [952 frames / 24 frame/s = 39.6 s], ... so I eliminated 24 frame/s." Bill Munns wrote, "One researcher, Bill Miller, found technical data from a Kodak technician that stated the K-100 cameras were tweaked so even when the dial is set to 16 fps, the camera actually runs at 18 fps. ... I have nine K-100 cameras now. ... I tried it on one camera, and got 18 fps, but the rest still need testing [and all with "film running through the camera"]." Analysis The Patterson–Gimlin film has seen relatively little interest from mainstream scientists. Statements of scientists who viewed the film at a screening, or who conducted a study, are reprinted in Chris Murphy's Bigfoot Film Journal. Typical objections include: Neither humans nor chimpanzees have hairy breasts as does the figure in the film, and Napier has noted that a sagittal crest is "only very occasionally seen, to an insignificant extent, in chimpanzees females". Critics have argued these features are evidence against authenticity. Krantz countered the latter point, saying "a sagittal crest ... is a consequence of absolute size alone." As anthropologist David Daegling writes, "[t]he skeptics have not felt compelled to offer much of a detailed argument against the film; the burden of proof, rightly enough, should lie with the advocates." Yet, without a detailed argument against authenticity, Daegling notes that "the film has not gone away." 
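The arithmetic behind these frame-rate arguments is simple enough to set out explicitly. The sketch below is only an illustration of the calculation quoted above (the 952 surviving frames divided by each candidate camera speed); the frame count and candidate speeds come from the text, while the script itself and its function name are not part of any published analysis.

```python
# Minimal sketch of the frame-rate arithmetic discussed above.
# 952 is the frame count cited by Dahinden; 16, 18 and 24 fps are the
# candidate camera speeds. Nothing else here comes from the original analyses.

FRAME_COUNT = 952

def clip_duration_seconds(frames: int, fps: float) -> float:
    """Running time implied by a frame count at a given camera speed."""
    return frames / fps

for fps in (16, 18, 24):
    print(f"{fps:>2} fps -> {clip_duration_seconds(FRAME_COUNT, fps):5.1f} s")

# Expected output:
# 16 fps ->  59.5 s
# 18 fps ->  52.9 s
# 24 fps ->  39.7 s   (Dahinden rounded this figure to 39.6 s)
```

On this arithmetic alone, the disputed dial setting changes the implied running time of the clip by roughly twenty seconds, which is why the frame-rate question bears directly on the gait arguments discussed in the analyses that follow.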
Similarly, Krantz argues that of the many opinions offered about the Patterson film, "[o]nly a few of these opinions are based on technical expertise and careful study of the film itself." Regarding the quality of the film, second-generation copies or copies from TV and DVD productions are inferior to first-generation copies. Many early frames are blurry due to camera shake, and the quality of subsequent frames varies for the same reason. Stabilization of the film (e.g., by M. K. Davis) to counter the effect of camera shake has improved viewers' ability to analyze it. Regarding "graininess," Bill Munns writes, "Based on transparencies taken off the camera original, ... the PGF original is as fine grain as any color 16mm film can achieve." He adds that graininess increases as images are magnified. Scientific studies Bernard Heuvelmans Bernard Heuvelmans—a zoologist and the so-called "father of cryptozoology"—thought the creature in the Patterson film was a suited human. He objected to the film subject's hair-flow pattern as being too uniform; to the hair on the breasts as not being like a primate's; to its buttocks as being insufficiently separated; and to its too-calm retreat from the pursuing men. John Napier Prominent primate expert John Napier (one-time director of the Smithsonian's Primate Biology Program) was one of the few mainstream scientists not only to critique the Patterson–Gimlin film but also to study then-available Bigfoot evidence in a generally sympathetic manner, in his 1973 book, Bigfoot: The Sasquatch and Yeti in Myth and Reality. Napier accepted the likelihood of Bigfoot as a real creature, stating, "I am convinced that Sasquatch exists." But he argued against the film being genuine: "There is little doubt that the scientific evidence taken collectively points to a hoax of some kind. The creature shown in the film does not stand up well to functional analysis." Napier gives several commonly raised reasons for his and others' skepticism, but his main reasons are apparently original with him. First, the length of the footprints is, he writes, "totally at variance with its calculated height". Second, the footprints are of the "hourglass" type, of which he is suspicious. (In response, Barbara Wasson criticized Napier's logic at length.) He adds, "I could not see the zipper; and I still can't. There I think we must leave the matter. Perhaps it was a man dressed up in a monkey-skin; if so it was a brilliantly executed hoax and the unknown perpetrator will take his place with the great hoaxers of the world. Perhaps it was the first film of a new type of hominid, quite unknown to science, in which case Roger Patterson deserves to rank with Dubois, the discoverer of Pithecanthropus erectus, or Raymond Dart of Johannesburg, the man who introduced the world to its immediate human ancestor, Australopithecus africanus." The skeptical views of Grieve and Napier are summarized favorably by Kenneth Wylie (and those of Bayanov and Donskoy negatively) in Appendix A of his 1980 book, Bigfoot: A Personal Inquiry into a Phenomenon. Esteban Sarmiento Esteban Sarmiento is a specialist in physical anthropology at the American Museum of Natural History. He has 25 years of experience with great apes in the wild. He writes, "I did find some inconsistencies in appearance and behavior that might suggest a fake ... but nothing that conclusively shows that this is the case." His most original criticism is this: "The plantar surface of the feet is decidedly pale, but the palm of the hand seems to be dark.
There is no mammal I know of in which the plantar sole differs so drastically in color from the palm." His most controversial statements are these: "The gluteals, although large, fail to show a humanlike cleft (or crack)." "Body proportions: ... In all of the above relative values, bigfoot is well within the human range and differs markedly from any living ape and from the 'australopithecine' fossils." (E.g., the IM index is in the normal human range.) And: "I estimate bigfoot's weight to be between 190 and 240 lbs." David J. Daegling and Daniel O. Schmitt When anthropologists David J. Daegling of the University of Florida and Daniel O. Schmitt examined the film, they concluded it was impossible to conclusively determine if the subject in the film is nonhuman, and additionally argued that flaws in the studies by Krantz and others invalidated their claims. Daegling and Schmitt noted problems of uncertainties in the subject and camera positions, camera movement, poor image quality, and artifacts of the subject. They concluded: "Based on our analysis of gait and problems inherent in estimating subject dimensions, it is our opinion that it is not possible to evaluate the identity of the film subject with any confidence." Daegling has asserted that the creature's odd walk could be replicated: "Supposed peculiarities of subject speed, stride length, and posture are all reproducible by a human being employing this type of locomotion [a "compliant gait"]." Daegling notes that in 1967, movie and television special effects were primitive compared to the more sophisticated effects in later decades, and allows that if the Patterson film depicts a man in a suit, "it is not unreasonable to suggest that it is better than some of the tackier monster outfits that got thrown together for television at that time." Jessica Rose and James Gamble Jessica Rose and James Gamble are authors of "the definitive text on human gait", Human Walking. They operate the Motion and Gait Analysis Lab at Stanford University. They conducted a high-tech human-replication attempt of "Patty's" gait, in cooperation with Jeff Meldrum. Rose was certain their subject had matched Patty's gait, while Gamble was not quite as sure. Meldrum was impressed and acknowledged that "some aspects" of the creature's walk had been replicated, but not all. The attempt was shown in an episode of the Discovery Channel's Best Evidence series, whose narrator said, "even the experts can see the gait test could not replicate all parameters of the gait." Cliff Crook and Chris Murphy A computerized visual analysis of the film conducted by Cliff Crook, who once devoted rooms to sasquatch memorabilia in his home in Bothell, Washington, and Chris Murphy, a Canadian Bigfoot buff from Vancouver, British Columbia, was released in January 1999. Zooming in on four magnified frames of the 16 mm footage, they said, revealed tracings of what appeared to be a bell-shaped fastener at the creature's waist, presumably used to hold a person's suit together. Since both Crook and Murphy were previously staunch supporters of the film's authenticity, Associated Press journalist John W. Humbell noted "Longtime enthusiasts smell a deserter." Other analysts Krantz also showed the film to Gordon Valient, a researcher for Nike shoes, who he says "made some rather useful observations about some rather unhuman movements he could see".
MonsterQuest A first-season episode of MonsterQuest focuses on the Bigfoot phenomenon. One pair of scientists, Jurgen Konczak (Director, Human Sensorimotor Control Laboratory, University of Minnesota) and Esteban Sarmiento, attempts and fails to get a mime outfitted with LEDs on his joints to mimic the Patterson Bigfoot's gait. A second pair, Daris Swindler and Owen Caddy, employs digital enhancement and observes facial movements, such as moving eyelids, lips that compress like an upset chimp's, and a mouth that is lower than it appears, due to a false-lip anomaly like that of a chimp's. (Unfortunately, the show's narrator falsely claims,
budget and usually does not work on set Line producer, manager during daily operations Impresario, a producer or manager in the theatre and music industries Radio producer, oversees the making of a radio show Record producer, manages sound recording Television producer, oversees all aspects of video production on a television program News producer, compiles all items of a news programme into a cohesive show Theatrical producer, oversees the staging of theatre productions Video game producer, in charge of overseeing development of a video game Online producer, oversees the making of content for websites Primary producer, an organism which produces energy to then be carried through the rest of the chain/web Other uses The Producers, 1967 film, book and a musical by Mel Brooks and Thomas Meehan Autotroph, an organism that synthesizes energy-rich organic compounds "The
one of the best known and the most translated post-war Polish writers I Maria Ilnicka (1825 or 1827–1897) Wacław Iwaniuk (1912–2001) Jarosław Iwaszkiewicz (1894–1980) J Klemens Janicki (1516–1543) Bruno Jasieński (1901–1938) Mieczysław Jastrun (1903–1983) K Anna Kamieńska (1920–1986) Franciszek Karpiński (1741–1825) Jan Kasprowicz (1860–1936) Andrzej Tadeusz Kijowski (born 1954) Franciszek Dionizy Kniaźnin (1750–1807) Jan Kochanowski (1530–1584), considered the "father of Polish poetry" and the greatest Slavic poet prior to the 19th century Halina Konopacka (1900–1989) Maria Konopnicka (1842–1910) Stanisław Korab-Brzozowski (1876–1901) Julian Kornhauser (born 1946) Apollo Korzeniowski (1820–1869), father of Polish-British novelist Joseph Conrad Urszula Kozioł (born 1931) Ignacy Krasicki (1735–1801) Zygmunt Krasiński (1812–1859), one of the Three Bards of Polish literature Katarzyna Krenz (born 1953) Józef Krupiński (1930–1998) Ryszard Krynicki (born 1943) Andrzej Krzycki (1482–1537) Paweł Kubisz (1907–1968) Jalu Kurek (1904–1983) Mira Kuś (born 1958) L Antoni Lange (1863–1929) Stanisław Jerzy Lec (1909–1966) Joanna Lech (born 1984) Jan Lechoń (1899–1956) Krystyna Lenkowska (born 1957) Bolesław Leśmian (1877–1937) Jerzy Liebert (1904–1931) Ewa Lipska (born 1945) Stanisław Herakliusz Lubomirski (1641–1702) Ł Henryka Łazowertówna (1909–1942) Józef Łobodowski (1909–1988) M Antoni Malczewski (1793–1826) Marcin Malek (born 1975) Jakobe Mansztajn (born 1982) Tadeusz Miciński (1873–1918) Adam Mickiewicz (1798–1855), considered Poland's national poet and a leading figure of European romanticism Grażyna Miller (1957–2009) Czesław Miłosz (1911–2004), Nobel Prize in Literature Stanisław Młodożeniec (1895–1959) Jan Andrzej Morsztyn (1621–1693) Zbigniew Morsztyn (1628–1689) N Daniel Naborowski (1573–1640) Adam Naruszewicz (1733–1796) Julian Ursyn Niemcewicz (1758–1841) Cyprian Kamil Norwid (1821–1883) Franciszek Nowicki (1864–1935) O Antoni Edward Odyniec (1804–1885) Artur Oppman (1867–1931) Władysław Orkan (1875–1930) Agnieszka Osiecka (1936–1997) P Leon Pasternak (1910–1969) Maria Pawlikowska-Jasnorzewska (1891–1945) Jacek Podsiadło (born 1964) Wincenty Pol (1807–1872) Halina Poświatowska (1935–1967) Wacław Potocki (1621–1696) Kazimierz Przerwa-Tetmajer a.k.a. Kazimierz Tetmajer (1865–1940) Zenon Przesmycki (1861–1944) Jeremi Przybora (1915–2004) R Mikołaj Rej (1505–1569) Sydor Rey (1908–1979) Barbara Rosiek (born 1959) Tadeusz Różewicz (1921–2014), Nike Award winner Tomasz Różycki (born 1970) Zygmunt Rumel (1915–1943) Lucjan Rydel (1870–1918) Jarosław Marek Rymkiewicz (born 1935), Nike Award winner S Maciej Kazimierz Sarbiewski (1595–1640) Władysław Sebyła (1902–1940) Mikołaj Sęp Szarzyński (1550–1581) Jan Stanisław Skorupski (born 1938) Antoni Słonimski (1895–1976) Juliusz Słowacki (1809–1849), regarded as one of the Three Bards of Polish literature Edward Stachura (1937–1979) Leopold Staff (1878–1957) Anna Stanisławska (1651–1701) Andrzej Stasiuk (born 1960) Anatol Stern (1899–1968) Marcin Świetlicki (born 1961) Anna Świrszczyńska (1909–1984) Władysław Syrokomla (1823–1862) Janusz Szpotański (1929–2001) Włodzimierz Szymanowicz (1946–1967) Wisława Szymborska (1923–2012), Nobel Prize in Literature Szymon Szymonowic (1558–1629) T Eugeniusz Tkaczyszyn-Dycki (born 1962), Nike Award winner Julian Tuwim (1894–1953) Jan Twardowski (1915–2006) U Kornel Ujejski (1823–1897) W Aleksander Wat (1900–1967) Adam Ważyk (1905–1982) Kazimierz Wierzyński (1894–1969) Stanisław Ignacy Witkiewicz a.k.a. "Witkacy" (1885–1939) Stefan Witwicki (1801–1847) Rafał Wojaczek (1945–1971) Grażyna Wojcieszko (born 1957) Maryla Wolska (1873–1930) Józef Wybicki (1747–1822), author of the National Anthem of Poland Stanisław Wyspiański (1869–1907) Z Tymon Zaborowski (1799–1828) Adam Zagajewski (1945–2021) Józef Bohdan Zaleski (1802–1886) Wacław Michał Zaleski (1799–1849) Kazimiera Zawistowska (1870–1902) Piotr Zbylitowski (1569–1649) Emil Zegadłowicz (1888–1941) Katarzyna Ewa Zdanowicz-Cyganiak (born 1979) Narcyza Żmichowska (1819–1876), a precursor of feminism in
Morisot's daughter, Julie Manet, married the painter . Valéry and Gobillard had three children: Claude, Agathe and François. Valéry served as a juror with Florence Meyer Blumenthal in awarding the Prix Blumenthal, a grant given between 1919 and 1954 to young French painters, sculptors, decorators, engravers, writers, and musicians. Though his earliest publications date from his mid-twenties, Valéry did not become a full-time writer until 1920, when the man for whom he worked as private secretary, a former chief executive of the Havas news agency, Edouard Lebey, died of Parkinson's disease. Until then, Valéry had, briefly, earned his living at the Ministry of War before assuming the relatively flexible post as assistant to the increasingly impaired Lebey, a job he held for some twenty years. After his election to the Académie française in 1925, Valéry became a tireless public speaker and intellectual figure in French society, touring Europe and giving lectures on cultural and social issues as well as assuming a number of official positions eagerly offered to him by an admiring French nation. He represented France on cultural matters at the League of Nations, and he served on several of its committees, including the sub-committee on Arts and Letters of the Committee on Intellectual Cooperation. The English-language collection The Outlook for Intelligence (1989) contains translations of a dozen essays related to these activities. In 1931, he founded the Collège International de Cannes, a private institution teaching French language and civilization. The Collège is still operating today, offering professional courses for native speakers (for educational certification, law and business) as well as courses for foreign students. He gave the keynote address at the 1932 German national celebration of the 100th anniversary of the death of Johann Wolfgang Goethe. This was a fitting choice, as Valéry shared Goethe's fascination with science (specifically, biology and optics). In addition to his activities as a member of the Académie française, he was also a member of the Academy of Sciences of Lisbon, and of the Front national des Ecrivains. In 1937, he was appointed chief executive of what later became the University of Nice. He was the inaugural holder of the Chair of Poetics at the Collège de France. During World War II, the Vichy regime stripped him of some of these jobs and distinctions because of his quiet refusal to collaborate with Vichy and the German occupation, but Valéry continued, throughout these troubled years, to publish and to be active in French cultural life, especially as a member of the Académie française. Valéry died in Paris in 1945. He is buried in the cemetery of his native town, Sète, the same cemetery celebrated in his famous poem Le Cimetière marin. Work The great silence Valéry is best known as a poet, and he is sometimes considered to be the last of the French symbolists. However, he published fewer than a hundred poems, and none of them drew much attention. On the night of 4 October 1892, during a heavy storm, Paul Valéry underwent an existential crisis, an event that made a huge impact on his writing career. Eventually, around 1898, he quit writing altogether, publishing not a word for nearly twenty years. This hiatus was in part due to the death of his mentor, Stéphane Mallarmé. When, in 1917, he finally broke his 'great silence' with the publication of La Jeune Parque, he was forty-six years of age. 
La Jeune Parque This obscure, but sublimely musical, masterpiece, of 512 alexandrine lines in rhyming couplets, had taken him four years to complete, and it immediately secured his fame. With "Le Cimetière marin" and "L'Ébauche d'un serpent," it is often considered one of the greatest French poems of the twentieth century. The title was chosen late in the poem's gestation; it refers to the youngest of the three Parcae (the minor Roman deities also called The Fates), though for some readers the connection with that mythological figure is tenuous and problematic. The poem is written in the first person, and is the soliloquy of a young woman contemplating life and death, engagement and withdrawal, love and estrangement, in a setting dominated by the sea, the sky, stars, rocky cliffs, and the rising sun. However, it is also possible to read the poem as an allegory on the way fate moves human affairs or as an attempt to comprehend the horrific violence in Europe at the time of the poem's composition. The poem is not about World War I, but it does try to address the relationships between destruction and beauty, and, in this sense, it resonates with ancient Greek meditations on these matters, especially in the plays of Sophocles and Aeschylus. There are, therefore, evident links with le Cimetière marin, which is also a seaside meditation on comparably large themes. Other works Before la Jeune Parque, Valéry's only publications of note were dialogues, articles, some poems, and a study of Leonardo da Vinci. In 1920 and 1922, he published two slim collections of verses. The first, Album des vers anciens (Album of old verses), was a revision of early but beautifully wrought smaller poems, some of which had been published individually before 1900. The second, Charmes (from the Latin carmina, meaning "songs" and also "incantations"), further confirmed his reputation as a major French poet. The collection includes le Cimetière marin, and many smaller poems with diverse structures. Technique Valéry's technique is quite orthodox in its essentials. His verse rhymes and scans in conventional ways, and it has much in common with the work of Mallarmé. His poem, Palme, inspired James Merrill's celebrated 1974 poem Lost in Translation, and his cerebral lyricism also influenced the American poet, Edgar Bowers. Prose works Valéry described his “true oeuvre” to be prose, and he filled more than 28,000 notebook pages over his lifetime. His far more ample prose writings, peppered with many aphorisms and bons mots, reveal a skeptical outlook on human nature, verging on the cynical. His view of state power was broadly liberal insofar as he believed that state power and infringements
on the individual should be severely limited. Although he had flirted with nationalist ideas during the 1890s, he moved away from them by 1899, and believed that European culture owed its greatness to the ethnic diversity and universalism of the Roman Empire. He denounced the myth of "racial purity" and argued that such purity, if it existed, would only lead to stagnation—thus the mixing of races was necessary for progress and cultural development. In "America as a Projection of the European Mind", Valéry remarked that whenever he despaired about Europe's situation, he could "restore some degree of hope only by thinking of the New World" and mused on the "happy variations" which could result from European "aesthetic ideas filtering into the powerful character of native Mexican art." Raymond Poincaré, Louis de Broglie, André Gide, Henri Bergson, and Albert Einstein all respected Valéry's thinking and became friendly correspondents. Valéry was often asked to write articles on topics not of his choosing; the resulting intellectual journalism was collected in five volumes titled Variétés. The notebooks Valéry's most striking achievement is perhaps his monumental intellectual diary, called the Cahiers (Notebooks). Early every morning of his adult life, he contributed something to the Cahiers, prompting him to write: "Having dedicated those hours to the life of the mind, I thereby earn the right to be stupid for the rest of the day." The subjects of his Cahiers entries often were, surprisingly, reflections on science and mathematics. In fact, arcane topics in these domains appear to have commanded far more of his considered attention than his celebrated poetry.
The Cahiers also contain the first drafts of many aphorisms he later included in his books. To date, the Cahiers have been published in their entirety only as photostatic reproductions, and only since 1980 have they begun to receive scholarly scrutiny. The Cahiers have been translated into English in five volumes published by Peter Lang with the title Cahiers/Notebooks. In recent decades Valéry's thought has been considered a touchstone in the field of constructivist epistemology, as noted, for instance, by Jean-Louis Le Moigne in his description of constructivist history. In other literature One of three epigraphs in Cormac McCarthy's novel Blood Meridian is from Valéry's Writing at the Yalu River (1895): "Your ideas are terrifying and your hearts are faint. Your acts of pity and cruelty are absurd, committed with no calm, as if they were irresistible. Finally, you fear blood more and more. Blood and time". In the book "El laberinto de la soledad" from Octavio Paz there are three verses of one of Valéry's poems: Je pense, sur le bord doré de l’univers A ce gout de périr qui prend la Pythonisse En qui mugit l’espoir que le monde finisse. In popular culture Oscar-winning Japanese director Hayao Miyazaki's 2013 film The Wind Rises and the Japanese novel of the same name (on which the film was partially based) take their title from Valéry's verse "Le vent se lève... il faut tenter de vivre !" ("The wind rises… We must try to
the organ. Pianists past and present Modern classical pianists dedicate their careers to performing, recording, teaching, researching, and learning new works to expand their repertoire. They generally do not write or transcribe music as pianists did in the 19th century. Some classical pianists might specialize in accompaniment and chamber music, while others (though comparatively few) will perform as full-time soloists. Classical Mozart could be considered the first "concert pianist" as he performed widely on the piano. Composers Beethoven and Clementi from the classical era were also famed for their playing, as were, from the romantic era, Liszt, Brahms, Chopin, Mendelssohn, Rachmaninoff, and Schumann. It was during the Classical period that the piano began to establish its place in the hearts and homes of everyday people. From that era, leading performers less well known as composers included Clara Schumann and Hans von Bülow. However, as we do not have modern audio recordings of most of these pianists, we rely mainly on written commentary to give us an account of their technique and style. Jazz Jazz pianists almost always perform with other musicians. Their playing is freer than that of classical pianists and they create an air of spontaneity in their performances. They generally do not write down their compositions; improvisation is a significant part of their work. Well-known jazz pianists include Bill Evans, Art Tatum, Duke Ellington, Thelonious Monk, Oscar Peterson and Bud Powell. Pop and rock Popular pianists might work as live performers (concert, theatre, etc.), session musicians, or arrangers, and most likely feel at home with synthesizers and other electronic keyboard instruments. Notable popular pianists include Victor Borge, who performed as a comedian; Richard Clayderman, who is known for his covers of popular tunes; and singer and entertainer Liberace, who, at the height of his fame, was one of the highest-paid entertainers in the world. Well-known pianists
A single listing of pianists in all genres would be impractical, given the multitude of musicians noted for their performances on the instrument. Below are links to lists of well-known or influential pianists divided by genres:
Classical pianists: List of classical pianists (recorded); List of classical pianists; List of classical piano duos (performers)
Jazz pianists: List of jazz pianists
Pop and rock music pianists: List of pop and rock pianists
Blues pianists: List of blues musicians; List of boogie woogie musicians
Gospel pianists: List of gospel musicians
New-age pianists: List of new-age music artists
Pianist-composers Many important composers were also virtuoso pianists. The following is an incomplete list of such musicians.
Classical period: Muzio Clementi, Wolfgang Amadeus Mozart, Ludwig van Beethoven, Johann Nepomuk Hummel, Carl Maria von Weber, Franz Schubert
Romantic period: Felix Mendelssohn, Frédéric Chopin, Robert Schumann, Franz Liszt, Charles-Valentin Alkan, Anton Rubinstein, Johannes Brahms, Camille Saint-Saëns, Edvard Grieg, Isaac Albéniz, Anton Arensky, Alexander Scriabin, Sergei Rachmaninoff, Nikolai Medtner
Modern period: Claude Debussy, Ferruccio Busoni, Maurice Ravel, Béla Bartók, Sergei Prokofiev, George Gershwin, Dmitri Shostakovich, Alberto Ginastera
Amateur pianism Some people, having received a solid piano training in their youth, decide not to continue their musical careers but choose nonmusical ones. As a result, there are prominent communities of amateur pianists
An apple a day keeps the doctor away
If the shoe fits, wear it!
On the Internet, nobody knows you're a dog
Slow and steady wins the race
Don't count your chickens before they hatch
Practice makes perfect
Don't put all your eggs in one basket
Your mileage may vary
All that glitters is not gold
You can't have your cake and eat it
With great power comes great responsibility
The enemy of my enemy is my friend
Sources Proverbs come from a variety of sources. Some are, indeed, the result of people pondering and crafting language, such as some by Confucius, Plato, Baltasar Gracián, etc. Others are taken from such diverse sources as poetry, stories, songs, commercials, advertisements, movies, literature, etc. A number of the well-known sayings of Jesus, Shakespeare, and others have become proverbs, though they were original at the time of their creation, and many of these sayings were not seen as proverbs when they were first coined. Many proverbs are also based on stories, often the end of a story. For example, the proverb "Who will bell the cat?" is from the end of a story about the mice planning how to be safe from the cat. Some authors have created proverbs in their writings, such as J.R.R. Tolkien, and some of these proverbs have made their way into broader society. Similarly, C.S. Lewis' created proverb about a lobster in a pot, from the Chronicles of Narnia, has also gained currency. In cases like this, deliberately created proverbs for fictional societies have become proverbs in real societies. In a fictional story set in a real society, the movie Forrest Gump introduced "Life is like a box of chocolates" into broad society. In at least one case, it appears that a proverb deliberately created by one writer has been naively picked up and used by another who assumed it to be an established Chinese proverb, Ford Madox Ford having picked up a proverb from Ernest Bramah, "It would be hypocrisy to seek for the person of the Sacred Emperor in a Low Tea House." The proverb with "a longer history than any other recorded proverb in the world", going back to "around 1800 BC", is in a Sumerian clay tablet, "The bitch by her acting too hastily brought forth the blind". Though many proverbs are ancient, they were all newly created at some point by somebody. Sometimes it is easy to detect that a proverb is newly coined by a reference to something recent, such as the Haitian proverb "The fish that is being microwaved doesn't fear the lightning". Similarly, there is a recent Maltese proverb, wil-muturi, ferh u duluri "Women and motorcycles are joys and griefs"; the proverb is clearly new, but still formed as a traditional style couplet with rhyme. Also, there is a proverb in the Kafa language of Ethiopia that refers to the forced military conscription of the 1980s, "...the one who hid himself lived to have children." A Mongolian proverb also shows evidence of recent origin, "A beggar who sits on gold; Foam rubber piled on edge." Another example of a proverb that is clearly recent is this from Sesotho: "A mistake goes with the printer." A political candidate in Kenya popularised a new proverb in his 1995 campaign, Chuth ber "Immediacy is best". "The proverb has since been used in other contexts to prompt quick action." Over 1,400 new English proverbs are said to have been coined and gained currency in the 20th century. This process of creating proverbs is always ongoing, so that possible new proverbs are being created constantly.
Those sayings that are adopted and used by an adequate number of people become proverbs in that society. Interpretations Interpreting proverbs is often complex, but is best done in a context. Interpreting proverbs from other cultures is much more difficult than interpreting proverbs in one's own culture. Even within English-speaking cultures, there is difference of opinion on how to interpret the proverb "A rolling stone gathers no moss." Some see it as condemning a person that keeps moving, seeing moss as a positive thing, such as profit; others see the proverb as praising people that keep moving and developing, seeing moss as a negative thing, such as negative habits. Similarly, among Tajik speakers, the proverb "One hand cannot clap" has two significantly different interpretations. Most see the proverb as promoting teamwork. Others understand it to mean that an argument requires two people. In an extreme example, one researcher working in Ghana found that for a single Akan proverb, twelve different interpretations were given. Proverb interpretation is not automatic, even for people within a culture: Owomoyela tells of a Yoruba radio program that asked people to interpret an unfamiliar Yoruba proverb, "very few people could do so". Siran found that people who had moved out of the traditional Vute-speaking area of Cameroon were not able to interpret Vute proverbs correctly, even though they still spoke Vute. Their interpretations tended to be literal. Children will sometimes interpret proverbs in a literal sense, not yet knowing how to understand the conventionalized metaphor. Interpretation of proverbs is also affected by injuries and diseases of the brain, "A hallmark of schizophrenia is impaired proverb interpretation." Features Grammatical structures Proverbs in various languages are found with a wide variety of grammatical structures. In English, for example, we find the following structures (in addition to others): Imperative, negative - Don't beat a dead horse. Imperative, positive - If the shoe fits, wear it! Parallel phrases - Garbage in, garbage out. Rhetorical question - Is the Pope Catholic? Declarative sentence - Birds of a feather flock together. However, people will often quote only a fraction of a proverb to invoke an entire proverb, e.g. "All is fair" instead of "All is fair in love and war", and "A rolling stone" for "A rolling stone gathers no moss." The grammar of proverbs is not always the typical grammar of the spoken language, often elements are moved around, to achieve rhyme or focus. Another type of grammatical construction is the wellerism, a speaker and a quotation, often with an unusual circumstance, such as the following, a representative of a wellerism proverb found in many languages: "The bride couldn't dance; she said, 'The room floor isn't flat.'" Another type of grammatical structure in proverbs is a short dialogue: Shor/Khkas (SW Siberia): "They asked the camel, 'Why is your neck crooked?' The camel laughed roaringly, 'What of me is straight?'" Armenian: "They asked the wine, 'Have you built or destroyed more?' It said, 'I do not know of building; of destroying I know a lot.'" Bakgatla (a.k.a. Tswana): "The thukhui jackal said, 'I can run fast.' But the sands said, 'We are wide.'" (Botswana) Bamana: "'Speech, what made you good?' 'The way I am,' said Speech. 'What made you bad?' 'The way I am,' said Speech." (Mali) Conservative language Because many proverbs are both poetic and traditional, they are often passed down in fixed forms. 
Though spoken language may change, many proverbs are often preserved in conservative, even archaic, form. In English, for example, "betwixt" is not used by many, but a form of it is still heard (or read) in the proverb "There is many a slip 'twixt the cup and the lip." The conservative form preserves the meter and the rhyme. This conservative nature of proverbs can result in archaic words and grammatical structures being preserved in individual proverbs, as has been documented in Amharic, Greek, Nsenga, Polish, Venda and Hebrew. In addition, proverbs may still be used in languages which were once more widely known in a society, but are now no longer so widely known. For example, English speakers use some non-English proverbs that are drawn from languages that used to be widely understood by the educated class, e.g. "C'est la vie" from French and "Carpe diem" from Latin. Proverbs are often handed down through generations. Therefore, "many proverbs refer to old measurements, obscure professions, outdated weapons, unknown plants, animals, names, and various other traditional matters." Therefore, it is common that they preserve words that become less common and archaic in broader society. Proverbs in solid form—such as murals, carvings, and glass—can be viewed even after the language of their form is no longer widely understood, such as an Anglo-French proverb in a stained glass window in York. Borrowing and spread Proverbs are often and easily translated and transferred from one language into another. "There is nothing so uncertain as the derivation of proverbs, the same proverb being often found in all nations, and it is impossible to assign its paternity." Proverbs are often borrowed across lines of language, religion, and even time. For example, a proverb of the approximate form "No flies enter a mouth that is shut" is currently found in Spain, France, Ethiopia, and many countries in between. It is embraced as a true local proverb in many places and should not be excluded in any collection of proverbs because it is shared by the neighbors. However, though it has gone through multiple languages and millennia, the proverb can be traced back to an ancient Babylonian proverb (Pritchard 1958:146). Another example of a widely spread proverb is "A drowning person clutches at [frogs] foam", found in Peshai of Afghanistan and Orma of Kenya, and presumably places in between. Proverbs about one hand clapping are common across Asia, from Dari in Afghanistan to Japan. Some studies have been done devoted to the spread of proverbs in certain regions, such as India and her neighbors and Europe. An extreme example of the borrowing and spread of proverbs was the work done to create a corpus of proverbs for Esperanto, where all the proverbs were translated from other languages. It is often not possible to trace the direction of borrowing a proverb between languages. This is complicated by the fact that the borrowing may have been through plural languages. In some cases, it is possible to make a strong case for discerning the direction of the borrowing based on an artistic form of the proverb in one language, but a prosaic form in another language. For example, in Ethiopia there is a proverb "Of mothers and water, there is none evil." It is found in Amharic, Alaaba language, and Oromo, three languages of Ethiopia: Oromo: Hadhaa fi bishaan, hamaa hin qaban. Amharic: Käənnatənna wəha, kəfu yälläm. 
Some studies have been done devoted to the spread of proverbs in certain regions, such as India and her neighbors and Europe. An extreme example of the borrowing and spread of proverbs was the work done to create a corpus of proverbs for Esperanto, where all the proverbs were translated from other languages. It is often not possible to trace the direction of borrowing a proverb between languages. This is complicated by the fact that the borrowing may have been through plural languages. In some cases, it is possible to make a strong case for discerning the direction of the borrowing based on an artistic form of the proverb in one language, but a prosaic form in another language. For example, in Ethiopia there is a proverb "Of mothers and water, there is none evil." It is found in Amharic, Alaaba, and Oromo, three languages of Ethiopia: Oromo: Hadhaa fi bishaan, hamaa hin qaban. Amharic: Käənnatənna wəha, kəfu yälläm. Alaaba: Wiihaa ʔamaataa hiilu yoosebaʔa. The Oromo version uses poetic features, such as the initial ha in both clauses with the final -aa in the same word, and both clauses ending with -an. Also, both clauses are built with the vowel a in the first and last words, but the vowel i in the one-syllable central word. In contrast, the Amharic and Alaaba versions of the proverb show little evidence of sound-based art. However, not all languages have proverbs. Proverbs are (nearly) universal across Europe, Asia, and Africa. Some languages in the Pacific have them, such as Maori. Other Pacific languages do not, e.g. "there are no proverbs in Kilivila" of the Trobriand Islands. However, in the New World, there are almost no proverbs: "While proverbs abound in the thousands in most cultures of the world, it remains a riddle why the Native Americans have hardly any proverb tradition at all." Hakamies has examined the question of whether proverbs are a universal genre, concluding that they are not. Use In conversation Proverbs are used in conversation by adults more than children, partially because adults have learned more proverbs than children. Also, using proverbs well is a skill that is developed over years. Additionally, children have not mastered the patterns of metaphorical expression that are invoked in proverb use. Proverbs, because they are indirect, allow a speaker to disagree or give advice in a way that may be less offensive. Studying actual proverb use in conversation, however, is difficult since the researcher must wait for proverbs to happen. An Ethiopian researcher, Tadesse Jaleta Jirata, made headway in such research by attending and taking notes at events where he knew proverbs were expected to be part of the conversations. In literature Many authors have used proverbs in their writings, for a very wide variety of literary genres: epics, novels, poems, short stories. Probably the most famous user of proverbs in novels is J. R. R. Tolkien in his The Hobbit and The Lord of the Rings series. Herman
Melville is noted for creating proverbs in Moby Dick and in his poetry. Also, C. S. Lewis created a dozen proverbs in The Horse and His Boy, and Mercedes Lackey created dozens for her invented Shin'a'in and Tale'edras cultures; Lackey's proverbs are notable in that they are reminiscent of those of Ancient Asia - e.g. "Just because you feel certain an enemy is lurking behind every bush, it doesn't follow that you are wrong" is similar to "Before telling secrets on the road, look in the bushes."
These authors are notable for not only using proverbs as integral to the development of the characters and the story line, but also for creating proverbs. Among medieval literary texts, Geoffrey Chaucer's Troilus and Criseyde plays a special role because Chaucer's usage seems to challenge the truth value of proverbs by exposing their epistemological unreliability. Rabelais used proverbs to write an entire chapter of Gargantua. The patterns of using proverbs in literature can change over time. A study of "classical Chinese novels" found proverb use as frequently as one proverb every 3,500 words in Water Margin (Sui-hu chuan) and one proverb every 4,000 words in Wen Jou-hsiang. But modern Chinese novels have fewer proverbs by far. Proverbs (or portions of them) have been the inspiration for titles of books: The Bigger they Come by Erle Stanley Gardner, and Birds of a Feather (several books with this title), Devil in the Details (multiple books with this title). Sometimes a title alludes to a proverb, but does not actually quote much of it, such as The Gift Horse's Mouth by Robert Campbell. Some books or stories have titles that are twisted proverbs, anti-proverbs, such as No use dying over spilled milk, When life gives you lululemons, and two books titled Blessed are the Cheesemakers. The twisted proverb of the last title was also used in the Monty Python movie Life of Brian, where a person mishears one of Jesus Christ's beatitudes, "I think it was 'Blessed are the cheesemakers.'" Some books and stories are built around a proverb. Some of Tolkien's books have been analyzed as having "governing proverbs" where "the action of a book turns on or fulfills a proverbial saying." Some stories have been written with a proverb overtly as an opening, such as "A stitch in time saves nine" at the beginning of "Kitty's Class Day", one of Louisa May Alcott's Proverb Stories. Other times, a proverb appears at the end of a story, summing up a moral to the story, frequently found in Aesop's Fables, such as "Heaven helps those who help themselves" from Hercules and the Wagoner. In a novel by the Ivorian novelist Ahmadou Kourouma, "proverbs are used to conclude each chapter". Proverbs have also been used strategically by poets. Sometimes proverbs (or portions of them or anti-proverbs) are used for titles, such as "A bird in the bush" by Lord Kennet and his stepson Peter Scott and "The blind leading the blind" by Lisel Mueller. Sometimes, multiple proverbs are important parts of poems, such as Paul Muldoon's "Symposium", which begins "You can lead a horse to water but you can't make it hold its nose to the grindstone and hunt with the hounds. Every dog has a stitch in time..." In Finnish there are proverb poems written hundreds of years ago. The Turkish poet Refiki wrote an entire poem by stringing proverbs together, which has been translated into English poetically yielding such verses as "Be watchful and be wary, / But seldom grant a boon; / The man who calls the piper / Will also call the tune." Eliza Griswold also created a poem by stringing proverbs together, Libyan proverbs translated into English. Because proverbs are familiar and often pointed, they have been used by a number of hip-hop poets. This has been true not only in the USA, birthplace of hip-hop, but also in Nigeria. Since Nigeria is so multilingual, hip-hop poets there use proverbs from various languages, mixing them in as it fits their need, sometimes translating the original.
For example, "They forget say ogbon ju agbaralo They forget that wisdom is greater than power" Some authors have bent and twisted proverbs, creating anti-proverbs, for a variety of literary effects. For example, in the Harry Potter novels, J. K. Rowling reshapes a standard English proverb into "It's no good crying over spilt potion" and Dumbledore advises Harry not to "count your owls before they are delivered". In a slightly different use of reshaping proverbs, in the Aubrey–Maturin series of historical naval novels by Patrick O'Brian, Capt. Jack Aubrey humorously mangles and mis-splices proverbs, such as "Never count the bear's skin before it is hatched" and "There's a good deal to be said for making hay while the iron is hot." Earlier than O'Brian's Aubrey, Beatrice Grimshaw also used repeated splicings of proverbs in the mouth of an eccentric marquis to create a memorable character in The Sorcerer's Stone, such as "The proof of the pudding sweeps clean" (p. 109) and "A stitch in time is as good as a mile" (p. 97). Because proverbs are so much a part of the language and culture, authors have sometimes used proverbs in historical fiction effectively, but anachronistically, before the proverb was actually known. For example, the novel Ramage and the Rebels, by Dudley Pope is set in approximately 1800. Captain Ramage reminds his adversary "You are supposed to know that it is dangerous to change horses in midstream" (p. 259), with another allusion to the same proverb three pages later. However, the proverb about changing horses in midstream is reliably dated to 1864, so the proverb could not have been known or used by a character from that period. Some authors have used so many proverbs that there have been entire books written cataloging their proverb usage, such as Charles Dickens, Agatha Christie, George Bernard Shaw, Miguel de Cervantes, But using proverbs to illustrate a cultural value is not the same as using a collection of proverbs to discern cultural values. In a comparative study between Spanish and Jordanian proverbs it is defined the social imagination for the mother as an archetype in the context of role transformation and in contrast with the roles of husband, son and brother, in two societies which might be occasionally associated with sexist and /or rural ideologies. Some scholars have adopted a cautious approach, acknowledging at least a genuine, though limited, link between cultural values and proverbs: "The cultural portrait painted by proverbs may be fragmented, contradictory, or otherwise at variance with reality... but must be regarded not as accurate renderings but rather as tantalizing shadows of the culture which spawned them." There is not yet agreement on the issue of whether, and how much, cultural values are reflected in a culture's proverbs. It is clear that the Soviet Union believed that proverbs had a direct link to the values of a culture, as they used them to try to create changes in
not experience any disadvantage such as the loss of contributions and benefits associated with these contributions when moving from one job to another, from one occupation to another, or from the public to the private sector or vice versa. International portability of social security rights allows international migrants, who have contributed to a social security scheme for some time in a particular country, to maintain acquired benefits or benefits in the process of being acquired when moving to another country.
International portability of social security benefits is therefore understood as the migrant's ability to preserve, maintain, and transfer acquired social security rights independent of nationality and country of residence. It is achieved through bilateral or multilateral social security agreements between countries. These agreements guarantee the totalization of periods of contribution to the social security systems of both countries and the extraterritorial payment of benefits. Currently it is estimated that approximately 23 per cent
North America. They are grouped together because of technical characteristics of their internal anatomy, and the different species may appear quite different externally.
Order Percopsiformes Berg 1937
 Genus †Lateopisciculus Murray & Wilson 1996
 Genus †Percopsiformorum [Otolith]
 Suborder Percopsoidei Berg 1937
  Family †Libotoniidae Wilson & Williams 1992
   Genus †Libotonius Wilson 1977
  Family Percopsidae Regan 1911 [Percopsides Agassiz 1850; Erismatopteridae Jordan 1905]
   Genus †Massamorichthys Murray 1996
   Genus †Amphiplaga Cope 1877
   Genus †Erismatopterus Cope 1870
   Genus Percopsis Agassiz 1849 [Columbia Eigenmann & Eigenmann 1892 non Rang 1834; Columatilla Whitley 1940; Salmoperca Thompson 1850]
 Suborder Aphredoderoidei Berg 1937 [Amblyopsoidei Regan 1911; Aphredoderoidea; Amblyopsoidea]
  Family Aphredoderidae Bonaparte 1832 (Pirate perches)
   Genus †Trichophanes Cope 1872
   Genus Aphredoderus Lesueur 1833 ex Cuvier & Valenciennes 1833 [Sternotremia Nelson 1876; Asternotremia Nelson ex Jordan 1877; Scolopsis Gilliams 1824 non Cuvier 1814]
  Family Amblyopsidae Bonaparte 1832 [Hypsaeidae
a means for very-high-precision tests of Coulomb's law. A null result of such an experiment has set a limit of . Sharper upper limits on the mass of light have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is very large because the galactic magnetic field exists on very great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term ½m²AμAμ would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of . The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of (the equivalent of ) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model-dependent. If the photon mass is generated via the Higgs mechanism then the upper limit of from the test of Coulomb's law is valid. Historical development In most theories up to the eighteenth century, light was pictured as being made up of particles. Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early 19th century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. James Clerk Maxwell's 1865 prediction that light was an electromagnetic wave – which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves – seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity. At the same time, investigations of black-body radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics.
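To make the threshold behaviour just described concrete, here is a minimal LaTeX rendering of the standard relations involved; the symbols (work function φ, maximum kinetic energy K_max, threshold frequency ν₀) are the conventional textbook ones and are assumed here rather than quoted from this text:

\[
  E = h\nu, \qquad K_{\max} = h\nu - \varphi, \qquad \nu_0 = \frac{\varphi}{h}
\]

Light below the threshold frequency ν₀ ejects no electrons regardless of its intensity, which is exactly the frequency dependence that the Maxwell wave theory could not explain.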
Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law regarding black-body radiation is accepted, the energy quanta must also carry momentum p = h/λ, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question, then, was how to unify Maxwell's wave theory of light with its experimentally observed particle nature. The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model (see below). Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as those revealing Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS theory. An important feature of the BKS theory is how it treated the conservation of energy and the conservation of momentum. In the BKS theory, energy and momentum are only conserved on the average across many interactions between matter and radiation. However, refined Compton experiments showed that the conservation laws hold for individual interactions. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive, since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments.
Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven. Wave–particle duality and uncertainty principles Photons obey the laws of quantum mechanics, and so their behavior has both wave-like and particle-like aspects. When a photon is detected by a measuring instrument, it is registered as a single, particulate unit. However, the probability of detecting a photon is calculated by equations that describe waves. This combination of aspects is known as wave–particle duality. For example, the probability distribution for the location at which a photon might be detected displays clearly wave-like phenomena such as diffraction and interference. A single photon passing through a double-slit experiment has its energy received at a point on the screen with a probability distribution given by its interference pattern determined by Maxwell's wave equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; a photon's Maxwell waves will diffract, but photon energy does not spread out as it propagates, nor does this energy divide when it encounters a beam splitter. Rather, the received photon acts like a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, including systems much smaller than its wavelength, such as an atomic nucleus (≈10⁻¹⁵ m across) or even the point-like electron. While many introductory texts treat photons using the mathematical techniques of non-relativistic quantum mechanics, this is in some ways an awkward oversimplification, as photons are by nature intrinsically relativistic. Because photons have zero rest mass, no wave function defined for a photon can have all the properties familiar from wave functions in non-relativistic quantum mechanics. In order to avoid these difficulties, physicists employ the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another difficulty is finding the proper analogue for the uncertainty principle, an idea frequently attributed to Heisenberg, who introduced the concept in analyzing a thought experiment involving an electron and a high-energy photon. However, Heisenberg did not give precise mathematical definitions of what the "uncertainty" in these measurements meant. The precise mathematical statement of the position–momentum uncertainty principle is due to Kennard, Pauli, and Weyl. The uncertainty principle applies to situations where an experimenter has a choice of measuring either one of two "canonically conjugate" quantities, like the position and the momentum of a particle. According to the uncertainty principle, no matter how the particle is prepared, it is not possible to make a precise prediction for both of the two alternative measurements: if the outcome of the position measurement is made more certain, the outcome of the momentum measurement becomes less so, and vice versa. A coherent state minimizes the overall uncertainty as far as quantum mechanics allows. Quantum optics makes use of coherent states for modes of the electromagnetic field. There is a tradeoff, reminiscent of the position–momentum uncertainty relation, between measurements of an electromagnetic wave's amplitude and its phase. This is sometimes informally expressed in terms of the uncertainty in the number of photons present in the electromagnetic wave, ΔN, and the uncertainty in the phase of the wave, Δφ.
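As a brief sketch in conventional notation (the symbols Δx, Δp, ΔN and Δφ are assumed here, not drawn from this text), the two tradeoffs just mentioned are usually written:

\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta N \,\Delta\varphi \;\gtrsim\; 1
\]

The first is the Kennard form of the position–momentum relation; the second, number–phase form is only heuristic, for the reason given next.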
However, this cannot be an uncertainty relation of the Kennard–Pauli–Weyl type, since unlike position and momentum, the phase cannot be represented by a Hermitian operator. Bose–Einstein model of a photon gas In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein
extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics). Stimulated and spontaneous emission In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the functions of the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is made by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself, filled with electromagnetic radiation that the atoms can emit and absorb.
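In conventional textbook notation (assumed here rather than quoted from this text: Ni, Nj for the level populations, ρ(ν) for the spectral energy density, gi, gj for degeneracies), the detailed-balance condition that the next paragraph derives, together with the resulting relations between the Einstein coefficients, can be sketched in LaTeX as:

\[
  N_j B_{ji}\,\rho(\nu) \;=\; N_i A_{ij} + N_i B_{ij}\,\rho(\nu),
  \qquad
  g_i B_{ij} = g_j B_{ji},
  \qquad
  A_{ij} = \frac{8\pi h\nu^{3}}{c^{3}}\,B_{ij}
\]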
Thermal equilibrium requires that the energy density ρ(ν) of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate Rji for a system to absorb a photon of frequency ν and transition from a lower energy Ej to a higher energy Ei is proportional to the number Nj of atoms with energy Ej and to the energy density ρ(ν) of ambient photons of that frequency, Rji = Nj Bji ρ(ν), where Bji is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate Rij for the emission of photons of frequency ν and transition from a higher energy Ei to a lower energy Ej is Rij = Ni Aij + Ni Bij ρ(ν), where Aij is the rate constant for emitting a photon spontaneously, and Bij is the rate constant for emissions in response to ambient photons (induced or stimulated emission). In thermodynamic equilibrium, the number of atoms in state i and those in state j must, on average, be constant; hence, the rates Rji and Rij must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of Ni and Nj is Ni/Nj = (gi/gj) exp(−(Ei − Ej)/kT), where gi and gj are the degeneracy of the state i and that of j, respectively, Ei and Ej their energies, k the Boltzmann constant and T the system's temperature. From this, it is readily derived that gi Bij = gj Bji and Aij = (8πhν³/c³) Bij. The Aij and Bij are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients Aij, Bij and Bji once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". Not long thereafter, in 1926, Paul Dirac derived the Bij rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments only treat material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics. Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory. Quantum field theory Quantization of the electromagnetic field In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption.
He decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of hν, where ν is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be E = (n + 1/2)hν, where ν is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy E = nhν as a state with n photons, each of energy hν. This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's Aij and Bij coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics). In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy E = pc, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. Such photon–photon scattering
to the Empress in 1882. She died at court in 1903. Studies of Japanese fauna and flora His main interest, however, focused on the study of Japanese fauna and flora. He collected as much material as he could. Starting a small botanical garden behind his home (there was not much room on the small island) Siebold amassed over 1,000 native plants. In a specially built glasshouse he cultivated the Japanese plants to endure the Dutch climate. Local Japanese artists like Kawahara Keiga drew and painted images of these plants, creating botanical illustrations but also images of the daily life in Japan, which complemented his ethnographic collection. He hired Japanese hunters to track rare animals and collect specimens. Many specimens were collected with the help of his Japanese collaborators Keisuke Ito (1803–1901), Mizutani Sugeroku (1779–1833), Ōkochi Zonshin (1796–1882) and Katsuragawa Hoken (1797–1844), a physician to the shōgun. As well, Siebold's assistant and later successor, Heinrich Bürger (1806–1858), proved to be indispensable in carrying on Siebold's work in Japan. Siebold first introduced to Europe such familiar garden-plants as the Hosta and the Hydrangea otaksa. Unknown to the Japanese, he was also able to smuggle out germinative seeds of tea plants to the botanical garden in Batavia. Through this single act, he started the tea culture in Java, a Dutch colony at the time. Until then Japan had strictly guarded the trade in tea plants. Remarkably, in 1833, Java already could boast a half million tea plants. He also introduced Japanese knotweed (Reynoutria japonica, syn. Fallopia japonica), which has become a highly invasive weed in Europe and North America. All derive from a single female plant collected by Siebold. During his stay at Dejima, Siebold sent three shipments with an unknown number of herbarium specimens to Leiden, Ghent, Brussels and Antwerp. The shipment to Leiden contained the first specimens of the Japanese giant salamander (Andrias japonicus) to be sent to Europe. In 1825 the government of the Dutch-Indies provided him with two assistants: apothecary and mineralogist Heinrich Bürger (his later successor) and the painter Carl Hubert de Villeneuve. Each would prove to be useful to Siebold's efforts that ranged from ethnographical to botanical to horticultural, when attempting to document the exotic Eastern Japanese experience. De Villeneuve taught Kawahara the techniques of Western painting. Reportedly, Siebold was not the easiest man to deal with. He was in continuous conflict with his Dutch superiors who felt he was arrogant. This threat of conflict resulted in his recall in July 1827 back to Batavia. But the ship, the Cornelis Houtman, sent to carry him back to Batavia, was thrown ashore by a typhoon in Nagasaki bay. The same storm badly damaged Dejima and destroyed Siebold's botanical garden. Repaired, the Cornelis Houtman was refloated. It left for Batavia with 89 crates of Siebold's salvaged botanical collection, but Siebold himself remained behind in Dejima. Siebold Incident In 1826 Siebold made the court journey to Edo. During this long trip he collected many plants and animals. But he also obtained from the court astronomer Takahashi Kageyasu several detailed maps of Japan and Korea (written by Inō Tadataka), an act strictly forbidden by the Japanese government. When the Japanese discovered, by accident, that Siebold had a map of the northern parts of Japan, the government accused him of high treason and of being a spy for Russia. 
The Japanese placed Siebold under house arrest and expelled him from Japan on 22 October 1829. Satisfied that his Japanese collaborators would continue his work, he journeyed back on the frigate Java to his former residence, Batavia, in possession of his enormous collection of thousands of animals and plants, his books and his maps. The botanical garden of would soon house Siebold's surviving, living flora collection of 2,000 plants. He arrived in the Netherlands on 7 July 1830. His stay in Japan and Batavia had lasted for a period of eight years. Return to Europe Philipp Franz von Siebold arrived in the Netherlands in 1830, just at a time when political troubles erupted in Brussels, leading soon to Belgian independence. Hastily he salvaged his ethnographic collections in Antwerp and his herbarium specimens in Brussels and took them to Leiden, helped by Johann Baptist Fischer. He left behind his botanical collections of living plants that were sent to the University of Ghent. The consequent expansion of this collection of rare and exotic plants led to the horticultural fame of Ghent. In gratitude the University of Ghent presented him in 1841 with specimens of every plant from his original collection. Siebold settled in Leiden, taking with him the major part of his collection. The "Philipp Franz von Siebold collection", containing many type specimens, was the earliest botanical collection from Japan. Even today, it still remains a subject of ongoing research, a testimony to the depth of work undertaken by Siebold. It contained about 12,000 specimens, from which he could describe only about 2,300 species. The whole collection was purchased for a handsome amount by the Dutch government. Siebold was also granted a substantial annual allowance by the Dutch King William II and was appointed Advisor to the King for Japanese Affairs. In 1842, the King even raised Siebold to the nobility as an esquire. The "Siebold collection" opened to the public in 1831. He founded a museum in his home in 1837. This small, private museum would eventually evolve into the National Museum of Ethnology in Leiden. Siebold's successor in Japan, Heinrich Bürger, sent Siebold three more shipments of herbarium specimens collected in Japan. This flora collection formed the basis of the Japanese collections of the National Herbarium of the Netherlands in Leiden, while the zoological specimens Siebold collected were kept by the Rijksmuseum van Natuurlijke Historie (National Museum of Natural History) in Leiden, which later became Naturalis. Both institutions merged into Naturalis Biodiversity Center in 2010, which now maintains the entire natural history collection that Siebold brought back to Leiden. In 1845 Siebold married Helene von Gagern (1820–1877), they had three sons and two daughters. Writings During his stay in Leiden, Siebold wrote Nippon in 1832, the first part of a volume of a richly illustrated ethnographical and geographical work on Japan. The 'Archiv zur Beschreibung Nippons' also contained a report of his journey to the Shogunate Court at Edo. He wrote six further parts, the last ones published posthumously in 1882; his sons published an edited and lower-priced reprint in 1887. The appeared between 1833 and 1841. This work was co-authored by Joseph Hoffmann and Kuo Cheng-Chang, a Javanese of Chinese extraction, who had journeyed along with Siebold from Batavia. It contained a survey of Japanese literature and a Chinese, Japanese and Korean dictionary. 
Siebold's writing on Japanese religion and customs notably shaped early modern European conceptions of Buddhism and Shinto; he suggested that Japanese Buddhism was a form of monotheism. The zoologists Coenraad Temminck (1777–1858), Hermann Schlegel (1804–1884), and Wilhem de Haan (1801–1855) scientifically described and documented Siebold's collection of Japanese animals. The Fauna Japonica, a series of monographs published between 1833 and 1850, was mainly based on Siebold's collection, making the Japanese fauna the best-described non-European fauna – "a remarkable feat". A significant part of the Fauna Japonica was also based on the collections of Siebold's successor on Dejima, Heinrich Bürger. Siebold wrote his Flora Japonica in collaboration with the German botanist Joseph Gerhard Zuccarini (1797–1848). It first appeared in 1835, but the work was not completed until after his death, finished in 1870 by F.A.W. Miquel (1811–1871), director of the Rijksherbarium in Leiden. This work expanded Siebold's scientific fame from Japan to Europe. From the Hortus Botanicus Leiden – the botanical garden of Leiden – many of Siebold's plants spread to Europe and from there to other countries. Hosta and Hortensia, Azalea, and the Japanese butterbur and the coltsfoot as well as the Japanese larch began to inhabit gardens across the world. International endeavours After his return to Europe, Siebold tried to exploit his knowledge of Japan. Whilst living in Boppard, from 1852 he corresponded with Russian diplomats such as Baron von Budberg-Bönninghausen, the Russian ambassador to Prussia, which resulted in an invitation to go to St Petersburg to advise the Russian government on how to open trade relations with Japan. Though still employed by the Dutch government he did not inform the Dutch of this voyage until after his return. American Naval Commodore Matthew C. Perry consulted Siebold in advance of his voyage to Japan in 1854. He notably advised Townsend Harris on how Christianity might be spread to Japan, alleging based on his time there that the Japanese "hated" Christianity. In 1858, the Japanese government lifted the banishment of Siebold. He returned to Japan in 1859 as an adviser to the Agent of the Dutch Trading Society (Nederlandsche Handel-Maatschappij) in Nagasaki, Albert Bauduin. After two years the connection with the Trading Society was severed as the advice of Siebold was considered to be of no value. In Nagasaki he fathered another child with one of his female servants. In 1861 Siebold organised his appointment as an adviser to the Japanese government and went in that function to Edo. There he tried to obtain a position between the foreign representatives and the Japanese government. As he had been specially admonished by the Dutch authorities before going to Japan that he was to abstain from all interference in politics, the Dutch Consul General in Japan, J.K. de Wit, was ordered to ask for Siebold's removal. Siebold was ordered to return to Batavia and from there he returned to Europe. After his return he asked the Dutch government to employ him as Consul General in Japan but the Dutch government severed all relations with Siebold, who had a huge debt because of loans given to him, except for the payment of his pension. Siebold kept trying to organise another voyage to Japan. After he did not succeed in gaining employment with the Russian government, he went to Paris in 1865 to try to interest the French government in funding another expedition to Japan, but failed. He died in Munich on 18 October 1866. 
Legacy Plants named after Siebold The botanical and horticultural spheres of influence have honored Philipp Franz von Siebold by naming some of the very garden-worthy plants that he studied after him. Examples include: Acer sieboldianum or Siebold's Maple: a variety of maple native to Japan Calanthe sieboldii or Siebold's Calanthe
is a terrestrial evergreen orchid native to Japan, the Ryukyu Islands and Taiwan. Clematis florida var. sieboldiana (syn: C. florida 'Sieboldii' & C. 
florida 'Bicolor'): a somewhat difficult Clematis to grow "well" but a much sought after plant nevertheless Corylus sieboldiana: (Asian beaked hazel) is a species of nut found in northeastern Asia and Japan Dryopteris sieboldii: a fern with leathery fronds Hosta sieboldii of which a large garden may have a dozen quite distinct cultivars Magnolia sieboldii: the under-appreciated small "Oyama" magnolia Malus sieboldii: the fragrant Toringo Crab-Apple, (originally called Sorbus toringo by Siebold), whose pink buds fade to white Primula sieboldii: the Japanese woodland primula Sakurasou (Chinese/Japanese: 櫻草) Prunus sieboldii: a flowering cherry Sedum sieboldii: a succulent whose leaves form rose-like whorls Tsuga sieboldii: a Japanese hemlock Viburnum sieboldii: a deciduous large shrub that has creamy white flowers in spring and red berries that ripen to black in autumn Animals named after Siebold Enhydris sieboldii or Siebold's smooth water snake A type of abalone, Nordotis gigantea, is known as Siebold's abalone, and is prized for sushi. Further legacy Though he is well known in Japan, where he is called "Shiboruto-san", and although mentioned in the relevant schoolbooks, Siebold is almost unknown elsewhere, except among gardeners who admire the many plants whose names incorporate sieboldii and sieboldiana. The Hortus Botanicus in Leiden has recently laid out the "Von Siebold Memorial Garden", a Japanese garden with plants sent by Siebold. The garden was laid out under a 150-year-old Zelkova serrata tree dating from Siebold's lifetime. Japanese visitors come and visit this garden, to pay their respect for him. Siebold museums Although he was disillusioned by what he perceived as a lack of appreciation for Japan and his contributions to its understanding, a testimony of the remarkable character of Siebold is found in museums that honor him. Japan Museum SieboldHuis in Leiden, Netherlands, shows highlights from the Leiden Siebold collections in the transformed, refitted, formal, first house of Siebold in Leiden Naturalis Biodiversity Center, the National Museum of Natural History in Leiden, Netherlands houses the zoological and botanical specimens Siebold collected during his first stay in Japan (1823-1829). These include 200 mammals, 900 birds, 750 fishes, 170 reptiles, over 5,000 invertebrates, 2,000 different species of plants and 12,000 herbarium specimens. The National Museum of Ethnology in Leiden, Netherlands houses the large collection which Siebold brought together during his first stay in Japan (1823–1829). The State Museum of Ethnology in Munich, Germany, houses the collection of Philipp Franz von Siebold from his second voyage to Japan (1859–1862) and a letter of Siebold to King Ludwig I in which he urged the monarch to found a museum of ethnology at Munich. Siebold's grave, in the shape of a Buddhist pagoda, is in the (Former Southern Cemetery of Munich). He is also commemorated in the name of a street and a large number of mentions in the Botanical Garden at Munich. A Siebold-Museum exists in Würzburg, Germany. Siebold-Museum on , Schlüchtern, Germany. Nagasaki, Japan, pays tribute to Siebold by housing the Siebold Memorial Museum on property adjacent to Siebold's former residence in the Narutaki neighborhood, the first museum dedicated to a non-Japanese in Japan. His collections
Philosophy The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians. Probability theory is an established field of study in mathematics. It has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century, and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century. In axiomatic form, mathematical statements about probability theory carry the same sort of epistemological confidence within the philosophy of mathematics as are shared by other mathematical statements. The mathematical analysis originated in observations of the behaviour of game equipment such as playing cards and dice, which are designed specifically to introduce random and equalized elements; in mathematical terms, they are subjects of indifference. This is not the only way probabilistic statements are used in ordinary human language: when people say that "it will probably rain", they typically do not mean that the outcome of rain versus not-rain is a random factor that the odds currently favor; instead, such statements are perhaps better understood as qualifying their expectation of rain with a degree of confidence. Likewise, when it is written that "the most probable explanation" of the name of Ludlow, Massachusetts "is that it was named after Roger Ludlow", what is meant here is not that Roger Ludlow is favored by a random factor, but rather that this is the most plausible explanation of the evidence, which admits other, less likely explanations. Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast the representation of probabilistic statements as an expression of the degree of confidence by which the beliefs they express are held. Though probability initially had somewhat mundane motivations, its modern influence and use is widespread, ranging from evidence-based medicine, through six sigma, all the way to the probabilistically checkable proof and the string theory landscape. Classical definition The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition. Developed from studies of games of chance (such as rolling dice), it states that probability is shared equally between all the possible outcomes, provided these outcomes can be deemed equally likely. This can be represented mathematically as follows: if a random experiment can result in N mutually exclusive and equally likely outcomes and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by P(A) = N_A / N. There are two clear limitations to the classical definition. Firstly, it is applicable only to situations in which there is only a 'finite' number of possible outcomes. But some important random experiments, such as tossing a coin until it lands heads, give rise to an infinite set of outcomes. And secondly, you need to determine in advance that all the possible outcomes are equally likely without relying on the notion of probability to avoid circularity—for instance, by symmetry considerations. Frequentism Frequentists posit that the probability of an event is its relative frequency over time, i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. 
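A minimal sketch, not from the original text and assuming a fair six-sided die and only Python's standard library, of the two readings just described: the classical count of favourable outcomes, and the frequentist relative frequency over repeated trials.

```python
import random
from fractions import Fraction

# Classical definition: P(A) = N_A / N when all N outcomes are equally likely.
outcomes = list(range(1, 7))                # a fair six-sided die, assumed here
event_A = {2, 4, 6}                         # "an even number is rolled"
p_classical = Fraction(len(event_A), len(outcomes))
print("classical P(A) =", p_classical)      # 1/2

# Frequentist reading: the relative frequency of A over repeated trials
# settles near P(A) as the number of trials grows.
random.seed(0)
for n in (100, 10_000, 1_000_000):
    hits = sum(random.randint(1, 6) in event_A for _ in range(n))
    print(f"n = {n:>9}: relative frequency = {hits / n:.4f}")
```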
The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting a heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity. If we denote by n_A the number of occurrences of an event A in n trials, then if n_A/n tends to a limit p as n goes to infinity, we say that P(A) = p. The frequentist view has its own problems. It is of course impossible to actually perform an infinity of repetitions of a random experiment to determine the probability of an event. But if only a finite number of repetitions of the process are performed, different relative frequencies will appear in different series of trials. If these relative frequencies are to define the probability, the probability will be slightly different every time it is measured. But the real probability should be the same every time. If we acknowledge the fact that we can only measure a probability with some error of measurement attached, we still get into problems, as the error of measurement can only be expressed as a probability, the very concept we are trying to define. This renders even the frequency definition circular; see for example “What is the Chance of an Earthquake?” Subjectivism Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey (p 182) and de Finetti (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent. Evidence casts doubt on whether humans actually hold coherent beliefs. The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or lesser than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities, known as the reference class problem. The "sunrise problem" provides an example. Propensity Propensity theorists think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind or to yield a long run relative frequency of such an outcome. This kind of objective probability is sometimes called 'chance'. 
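The sketch below illustrates the subjectivist idea of a prior degree of belief revised by evidence. The Beta-Bernoulli prior/posterior pair is a standard textbook device chosen here purely for illustration; it is not mentioned in the text above, and the names and numbers are arbitrary.

```python
# A degree of belief about a coin's bias P(heads), encoded as a Beta(a, b)
# prior and updated by observed tosses (Beta-Bernoulli conjugacy).

def update_beta(a, b, heads, tails):
    """Posterior Beta parameters after observing the given counts."""
    return a + heads, b + tails

a, b = 1, 1                       # uniform prior: no initial preference
a, b = update_beta(a, b, heads=7, tails=3)
posterior_mean = a / (a + b)      # point summary of the updated belief
print(f"posterior Beta({a}, {b}), mean belief about P(heads) = {posterior_mean:.3f}")
```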
Propensities, or chances, are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate given outcome types at persistent rates, which are known as propensities or chances. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. In contrast, a propensitist is able to use the law of large numbers to explain the behaviour of long-run frequencies. This law, which is a consequence of the axioms of probability, says that if (for example) a coin is tossed repeatedly many times, in such a way that its probability of landing heads is the same on each toss, and the outcomes are probabilistically independent, then the relative frequency of heads will be close to the probability of heads on each single toss. On this view, stable long-run frequencies are a manifestation of invariant single-case probabilities. In addition to explaining the emergence of stable relative frequencies, propensities are also invoked to make sense of probability attributions to single cases, such as the chance that a particular radioactive atom decays within a given time. 
On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. The four main evidential interpretations are the classical (e.g. Laplace's) interpretation, the subjective interpretation (de Finetti and Savage), the epistemic or inductive interpretation (Ramsey, Cox) and the logical interpretation (Keynes and Carnap). There are also evidential interpretations of probability covering groups, which are often labelled as 'intersubjective' (proposed by Gillies and Rowbottom). Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson. Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there's less agreement regarding physical probabilities. Bayesians consider the calculation of evidential probabilities to be both valid and necessary in statistics. This article, however, focuses on the interpretations of probability rather than theories of statistical inference. The terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. The word "frequentist" is especially tricky. To philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned. To scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST). Also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities. 
insightful procedure that illustrates the power of the third axiom, and its interaction with the remaining two axioms. Four of the immediate corollaries and their proofs are shown below: Monotonicity If A is a subset of, or equal to, B, then the probability of A is less than or equal to the probability of B: if A ⊆ B, then P(A) ≤ P(B). Proof of monotonicity In order to verify the monotonicity property, we set E_1 = A and E_2 = B \ A, where E_i = ∅ for i ≥ 3. From the properties of the empty set (∅), it is easy to see that the sets E_i are pairwise disjoint and that E_1 ∪ E_2 ∪ E_3 ∪ ... = B. Hence, we obtain from the third axiom that P(A) + P(B \ A) + Σ_{i=3}^∞ P(E_i) = P(B). Since, by the first axiom, the left-hand side of this equation is a series of non-negative numbers, and since it converges to P(B), which is finite, we obtain both P(A) ≤ P(B) and P(∅) = 0. The probability of the empty set P(∅) = 0. In some cases, ∅ is not the only event with probability 0. Proof of probability of the empty set As shown in the previous proof, P(∅) = 0. This statement can also be proved by contradiction: applying the third axiom to the sequence ∅, ∅, ∅, ... gives Σ_{i=1}^∞ P(∅) = P(∅). If P(∅) > 0, the left-hand side is infinite, while P(∅) must be finite (from the first axiom), a contradiction. Thus, P(∅) = 0. We have shown as a byproduct
of the proof of monotonicity that P(∅) = 0. The complement rule For any event A with complement A^c, P(A^c) = 1 − P(A). Proof of the complement rule Given that A and A^c are mutually exclusive and that A ∪ A^c = Ω: P(A ∪ A^c) = P(A) + P(A^c) (by axiom 3) and, P(A ∪ A^c) = P(Ω) = 1 (by axiom 2), so P(A^c) = 1 − P(A). The numeric bound It immediately follows from the monotonicity property that 0 ≤ P(E) ≤ 1 for every event E. Proof of the numeric bound Given the complement rule P(E^c) = 1 − P(E) and axiom 1, P(E^c) ≥ 0: 1 − P(E) ≥ 0, hence P(E) ≤ 1. Further consequences Another important property is: P(A ∪ B) = P(A) + P(B) − P(A ∩ B). This is called the addition law of probability, or the sum rule. That is, the probability that an event in A or B will happen is the sum of the probability of an event in A and the probability of an event in B, minus the probability of an event that is in both A and B. The proof of this is as follows: Firstly, P(A ∪ B) = P(A) + P(B \ A) (by Axiom 3, since A and B \ A are disjoint and their union is A ∪ B). So, P(A ∪ B) = P(A) + P(B \ (A ∩ B)) (by B \ A = B \ (A ∩ B)). Also, P(B) = P(B \ (A ∩ B)) + P(A ∩ B), and eliminating P(B \ (A ∩ B)) from both equations gives us the desired result. An extension of the addition law to any number of sets is the inclusion–exclusion principle. Setting B to the complement A^c of A in the addition law gives P(A^c) = 1 − P(A). That is, the probability that any event will not happen (or the event's complement) is 1 minus the probability that it will. Simple example: coin toss Consider a single coin-toss, and assume that the coin will either land heads (H) or tails (T) (but not both). No assumption is made as to whether the coin is fair. We may define: Ω = {H, T} and F = {∅, {H}, {T}, {H, T}}. Kolmogorov's axioms imply that: The probability of neither heads nor tails, P(∅), is 0. The probability of either heads or tails, P({H, T}), is 1. The sum of the probability of heads and the probability of tails is 1: P({H}) + P({T}) = 1. 
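A small self-check, not from the original text, that the corollaries above (probability of the empty set, complement rule, numeric bound, addition law) hold on the finite coin-toss space just defined; the 0.6/0.4 assignment is an arbitrary illustrative choice of a biased coin.

```python
from itertools import chain, combinations

# Finite probability space for a single (not necessarily fair) coin toss.
omega = frozenset({"H", "T"})
p_outcome = {"H": 0.6, "T": 0.4}            # any assignment summing to 1 works

def P(event):
    """Probability measure: additive over the elementary outcomes."""
    return sum(p_outcome[w] for w in event)

events = [frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

assert P(frozenset()) == 0                  # probability of the empty set
assert abs(P(omega) - 1) < 1e-12            # second axiom
for A in events:                            # complement rule and numeric bound
    assert abs(P(omega - A) - (1 - P(A))) < 1e-12
    assert 0 <= P(A) <= 1
for A in events:                            # addition law P(A∪B) = P(A)+P(B)-P(A∩B)
    for B in events:
        assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12
print("All corollaries hold for this finite example.")
```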
theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics or sequential estimation. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. History of probability The modern mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century (for example the "problem of points"). Christiaan Huygens published a book on the subject in 1657 and in the 19th century, Pierre Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, and measure theory and presented his axiom system for probability theory in 1933. This became the mostly undisputed axiomatic basis for modern probability theory; but, alternatives exist, such as the adoption of finite rather than countable additivity by Bruno de Finetti. Treatment Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more. Motivation Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space (or equivalently, the event space) is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1,3,5} is an element of the power set of the sample space of die rolls. These collections are called events. In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events. The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6. This event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1,2,3,4,5,6} has a probability of 1, that is, absolute certainty. 
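As an illustration of the additivity just described (not part of the original text), the sketch below assigns probabilities to die events by counting equally likely outcomes; the helper prob is a name introduced only for this example.

```python
from fractions import Fraction

# One roll of an honest die: each of the six faces has probability 1/6.
sample_space = frozenset(range(1, 7))

def prob(event):
    return Fraction(len(event), len(sample_space))

odd = frozenset({1, 3, 5})
disjoint_events = [frozenset({1, 6}), frozenset({3}), frozenset({2, 4})]
union = frozenset().union(*disjoint_events)          # {1, 2, 3, 4, 6}

print(prob(odd))                                     # 1/2
# Mutually exclusive events: the probability that any of them occurs
# equals the sum of their individual probabilities.
assert prob(union) == sum(prob(e) for e in disjoint_events) == Fraction(5, 6)
print(prob(union))                                   # 5/6
```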
When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable. A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function. This does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" (X(heads) = 0) and to the outcome "tails" the number "1" (X(tails) = 1). Discrete probability distributions deal with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walk, and tossing coins. Classical definition: Initially the probability of an event to occur was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3/6 = 1/2, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing. Modern definition: The modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in classical sense, denoted by Ω. It is then assumed that for each element x ∈ Ω, an intrinsic "probability" value f(x) is attached, which satisfies the following properties: f(x) ∈ [0, 1] for all x ∈ Ω, and Σ_{x ∈ Ω} f(x) = 1. That is, the probability function f(x) lies between zero and one for every value of x in the sample space Ω, and the sum of f(x) over all values x in the sample space Ω is equal to 1. An event is defined as any subset E of the sample space Ω. The probability of the event E is defined as P(E) = Σ_{x ∈ E} f(x). So, the probability of the entire sample space is 1, and the probability of the null event is 0. The function f(x) mapping a point in the sample space to the "probability" value is called a probability mass function. 
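A minimal sketch, not from the original text, of the coin example above: a random variable as a function on the sample space, and the probability mass function it induces; the names X, p and pmf are illustrative choices.

```python
# The coin example: a random variable as a function on the sample space,
# and the probability mass function it induces.
sample_space = ["heads", "tails"]
p = {"heads": 0.5, "tails": 0.5}           # probabilities of the elementary events

def X(outcome):
    """Random variable: heads -> 0, tails -> 1."""
    return 0 if outcome == "heads" else 1

pmf = {}
for outcome, prob in p.items():            # push p forward through X
    pmf[X(outcome)] = pmf.get(X(outcome), 0) + prob

assert abs(sum(pmf.values()) - 1) < 1e-12  # the f(x) values sum to 1
print(pmf)                                 # {0: 0.5, 1: 0.5}
```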
then there is a unique probability measure on the real line for any cdf, and vice versa. The measure corresponding to a cdf is said to be induced by the cdf. This measure coincides with the pmf for discrete variables and the pdf for continuous variables, making the measure-theoretic approach free of fallacies. The probability of a set E in the σ-algebra is defined as P(E) = ∫_{x ∈ E} dμ_F(x), where the integration is with respect to the measure μ_F induced by the cdf F. Along with providing better understanding and unification of discrete and continuous probabilities, measure-theoretic treatment also allows us to work on probabilities outside R^n, as in the theory of stochastic processes. For example, to study Brownian motion, probability is defined on a space of functions. When it is convenient to work with a dominating measure, the Radon–Nikodym theorem is used to define a density as the Radon–Nikodym derivative of the probability distribution of interest with respect to this dominating measure. Discrete densities are usually defined as this derivative with respect to a counting measure over the set of all possible outcomes. Densities for absolutely continuous distributions are usually defined as this derivative with respect to the Lebesgue measure. If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions. Classical probability distributions Certain random variables occur very often in probability theory because they describe many natural or physical processes well. Their distributions, therefore, have gained special importance in probability theory. Some fundamental discrete distributions are the discrete uniform, Bernoulli, binomial, negative binomial, Poisson and geometric distributions. Important continuous distributions include the continuous uniform, normal, exponential, gamma and beta distributions. Convergence of random variables In probability theory, there are several notions of convergence for random variables. They are listed below in the order of strength, i.e., any subsequent notion of convergence in the list implies convergence according to all of the preceding notions. Weak convergence A sequence of random variables X_1, X_2, ... converges weakly to the random variable X if their respective cumulative distribution functions F_1, F_2, ... converge to the cumulative distribution function F of X, wherever F is continuous. Weak convergence is also called convergence in distribution. Convergence in probability The sequence of random variables X_1, X_2, ... is said to converge towards the random variable X in probability if lim_{n→∞} P(|X_n − X| ≥ ε) = 0 for every ε > 0. Strong convergence The sequence of random variables X_1, X_2, ... is said to converge towards the random variable X strongly if P(lim_{n→∞} X_n = X) = 1. Strong convergence is also known as almost sure convergence. As the names indicate, weak convergence is weaker than strong convergence. In fact, strong convergence implies convergence in probability, and convergence in probability implies weak convergence. The reverse statements are not always true. Law of large numbers Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. 
This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered a pillar in the history of statistical theory and has had widespread influence. The law of large numbers (LLN) states that the sample average (X_1 + ... + X_n)/n of a sequence of independent and identically distributed random variables X_k converges towards their common expectation μ, provided that the expectation of |X_k| is finite. It is the different forms of convergence of random variables that separate the weak and the strong law of large numbers: Weak law: (X_1 + ... + X_n)/n → μ in probability as n → ∞. Strong law: (X_1 + ... + X_n)/n → μ almost surely as n → ∞. It follows from the LLN that if an event of probability p is observed repeatedly during independent experiments, the ratio of the observed frequency of that event to the total number of repetitions converges towards p. For example, if Y_1, Y_2, ... are independent Bernoulli random variables taking values 1 with probability p and 0 with probability 1 − p, then E(Y_i) = p for all i, so that the sample average (Y_1 + ... + Y_n)/n converges to p almost surely. Central limit theorem The central limit theorem (CLT) explains the ubiquitous occurrence of the normal distribution in nature, and this theorem, according to David Williams, "is one of the great results of mathematics." The theorem states that the average of many independent and identically distributed random variables with finite variance tends towards a normal distribution irrespective of the distribution followed by the original random variables. Formally, let X_1, X_2, ... be independent random variables with mean μ and variance σ² > 0. Then the sequence of random variables Z_n = (X_1 + ... + X_n − nμ)/(σ√n) converges in distribution to a standard normal random variable. For some classes of random variables, the classic central limit theorem works rather fast, as illustrated in the Berry–Esseen theorem; for example, the distributions with finite first, second, and third moments from the exponential family. On the other hand, for some random variables of the heavy tail and fat tail variety, it works very slowly or may not work at all: in such cases one may use the Generalized Central Limit Theorem (GCLT). 
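A simulation sketch of the two theorems, not from the original text and assuming only Python's standard library: sample means of Bernoulli variables settling near p (LLN), and standardized means of uniform variables looking approximately standard normal (CLT); the parameters are arbitrary.

```python
import random
import statistics

random.seed(1)

# Law of large numbers: sample means of i.i.d. Bernoulli(p) variables
# settle near p as the number of trials grows.
p = 0.3
for n in (100, 10_000, 1_000_000):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    print(f"n = {n:>9}: sample mean = {statistics.fmean(xs):.4f}")

# Central limit theorem: standardized means of i.i.d. Uniform(0, 1) variables
# are approximately standard normal (sample mean near 0, variance near 1).
n, reps = 50, 20_000
mu, sigma = 0.5, (1 / 12) ** 0.5            # mean and std. dev. of Uniform(0, 1)
z = [(statistics.fmean([random.random() for _ in range(n)]) - mu) / (sigma / n ** 0.5)
     for _ in range(reps)]
print("standardized means:", round(statistics.fmean(z), 3), round(statistics.pvariance(z), 3))
```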
upper and lower limits is always equal to zero. If the interval is replaced by any measurable set , the according equality still holds: . A continuous random variable is a random variable whose probability distribution is continuous. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others. Cumulative distribution function Continuous probability distributions as defined above are precisely those with an absolutely continuous cumulative distribution function. In this case, the cumulative distribution function has the form where is a density of the random variable with regard to the distribution . Note on terminology: Some authors use the term "continuous distribution" to denote all distributions whose cumulative distribution function is continuous, instead of requiring absolute continuity, which means all distributions such that for all . This includes the (absolutely) continuous distributions defined above, but it also includes singular distributions, which are neither absolutely continuous nor discrete nor a mixture of those, and do not have a density. An example is given by the Cantor distribution. For a more general definition of density functions and the equivalent absolute continuous measures see absolutely continuous measure. Kolmogorov definition In the measure-theoretic formalization of probability theory, a random variable is defined as a measurable function from a probability space to a measurable space . Given that probabilities of events of the form satisfy Kolmogorov's probability axioms, the probability distribution of is the pushforward measure of , which is a probability measure on satisfying . Other kinds of distributions Continuous and discrete distributions with support on or are extremely useful to model a myriad of phenomena, since most practical distributions are supported on relatively simple subsets, such as hypercubes or balls. However, this is not always the case, and there exist phenomena with supports that are actually complicated curves within some space or similar. In these cases, the probability distribution is supported on the image of such curve, and is likely to be determined empirically, rather than finding a closed formula for it. One example is shown in the figure to the right, which displays the evolution of a system of differential equations (commonly known as the Rabinovich–Fabrikant equations) that can be used to model the behaviour of Langmuir waves in plasma. When this phenomenon is studied, the observed states from the subset are as indicated in red. So one could ask what is the probability of observing a state in a certain position of the red subset; if such a probability exists, it is called the probability measure of the system. This kind of complicated support appears quite frequently in dynamical systems. It is not simple to establish that the system has a probability measure, and the main problem is the following. Let be instants in time and a subset of the support; if the probability measure exists for the system, one would expect the frequency of observing states inside set would be equal in interval and , which might not happen; for example, it could oscillate similar to a sine, , whose limit when does not converge. Formally, the measure exists only if the limit of the relative frequency converges when the system is observed into the infinite future. The branch of dynamical systems that studies the existence of a probability measure is ergodic theory. 
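A quick numerical illustration, not from the original text, that an absolutely continuous cdf is the integral of its density, using the exponential distribution as the example; the midpoint-rule integrator is a rough helper written only for this sketch.

```python
import math

# For an absolutely continuous distribution the cdf is the integral of the
# density: here the exponential distribution with rate 1.
LAM = 1.0

def pdf(x):
    return LAM * math.exp(-LAM * x)

def cdf(x):
    return 1.0 - math.exp(-LAM * x)

def integrate(f, a, b, steps=100_000):
    """Midpoint rule; crude, but enough for this illustration."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

x = 2.0
print(cdf(x), integrate(pdf, 0.0, x))   # the two numbers agree to several digits
```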
Note that even in these cases, the probability distribution, if it exists, might still be termed "continuous" or "discrete" depending on whether the support is uncountable or countable, respectively. Random number generation Most algorithms are based on a pseudorandom number generator that produces numbers that are uniformly distributed in the half-open interval [0,1). These random variates are then transformed via some algorithm to create a new random variate having the required probability distribution. With this source of uniform pseudo-randomness, realizations of any random variable can be generated. For example, suppose U has a uniform distribution between 0 and 1. To construct a random Bernoulli variable X for some 0 < p < 1, we define X = 1 if U < p and X = 0 otherwise, so that P(X = 1) = p and P(X = 0) = 1 − p. This random variable X has a Bernoulli distribution with parameter p. Note that this is a transformation of a discrete random variable. For a distribution function F of a continuous random variable, a continuous random variable must be constructed. F⁻¹, an inverse function of F, relates to the uniform variable U: X = F⁻¹(U). For example, suppose a random variable X that has an exponential distribution F(x) = 1 − e^(−λx) must be constructed. F(x) = u exactly when x = −ln(1 − u)/λ, so F⁻¹(u) = −ln(1 − u)/λ, and if U has a uniform distribution on (0, 1), then the random variable X is defined by X = F⁻¹(U) = −ln(1 − U)/λ. This X has an exponential distribution of rate λ. A frequent problem in statistical simulations (the Monte Carlo method) is the generation of pseudo-random numbers that are distributed in a given way. Common probability distributions and their applications The concept of the probability distribution and the random variables which they describe underlies the mathematical discipline of probability theory, and the science of statistics. There is spread or variability in almost any value that can be measured in a population (e.g. height of people, durability of a metal, sales growth, traffic flow, etc.); almost all measurements are made with some intrinsic error; in physics, many processes are described probabilistically, from the kinetic properties of gases to the quantum mechanical description of fundamental particles. For these and many other reasons, simple numbers are often inadequate for describing a quantity, while probability distributions are often more appropriate. The following is a list of some of the most common probability distributions, grouped by the type of process that they are related to. For a more complete list, see list of probability distributions, which groups by the nature of the outcome being considered (discrete, continuous, multivariate, etc.) All of the univariate distributions below are singly peaked; that is, it is assumed that the values cluster around a single point. In practice, actually observed quantities may cluster around multiple values. Such quantities can be modeled using a mixture distribution. Linear growth (e.g. errors, offsets) Normal distribution (Gaussian distribution), for a single such quantity; the most commonly used continuous distribution Exponential growth (e.g. prices, incomes, populations) Log-normal distribution, for a single such quantity whose log is normally distributed Pareto distribution, for a single such quantity whose log is exponentially distributed; the prototypical power law distribution Uniformly distributed quantities Discrete uniform distribution, for a finite set of values (e.g. the outcome of a fair die) Continuous uniform distribution, for continuously distributed values Bernoulli trials (yes/no events, with a given probability) Basic distributions: Bernoulli distribution, for the outcome of a single Bernoulli trial (e.g. 
success/failure, yes/no) Binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed total number of independent occurrences Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs Geometric distribution, for binomial-type observations but where the quantity of interest is the number of failures before the first success; a special case of the negative binomial distribution Related to sampling schemes over a finite population: Hypergeometric distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, using sampling without replacement Beta-binomial distribution, for the number of "positive occurrences" (e.g. successes, yes votes, etc.) given a fixed number of total occurrences, sampling using a Pólya urn model (in some sense, the "opposite" of sampling without replacement) Categorical outcomes (events with possible outcomes) Categorical distribution, for a single categorical outcome (e.g. yes/no/maybe in a survey); a generalization of the Bernoulli distribution Multinomial distribution, for the number of each type of categorical outcome, given a fixed number of total outcomes; a generalization of the binomial distribution Multivariate hypergeometric distribution, similar to the multinomial distribution, but using sampling
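As referenced above, here is a minimal sketch of the two constructions from the random number generation section: a Bernoulli(p) variate obtained by thresholding a uniform variate, and an exponential variate obtained through the inverse of its cumulative distribution function. It assumes only that NumPy is available; the function names are illustrative.

import numpy as np

rng = np.random.default_rng(seed=0)

def bernoulli(p, size):
    # X = 1 if U < p, else 0, for U uniform on [0, 1): P(X = 1) = p.
    u = rng.random(size)
    return (u < p).astype(int)

def exponential(lam, size):
    # Inverse-CDF method: F(x) = 1 - exp(-lam*x), so F^-1(u) = -ln(1 - u)/lam.
    u = rng.random(size)
    return -np.log1p(-u) / lam

x_bern = bernoulli(p=0.3, size=100_000)
x_exp = exponential(lam=2.0, size=100_000)
print("Bernoulli sample mean (expect about 0.3):", x_bern.mean())
print("Exponential sample mean (expect about 0.5):", x_exp.mean())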
all subsets whose probability can be measured, and is the probability function, or probability measure, that assigns a probability to each of these measurable subsets. Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as a probability mass function. On the other hand, continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the case of real numbers, the continuous probability distribution is described by the cumulative distribution function. In the continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various different values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector – a list of two or more random variables – taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. A commonly encountered multivariate distribution is the multivariate normal distribution. Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function. Terminology Some key concepts and terms, widely used in the literature on the topic of probability distributions, are listed below. Basic terms Random variable: takes values from a sample space; probabilities describe which values and sets of values are more likely to be taken. Event: set of possible values (outcomes) of a random variable that occurs with a certain probability. Probability function or probability measure: describes the probability that the event occurs. Cumulative distribution function: function evaluating the probability that a random variable will take a value less than or equal to a given number x (only for real-valued random variables). Quantile function: the inverse of the cumulative distribution function. Gives the value x such that, with probability q, the random variable will not exceed x. Discrete probability distributions Discrete probability distribution: for many random variables with finitely or countably infinitely many values. Probability mass function (pmf): function that gives the probability that a discrete random variable is equal to some value.
Frequency distribution: a table that displays the frequency of various outcomes in a sample. Relative frequency distribution: a frequency distribution where each value has been divided (normalized) by the number of outcomes in the sample (i.e. the sample size). Categorical distribution: for discrete random variables with a finite set of values. Continuous probability distributions Continuous probability distribution: for many random variables with uncountably many values. Probability density function (pdf) or Probability density: function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. Related terms Support: set of values that can be assumed with non-zero probability by the random variable; for a random variable X, it is sometimes denoted supp(X). Tail: the regions close to the bounds of the random variable, if the pmf or pdf are relatively low therein; usually has the form {X > a} or {X < b}, or a union thereof. Head: the region where the pmf or pdf is relatively high; usually has the form {a < X < b}. Expected value or mean: the weighted average of the possible values, using their probabilities as their weights; or the continuous analog thereof. Median: the value such that the set of values less than the median, and the set greater than the median, each have probabilities no greater than one-half. Mode: for a discrete random variable, the value with highest probability; for a continuous random variable, a location at which the probability density function has a local peak. Quantile: the q-quantile is the value x such that P(X ≤ x) = q. Variance: the second moment of the pmf or pdf about the mean; an important measure of the dispersion of the distribution. Standard deviation: the square root of the variance, and hence another measure of dispersion. Symmetry: a property of some distributions in which the portion of the distribution to the left of a specific value (usually the median) is a mirror image of the portion to its right. Skewness: a measure of the extent to which a pmf or pdf "leans" to one side of its mean; the third standardized moment of the distribution. Kurtosis: a measure of the "fatness" of the tails of a pmf or pdf; the fourth standardized moment of the distribution. Cumulative distribution function In the special case of a real-valued random variable, the probability distribution can equivalently be represented by a cumulative distribution function instead of a probability measure. The cumulative distribution function F of a random variable X with regard to a probability distribution P is defined as F(x) = P(X ≤ x). The cumulative distribution function of any real-valued random variable has the properties: F is non-decreasing; F is right-continuous; 0 ≤ F(x) ≤ 1 for all x; the limit of F(x) as x tends to −∞ is 0 and the limit as x tends to +∞ is 1; and P(a < X ≤ b) = F(b) − F(a) for all a < b. Conversely, any function that satisfies the first four of the properties above is the cumulative distribution function of some probability distribution on the real numbers. Any probability distribution can be decomposed as the sum of a discrete, a continuous and a singular continuous distribution, and thus any cumulative distribution function admits a decomposition as the sum of the three corresponding cumulative distribution functions.
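These defining properties are easy to check numerically for a concrete distribution. The sketch below is only an illustration, assuming SciPy is available, using a standard normal distribution: the cumulative distribution function is non-decreasing with limits 0 and 1, interval probabilities are differences of its values, and the quantile function (called ppf in SciPy) inverts it.

import numpy as np
from scipy import stats

dist = stats.norm(loc=0.0, scale=1.0)      # an example of a real-valued distribution

xs = np.linspace(-8, 8, 2001)
F = dist.cdf(xs)
assert np.all(np.diff(F) >= 0)             # F is non-decreasing
print(F[0], F[-1])                         # close to 0 and 1 in the tails

a, b = -1.0, 2.0
print(dist.cdf(b) - dist.cdf(a))           # P(a < X <= b) = F(b) - F(a)

q = 0.975
x_q = dist.ppf(q)                          # quantile function: inverse of the CDF
print(x_q, dist.cdf(x_q))                  # cdf(ppf(q)) recovers q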
Discrete probability distribution A discrete probability distribution is the probability distribution of a random variable that can take on only a countable number of values (almost surely) which means that the probability of any event can be expressed as a (finite or countably infinite) sum: , where is a countable set. Thus the discrete random variables are exactly those with a probability mass function . In the case where the range of values is countably infinite, these values have to decline to zero fast enough for the probabilities to add up to 1. For example, if for , the sum of probabilities would be . A discrete random variable is a random variable whose probability distribution is discrete. Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, the negative binomial distribution and categorical distribution. When a sample (a set of observations) is drawn from a larger population, the sample points have an empirical distribution that is discrete, and which provides information about the population distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices. Cumulative distribution function A real-valued discrete random variable can equivalently be defined as a random variable whose cumulative distribution function increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant in intervals without jumps. The points where jumps occur are precisely the values which the random variable may take. Thus the cumulative distribution function has the form . Note that the points where the cdf jumps always form a countable set; this may be any countable set and thus may even be dense in the real numbers. Dirac delta representation A discrete probability distribution is often represented with Dirac measures, the probability distributions of deterministic random variables. For any outcome , let be the Dirac measure concentrated at . Given a discrete probability distribution, there is a countable set with and a probability mass function . If is any event, then or in short, . Similarly, discrete distributions can be represented with the Dirac delta function as a generalized probability density function , where , which means for any event Indicator-function representation For a discrete random variable , let be the values it can take with non-zero probability. Denote These are disjoint sets, and for such sets It follows that the probability that takes any value except for is zero, and thus one can write as except on a set of probability zero, where is the indicator function of . This may serve as an alternative definition of discrete random variables. One-point distribution A special case is the discrete distribution of a random variable that can take on only one fixed value; in other words, it is a deterministic distribution. Expressed formally, the random variable has a one-point distribution if it has a possible outcome such that All other possible outcomes then have probability 0. Its cumulative distribution function jumps immediately from 0 to 1. 
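As an illustration of the points above, here is a minimal sketch (plain Python, names illustrative) of a discrete distribution with probability mass function p(n) = (1/2)^n on the positive integers, truncated for practicality: the probabilities decline fast enough to sum to 1, and the cumulative distribution function is a step function that increases only by jumps at the support points.

from fractions import Fraction

# pmf p(n) = (1/2)**n for n = 1, 2, ...; truncated at n = 50 for illustration
pmf = {n: Fraction(1, 2**n) for n in range(1, 51)}

total = sum(pmf.values())
print(float(total))                        # 1 - 2**-50, essentially 1

def cdf(x):
    # Step-function CDF: sum the pmf over support points less than or equal to x.
    return float(sum(p for n, p in pmf.items() if n <= x))

print(cdf(0.5), cdf(1), cdf(2.7), cdf(100))   # 0.0, 0.5, 0.75, ~1.0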
Continuous probability distribution A continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable X has a continuous probability distribution if there is a function f such that for each interval I = [a, b] the probability of X belonging to I is given by the integral of f over I: P(a ≤ X ≤ b) = ∫_a^b f(x) dx. This is the definition of a probability density function, so that continuous probability distributions are exactly those with a probability density function. In particular, the probability for X to take any single value a (that is, P(X = a)) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
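A short numerical check of this defining property, assuming SciPy is available: the probability of an interval is the integral of the density over that interval, and the integral over a degenerate interval [x, x] is zero.

from scipy import stats
from scipy.integrate import quad

f = stats.norm(loc=0.0, scale=1.0).pdf     # an example probability density

a, b = -1.0, 1.5
p_interval, _ = quad(f, a, b)              # P(a <= X <= b) as an integral of the density
print(p_interval)

p_point, _ = quad(f, 0.7, 0.7)             # coinciding limits: the integral is zero
print(p_point)                             # 0.0, so P(X = 0.7) = 0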
Kendall's coefficient of concordance, etc., are useful statistical tools. (B) Test-Retest Reliability: the test-retest procedure estimates the temporal consistency of the test. A test is administered twice to the same sample with a time interval between administrations, and the correlation between the two sets of scores is used as an estimate of reliability; testing conditions are assumed to be identical. (C) Internal Consistency Reliability: internal consistency reliability estimates the consistency of items with each other. Split-half reliability (with the Spearman–Brown prophecy formula) and Cronbach's alpha are popular estimates of this reliability. (D) Parallel Form Reliability: an estimate of consistency between two different instruments of measurement. The inter-correlation between two parallel forms of a test or scale is used as an estimate of parallel form reliability. Validity The validity of a scale or test is the ability of the instrument to measure what it purports to measure. Construct validity, content validity, and criterion validity are types of validity. Construct validity is estimated by convergent and discriminant validity and by factor analysis; convergent and discriminant validity are ascertained by correlations between similar or different constructs. Content validity is evaluated by subject matter experts. Criterion validity is the correlation between the test and a criterion variable (or variables) of the construct; regression analysis, multiple regression analysis, and logistic regression are used as estimates of criterion validity. Software applications: the R software has the ‘psych’ package, which is useful for classical test theory analysis. Modern test theory Modern test theory is based on the latent trait model. Every item estimates the ability of the test taker. The ability parameter is called theta (θ) and the difficulty parameter is called b. The two important assumptions are local independence and unidimensionality. Item response theory has three models: the one-parameter logistic model, the two-parameter logistic model, and the three-parameter logistic model. In addition, polytomous IRT models are also useful. The R software has the ‘ltm’ package, which is useful for IRT analysis. Factor Analysis Factor analysis is at the core of psychological statistics. It has two schools: (1) exploratory factor analysis and (2) confirmatory factor analysis. Exploratory Factor Analysis (EFA) Exploratory factor analysis begins without a theory, or with a very tentative theory. It is a dimension-reduction technique, useful in psychometrics, multivariate analysis of data, and data analytics. Typically, a k × k correlation or covariance matrix of variables is reduced to a k × r factor pattern matrix, where r < k. Principal component analysis and common factor analysis are two ways of extracting factors. Principal axis factoring, maximum likelihood factor analysis, alpha factor analysis, and image factor analysis are the most useful EFA methods. EFA employs various factor rotation methods, which can be classified into orthogonal (resulting in uncorrelated factors) and oblique (resulting in correlated factors). The ‘psych’ package in R is useful for EFA. Confirmatory Factor Analysis (CFA) Confirmatory factor analysis (CFA) is a factor analytic technique that begins with a theory and tests the theory by carrying out factor analysis. CFA is also called latent structure analysis; it treats factors as latent variables causing the actual observable variables. The basic equation of CFA is X = Λξ + δ, where X is the vector of observed variables, Λ is the matrix of structural coefficients (factor loadings), ξ is the vector of latent variables (factors), and δ is the vector of errors.
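As an illustration of this measurement model, here is a minimal NumPy sketch with made-up loadings (the numbers are not taken from the text): data generated according to X = Λξ + δ have a covariance matrix close to ΛΛᵀ + Θ, where Θ is the diagonal covariance of the errors, which is exactly the structure that confirmatory factor analysis fits to observed data.

import numpy as np

rng = np.random.default_rng(0)

# Single-factor measurement model X = Lambda*xi + delta with 4 observed variables.
Lam = np.array([[0.9], [0.8], [0.7], [0.6]])       # factor loadings (illustrative values)
theta = np.array([0.3, 0.4, 0.5, 0.6])             # error variances (illustrative values)

n = 50_000
xi = rng.normal(size=(n, 1))                        # latent factor scores
delta = rng.normal(size=(n, 4)) * np.sqrt(theta)    # measurement errors
X = xi @ Lam.T + delta                              # observed variables

implied = Lam @ Lam.T + np.diag(theta)              # model-implied covariance
sample = np.cov(X, rowvar=False)                    # sample covariance of the simulated data
print(np.round(implied, 2))
print(np.round(sample, 2))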
The parameters are usually estimated by maximum likelihood (ML) methods; however, other methods of estimation are also available. The chi-square test is very sensitive, and hence various fit measures are used instead. The R packages ‘sem’ and ‘lavaan’ are useful for this purpose. Experimental Design Experimental methods are very popular in psychology, with a tradition of more than 100 years; experimental psychology has the status of a sub-discipline within psychology. The statistical methods are applied for
designing and analyzing experimental data. The techniques involved include the t-test, ANOVA, ANCOVA, MANOVA, MANCOVA, the binomial test, the chi-square test, etc. Multivariate Behavioral Research Multivariate behavioral research is becoming very popular in psychology. These methods include multiple regression and prediction; moderated and mediated regression analysis; logistic regression; canonical correlations; cluster analysis; multi-level modeling; survival (failure-time) analysis; structural equation modeling; hierarchical linear modeling; etc., all of which are very useful for psychological statistics. Journals for statistical application for
Ball. Consequences album Cook played multiple roles on the 1977 concept album Consequences, written and produced by former 10cc members Kevin Godley and Lol Creme. A mixture of spoken comedy and progressive rock with an environmental subtext, Consequences started as a single that Godley and Creme planned to make to demonstrate their invention, an electric guitar effect called the Gizmo, which they developed in 10cc. The project grew into a three-LP box set. The comedy sections were originally intended to be performed by a cast including Spike Milligan and Peter Ustinov, but Godley and Creme eventually settled on Cook once they realised he could perform most parts himself. The storyline centres on the impending divorce of ineffectual Englishman Walter Stapleton (Cook) and his French wife Lulu (Judy Huxtable). While meeting their lawyers – the bibulous Mr. Haig and overbearing Mr. Pepperman (both played by Cook) – the encroaching global catastrophe interrupts proceedings with bizarre and mysterious happenings, which seem to centre on Mr. Blint (Cook), a musician and composer living in the flat below Haig's office, to which it is connected by a large hole in the floor. Although it has since developed a cult following due to Cook's presence, Consequences was released as punk was sweeping the UK and proved a resounding commercial failure, savaged by critics who found the music self-indulgent. The script and story have evident connections to Cook's own life – his then-wife Judy Huxtable plays Walter's wife. Cook's struggles with alcohol are mirrored in Haig's drinking, and there is a parallel between the fictional divorce of Walter and Lulu and Cook's own divorce from his first wife. The voice and accent Cook used for the character of Stapleton are similar to those of Cook's Beyond the Fringe colleague, Alan Bennett, and a book on Cook's comedy, How Very Interesting, speculates that the characters Cook plays in Consequences are caricatures of the four Beyond the Fringe cast members – the alcoholic Haig represents Cook, the tremulous Stapleton is Bennett, the parodically Jewish Pepperman is Miller, and the pianist Blint represents Moore. 1980s Cook starred in the LWT special Peter Cook & Co. in 1980. The show included comedy sketches, including a Tales of the Unexpected parody "Tales of the Much As We Expected". This involved Cook as Roald Dahl, explaining his name had been Ronald before he dropped the "n". The cast included Cleese, Rowan Atkinson, Beryl Reid, Paula Wilcox, and Terry Jones. Partly spurred by Moore's growing film star status, Cook moved to Hollywood in that year, and appeared as an uptight English butler to a wealthy American woman in a short-lived United States television sitcom, The Two of Us, also making cameo appearances in a couple of undistinguished films. In 1983, Cook played the role of Richard III in the first episode of Blackadder, "The Foretelling", which parodies Laurence Olivier's portrayal. In 1984, he played the role of Nigel, the mathematics teacher, in Jeannot Szwarc's film Supergirl, working alongside the evil Selena played by Faye Dunaway. He then narrated the short film Diplomatix by Norwegian comedy trio Kirkvaag, Lystad, and Mjøen, which won the "Special Prize of the City of Montreux" at the Montreux Comedy Festival in 1985. In 1986, he partnered Joan Rivers on her UK talk show. He appeared as Mr Jolly in 1987 in The Comic Strip Presents...''' episode "Mr. 
Jolly Lives Next Door", playing an assassin who covers the sound of his murders by playing Tom Jones records. That same year, Cook appeared in The Princess Bride as the "Impressive Clergyman" who officiates at the wedding ceremony between Buttercup and Prince Humperdinck, uttering the now-famous line "Mawage!" Also that year, he spent time working with humourist Martin Lewis on a political satire about the 1988 US presidential elections for HBO, but the script went unproduced. Lewis suggested that Cook team with Moore for the US Comic Relief telethon for the homeless. The duo reunited and performed their "One Leg Too Few" sketch. Cook again collaborated with Moore for the 1989 Secret Policeman's Biggest Ball. A 1984 commercial for John Harvey & Sons showed Cook at a poolside party drinking Harvey's Bristol Cream sherry. He then says to "throw away those silly little glasses" whereupon the other party guests toss their sunglasses in the swimming pool. In 1988, Cook appeared as a contestant on the improvisation comedy show Whose Line Is It Anyway? He was declared the winner, his prize being to read the credits in the style of a New York cab driver – a character he had portrayed in Peter Cook & Co.Cook occasionally called in to Clive Bull's night-time phone-in radio show on LBC in London. Using the name "Sven from Swiss Cottage", he mused on love, loneliness, and herrings in a mock Norwegian accent. Jokes included Sven's attempts to find his estranged wife, in which he often claimed to be telephoning the show from all over the world, and his hatred of the Norwegian obsession with fish. While Bull was clearly aware that Sven was fictional, he did not learn of the caller's real identity until later. Revival In late 1989, Cook married for the third time, to Malaysian-born property developer Chiew Lin Chong in Torbay, Devon. She provided him with some stability in his personal life, and he reduced his drinking to the extent that for a time he was teetotal. He lived alone in a small 18th-century house in Perrins Walk, Hampstead, while she kept her own property just away. Cook returned to the BBC as Sir Arthur Streeb-Greebling for an appearance with Ludovic Kennedy in A Life in Pieces. The 12 interviews saw Sir Arthur recount his life, based on the song "Twelve Days of Christmas". Unscripted interviews with Cook as Streeb-Greebling and satirist Chris Morris were recorded in late 1993 and broadcast as Why Bother? on BBC Radio 3 in 1994. Morris described them: On 17 December 1993, Cook appeared on Clive Anderson Talks Back as four characters – biscuit tester and alien abductee Norman House, football manager and motivational speaker Alan Latchley, judge Sir James Beauchamp, and rock legend Eric Daley. The following day, he appeared on BBC2 performing links for Arena's "Radio Night". He also appeared in the 1993 Christmas special of One Foot in the Grave ("One Foot in the Algarve"), playing a muckraking tabloid photographer. Before the end of the following year, his mother died, and a grief-stricken Cook returned to heavy drinking. He made his last television appearance on the show Pebble Mill at One in November 1994. Personal life and death Cook was married three times. He was first married to Wendy Snowden, whom he met at university, in 1963; they had two daughters, Lucy and Daisy; they divorced in 1971. Cook then married his second wife, model and actress Judy Huxtable, in 1973, the marriage formally ending in 1989 after they had been separated for some years. 
He married his third and final wife, Chiew Lin Chong, in 1989, to whom he remained married until his death. Cook became stepfather to Chong's daughter, Nina. Chong died at the age of 71 in November 2016. Cook died in a coma on 9 January 1995 at age 57 at the Royal Free Hospital in Hampstead, London, from a gastrointestinal haemorrhage, a complication probably resulting from years of heavy drinking. His body was cremated at Golders Green Crematorium, and his ashes were buried in an unmarked plot behind St John-at-Hampstead, not far from his home in Perrins Walk. Dudley Moore attended Cook's memorial service at St John-at-Hampstead on 1 May 1995. He and Martin Lewis presented a two-night memorial for Cook at The Improv in Los Angeles, on 15 and 16 November 1995, to mark what would have been Cook's 58th birthday. Cook was an avid spectator of most sports and was a supporter of Tottenham Hotspur football club. Legacy Cook is widely acknowledged as a strong influence on the many British comedians who followed him from the amateur dramatic clubs of British universities to the Edinburgh Festival Fringe, and then to radio and television. On his death, some critics chose to see Cook's life as tragic, insofar as the brilliance of his youth had not been sustained in his later years. However, Cook always maintained he had no ambitions for sustained success. He assessed happiness by his friendships and his enjoyment of life. Eric Idle said Cook had not wasted his talent, but rather that the newspapers had tried to waste him. Several friends honoured him with a dedication in the closing credits of Fierce Creatures (1997), a comedy film written by John Cleese about a zoo in peril of being closed. It starred Cleese alongside Jamie Lee Curtis, Kevin Kline, and Michael Palin. The dedication displays photos and the lifespan dates of Cook and of naturalist and humourist Gerald Durrell. In 1999, the minor planet 20468 Petercook, in the main asteroid belt, was named after Cook. Channel 4 broadcast Not Only But Always, a television film dramatising the relationship between Cook and Moore, with Rhys Ifans portraying Cook. At the 2005 Edinburgh Festival Fringe, a play, Pete and Dud: Come Again, written by Chris Bartlett and Nick Awde, examined the relationship from Moore's view. The play was transferred to London's West End at The Venue in 2006 and toured the UK the following year. During the West End run, Tom Goodman-Hill starred as Cook, with Kevin Bishop as Moore. A green plaque to honour Cook was unveiled by the Westminster City Council and the Heritage Foundation at the site of the Establishment Club, at 18 Greek Street, on 15 February 2009. A blue plaque was unveiled by the Torbay Civic Society on 17 November 2014 at Cook's place of birth, "Shearbridge", Middle Warberry Road, Torquay, with his widow Lin and other members of the family in attendance. A further blue plaque was commissioned and erected at the home of Torquay United, Plainmoor, Torquay, in 2015.
Filmography
Bachelor of Hearts (1958) – Pedestrian in Street (uncredited)
Ten Thousand Talents (short film, 1960) – voice
What's Going on Here (TV film, 1963)
The Wrong Box (1966) – Morris Finsbury
Alice in Wonderland (TV film, 1966) – Mad Hatter
Bedazzled (1967) – George Spiggott / The Devil
A Dandy in Aspic (1968) – Prentiss
Monte Carlo or Bust! (released in the US as Those Daring Young Men in Their Jaunty Jalopies) (1969) – Maj. Digby Dawlish
The Bed Sitting Room (1969) – Inspector
The Rise and Rise of Michael Rimmer (1970) – Michael Rimmer
Behind the Fridge (TV film, 1971) – Various Characters
An Apple a Day (TV film, 1971) – Mr Elwood Sr.
The Adventures of Barry McKenzie (1972) – Dominic
Saturday Night at the Baths (1975) – Himself, in theatre audience (uncredited)
Find the Lady (1976) – Lewenhak
Eric Sykes Shows a Few of Our Favourite Things (TV film, 1977) – Stagehand
The Hound of the Baskervilles (1978) – Sherlock Holmes
Derek and Clive Get the Horn (1979) – Clive
Peter Cook & Co. (TV Special, 1980) – Various Characters
Yellowbeard (1983) – Lord Percy Lambourn
Supergirl (1984) – Nigel
Kenny Everett's Christmas Carol (TV movie, 1985) – Ghost of Christmas Yet To Come
The Myth (1986) – Himself
The Princess Bride (1987) – The Impressive Clergyman
Whoops Apocalypse (1988) – Sir Mortimer Chris
Without a Clue (1988) – Norman Greenhough
Jake's Journey (TV movie, 1988) – King
Getting It Right (1989) – Mr Adrian
Great Balls of Fire! (1989) – First English Reporter
The Craig Ferguson Story (TV film, 1991) – Fergus Ferguson
Roger Mellie (1991) – Roger Mellie (voice)
One Foot in the Algarve (1993 episode of One Foot in the Grave) – Martin Trout
Black Beauty (1994) – Lord Wexmire (final film role)
Peter Cook Talks Golf Balls
on Greta Garbo. When Cook learned a few years later that the videotapes of the series were to be wiped, a common practice at the time, he offered to buy the recordings from the BBC but was refused because of copyright issues. He suggested he could purchase new tapes so that the BBC would have no need to erase the originals, but this was also turned down. Of the original 22 programmes, only eight still survive complete. A compilation of six half-hour programmes, The Best of... What's Left of... Not Only...But Also was shown on television and has been released on both VHS and DVD. With The Wrong Box (1966) and Bedazzled (1967), Cook and Moore began to act in films together. Directed by Stanley Donen, the underlying story of Bedazzled is credited to Cook and Moore and its screenplay to Cook. A comic parody of Faust, it stars Cook as George Spigott (the Devil) who tempts Stanley Moon (Moore), a frustrated, short-order chef, with the promise of gaining his heart's desire – the unattainable beauty and waitress at his cafe, Margaret Spencer (Eleanor Bron) – in exchange for his soul, but repeatedly tricks him. The film features cameo appearances by Barry Humphries as Envy and Raquel Welch as Lust. Moore composed the soundtrack music and co-wrote (with Cook) the songs performed in the film. His jazz trio backed Cook on the theme, a parodic anti-love song, which Cook delivered in a deadpan monotone and included his familiar put-down, "you fill me with inertia". In 1968, Cook and Moore briefly switched to ATV for four one-hour programmes titled Goodbye Again, based on the Pete and Dud characters. Cook's increasing alcoholism led him to become reliant on cue cards; the show was not a popular success, owing in part to a strike causing the suspension of the publication of the ITV listings magazine TV Times. John Cleese was also a cast member, who would become close lifelong friends with Cook and later collaborated on multiple projects together. 1970s In 1970, Cook took over a project initiated by David Frost for a satirical film about an opinion pollster who rises to become President of Great Britain. Under Cook's guidance, the character became modelled on Frost. The film, The Rise and Rise of Michael Rimmer, was not a success, although the cast contained notable names (including Cleese and Graham Chapman, who were co-writers). Cook became a favourite of the chat show circuit but his effort at hosting such a show for the BBC in 1971, Where Do I Sit?, was said by the critics to have been a disappointment. It was axed after only three episodes and was replaced by Michael Parkinson, the start of Parkinson's career as a chat show host. Parkinson later asked Cook what his ambitions were, Cook replied jocularly "[...] in fact, my ambition is to shut you up altogether you see!" Cook and Moore fashioned sketches from Not Only....But Also and Goodbye Again with new material into the stage revue called Behind the Fridge. This show toured Australia in 1972 before transferring to New York City in 1973, re-titled as Good Evening. Cook frequently appeared on and off stage the worse for drink. Nonetheless, the show proved very popular and it won Tony and Grammy Awards. When it finished, Moore stayed in the United States to pursue his film acting ambitions in Hollywood. Cook returned to Britain and in 1973, married the actress and model Judy Huxtable. Later, the more risqué humour of Pete and Dud went further on such LPs as "Derek and Clive". 
The first recording was initiated by Cook to alleviate boredom during the Broadway run of Good Evening and used material conceived years before for the two characters but considered too outrageous. One of these audio recordings was also filmed and therein tensions between the duo are seen to rise. Chris Blackwell circulated bootleg copies to friends in the music business. The popularity of the recording convinced Cook to release it commercially, although Moore was initially reluctant, fearing that his rising fame as a Hollywood star would be undermined. Two further Derek and Clive albums were released, the last accompanied by a film. Cook and Moore hosted Saturday Night Live on 24 January 1976 during the show's first season. They did a number of their classic stage routines, including "One Leg Too Few" and "Frog and Peach" among others, in addition to participating in some skits with the show's ensemble cast. In 1978, Cook appeared on the British music series Revolver as the manager of a ballroom where emerging punk and new wave acts played. For some groups, these were their first appearances on television. Cook's acerbic commentary was a distinctive aspect of the programme. In 1979, Cook recorded comedy-segments as B-sides to the Sparks 12-inch singles "Number One Song in Heaven" and "Tryouts for the Human Race". The main songwriter Ron Mael often began with a banal situation in his lyrics and then went at surreal tangents in the style of Cook and S. J. Perelman. Amnesty International performances Cook appeared at the first three fund-raising galas staged by Cleese and Martin Lewis on behalf of Amnesty International. The benefits were dubbed The Secret Policeman's Balls, though it wasn't until the third show in 1979 that the title was used. He performed on all three nights of the first show in April 1976, A Poke in the Eye (With a Sharp Stick), as an individual performer and as a member of the cast of Beyond the Fringe, which reunited for the first time since the 1960s. He also appeared in a Monty Python sketch, taking the place of Eric Idle. Cook was on the cast album of the show and in the film, Pleasure at Her Majesty's. He was in the second Amnesty gala in May 1977, An Evening Without Sir Bernard Miles. It was retitled The Mermaid Frolics for the cast album and TV special. Cook performed monologues and skits with Terry Jones. In June 1979, Cook performed all four nights of The Secret Policeman's Ball, teaming with Cleese. Cook performed a couple of solo pieces and a sketch with Eleanor Bron. He also led the ensemble in the finale – the "End of the World" sketch from Beyond the Fringe. In response to a barb in The Daily Telegraph that the show was recycled material, Cook wrote a satire of the summing-up by Justice Cantley in the trial of former Liberal Party leader Jeremy Thorpe, a summary now widely thought to show bias in favour of Thorpe. Cook performed it that same night (Friday 29 June – the third of the four nights) and the following night. The nine-minute opus, "Entirely a Matter for You", is considered by many fans and critics to be one of the finest works of Cook's career. Along with Cook, producer of the show Martin Lewis brought out an album on Virgin Records entitled Here Comes the Judge: Live, containing the live performance together with three studio tracks that further lampooned the Thorpe trial. Although unable to take part in the 1981 gala, Cook supplied the narration over the animated opening title sequence of the 1982 film of the show. With Lewis, he
start to make music, but certainly to become as visible as say Jefferson Airplane or somebody like that." DeRogatis views Revolver as another of "the first psychedelic rock masterpieces", along with Pet Sounds. The Beatles' May 1966 B-side "Rain", recorded during the Revolver sessions, was the first pop recording to contain reversed sounds. Together with further studio tricks such as varispeed, the song includes a droning melody that reflected the band's growing interest in non-Western musical form and lyrics conveying the division between an enlightened psychedelic outlook and conformism. Philo cites "Rain" as "the birth of British psychedelic rock" and describes Revolver as "[the] most sustained deployment of Indian instruments, musical form and even religious philosophy" heard in popular music up to that time. Author Steve Turner recognises the Beatles' success in conveying an LSD-inspired worldview on Revolver, particularly with "Tomorrow Never Knows", as having "opened the doors to psychedelic rock (or acid rock)". In author Shawn Levy's description, it was "the first true drug album, not [just] a pop record with some druggy insinuations", while musicologists Russell Reising and Jim LeBlanc credit the Beatles with "set[ting] the stage for an important subgenre of psychedelic music, that of the messianic pronouncement". Echard highlights early records by the 13th Floor Elevators and Love among the key psychedelic releases of 1966, along with "Shapes of Things", "Eight Miles High", "Rain" and Revolver. Originating from Austin, Texas, the first of these new bands came to the genre via the garage scene before releasing their debut album, The Psychedelic Sounds of the 13th Floor Elevators in December that year. It was the first rock album to include the adjective in its title, although the LP was released on an independent label and was little noticed at the time. Having formed in late 1965 with the aim of spreading LSD consciousness, the Elevators commissioned business cards containing an image of the third eye and the caption "Psychedelic rock". Rolling Stone highlights the 13th Floor Elevators as arguably "the most important early progenitors of psychedelic garage rock". The Beach Boys' October 1966 single "Good Vibrations" was another early pop song to incorporate psychedelic lyrics and sounds. The single's success prompted an unexpected revival in theremins and increased the awareness of analog synthesizers. As psychedelia gained prominence, Beach Boys-style harmonies would be ingrained into the newer psychedelic pop. 1967–69: Continued development Peak era In 1967, psychedelic rock received widespread media attention and a larger audience beyond local psychedelic communities. From 1967 to 1968, it was the prevailing sound of rock music, either in the more whimsical British variant, or the harder American West Coast acid rock. Music historian David Simonelli says the genre's commercial peak lasted "a brief year", with San Francisco and London recognised as the two key cultural centres. Compared with the American form, British psychedelic music was often more arty in its experimentation, and it tended to stick within pop song structures. Music journalist Mark Prendergast writes that it was only in US garage-band psychedelia that the often whimsical traits of UK psychedelic music were found. 
He says that aside from the work of the Byrds, Love and the Doors, there were three categories of US psychedelia: the "acid jams" of the San Francisco bands, who favoured albums over singles; pop psychedelia typified by groups such as the Beach Boys and Buffalo Springfield; and the "wigged-out" music of bands following in the example of the Beatles and the Yardbirds, such as the Electric Prunes, the Nazz, the Chocolate Watchband and the Seeds. In February 1967, the Beatles released the double A-side single "Strawberry Fields Forever" / "Penny Lane", which Ian MacDonald says launched both the "English pop-pastoral mood" typified by bands such as Pink Floyd, Family, Traffic and Fairport Convention, and English psychedelia's LSD-inspired preoccupation with "nostalgia for the innocent vision of a child". The Mellotron parts on "Strawberry Fields Forever" remain the most celebrated example of the instrument on a pop or rock recording. According to Simonelli, the two songs heralded the Beatles' brand of Romanticism as a central tenet of psychedelic rock. Jefferson Airplane's Surrealistic Pillow (February 1967) was one of the first albums to come out of San Francisco that sold well enough to bring national attention to the city's music scene. The LP tracks "White Rabbit" and "Somebody to Love" subsequently became top 10 hits in the US. Pink Floyd's "Arnold Layne" (March 1967) and "See Emily Play" (June 1967), both written by Syd Barrett, helped set the pattern for pop-psychedelia in the UK. There, "underground" venues like the UFO Club, Middle Earth Club, The Roundhouse, the Country Club and the Art Lab drew capacity audiences with psychedelic rock and ground-breaking liquid light shows. A major figure in the development of British psychedelia was the American promoter and record producer Joe Boyd, who moved to London in 1966. He co-founded venues including the UFO Club, produced Pink Floyd's "Arnold Layne", and went on to manage folk and folk rock acts including Nick Drake, the Incredible String Band and Fairport Convention. Psychedelic rock's popularity accelerated following the release of the Beatles' album Sgt. Pepper's Lonely Hearts Club Band (May 1967) and the staging of the Monterey Pop Festival in June. Sgt. Pepper was the first commercially successful work that critics recognised as a landmark aspect of psychedelia, and the Beatles' mass appeal meant that the record was played virtually everywhere. The album was highly influential on bands in the US psychedelic rock scene and its elevation of the LP format benefited the San Francisco bands. Among many changes brought about by its success, artists sought to imitate its psychedelic effects and devoted more time to creating their albums; the counterculture was scrutinised by musicians; and acts adopted its non-conformist sentiments. The 1967 Summer of Love saw a huge number of young people from across America and the world travel to Haight-Ashbury, boosting the area's population from 15,000 to around 100,000. It was prefaced by the Human Be-In event in March and reached its peak at the Monterey Pop Festival in June, the latter helping to make major American stars of Janis Joplin, lead singer of Big Brother and the Holding Company, Jimi Hendrix, and the Who. Several established British acts joined the psychedelic revolution, including Eric Burdon (previously of the Animals) and the Who, whose The Who Sell Out (December 1967) included the psychedelic-influenced "I Can See for Miles" and "Armenia City in the Sky". 
The Incredible String Band's The 5000 Spirits or the Layers of the Onion (July 1967) developed their folk music into a pastoral form of psychedelia. According to author Edward Macan, there ultimately existed three distinct branches of British psychedelic music. The first, dominated by Cream, the Yardbirds and Hendrix, was founded on a heavy, electric adaptation of the blues played by the Rolling Stones, adding elements such as the Who's power chord style and feedback. The second, considerably more complex form drew strongly from jazz sources and was typified by Traffic, Colosseum, If, and Canterbury scene bands such as Soft Machine and Caravan. The third branch, represented by the Moody Blues, Pink Floyd, Procol Harum and the Nice, was influenced by the later music of the Beatles. Several of the post-Sgt. Pepper English psychedelic groups developed the Beatles' classical influences further than either the Beatles or contemporaneous West Coast psychedelic bands. Among such groups, the Pretty Things abandoned their R&B roots to create S.F. Sorrow (December 1968), the first example of a psychedelic rock opera. International variants The US and UK were the major centres of psychedelic music, but in the late 1960s scenes began to develop across the world, including continental Europe, Australasia, Asia and south and Central America. In the later 1960s psychedelic scenes developed in a large number of countries in continental Europe, including the Netherlands with bands like The Outsiders, Denmark where it was pioneered by Steppeulvene, and Germany, where musicians began to fuse music of psychedelia and the electronic avant-garde. 1968 saw the first major German rock festival, the in Essen, and the foundation of the Zodiak Free Arts Lab in Berlin by Hans-Joachim Roedelius, and Conrad Schnitzler, which helped bands like Tangerine Dream and Amon Düül achieve cult status. A thriving psychedelic music scene in Cambodia, influenced by psychedelic rock and soul broadcast by US forces radio in Vietnam, was pioneered by artists such as Sinn Sisamouth and Ros Serey Sothea. In South Korea, Shin Jung-Hyeon, often considered the godfather of Korean rock, played psychedelic-influenced music for the American soldiers stationed in the country. Following Shin Jung-Hyeon, the band San Ul Lim (Mountain Echo) often combined psychedelic rock with a more folk sound. In Turkey, Anatolian rock artist Erkin Koray blended classic Turkish music and Middle Eastern themes into his psychedelic-driven rock, helping to found the Turkish rock scene with artists such as Cem Karaca, Mogollar, Baris Manco and Erkin Koray. In Brazil, the Tropicalia movement merged Brazilian and African rhythms with psychedelic rock. Musicians who were part of the movement include Caetano Veloso, Gilberto Gil, Os Mutantes, Gal Costa, Tom Zé, and the poet/lyricist Torquato Neto, all of whom participated in the 1968 album Tropicália: ou Panis et Circencis, which served as a musical manifesto. 1969–71: Decline By the end of the 1960s, psychedelic rock was in retreat. Psychedelic trends climaxed in the 1969 Woodstock festival, which saw performances by most of the major psychedelic acts, including Jimi Hendrix, Jefferson Airplane, and the Grateful Dead. LSD had been made illegal in the UK in September 1966 and in California in October; by 1967, it was outlawed throughout the United States. 
In 1969, the murders of Sharon Tate and Leno and Rosemary LaBianca by Charles Manson and his cult of followers, claiming to have been inspired by Beatles' songs such as "Helter Skelter", has been seen as contributing to an anti-hippie backlash. At the end of the same year, the Altamont Free Concert in California, headlined by the Rolling Stones, became notorious for the fatal stabbing of black teenager Meredith Hunter by Hells Angel security guards. George Clinton's ensembles Funkadelic and Parliament and their various spin-offs took psychedelia and funk to create their own unique style, producing over forty singles, including three in the US top ten, and three platinum albums. Brian Wilson of the Beach Boys, Brian Jones of the Rolling Stones, Peter Green and Danny Kirwan of Fleetwood Mac and Syd Barrett of Pink Floyd were early "acid casualties", helping to shift the focus of the respective bands of which they had been leading figures. Some groups, such as the Jimi Hendrix Experience and Cream, broke up. Hendrix died in London in September 1970, shortly after recording Band of Gypsys (1970), Janis Joplin died of a heroin overdose in October 1970 and they were closely followed by Jim Morrison of the Doors, who died in Paris in July 1971. By this point, many surviving acts had moved away from psychedelia into either more back-to-basics "roots rock", traditional-based, pastoral or whimsical folk, the wider experimentation of progressive rock, or riff-based heavy rock. Revivals and successors Psychedelic soul Following the lead of Hendrix in rock, psychedelia began to influence African American musicians, particularly the stars of the Motown label. This psychedelic soul was influenced by the civil rights movement, giving it a darker and more political edge than much psychedelic rock. Building on the funk sound of James Brown, it was pioneered from about 1968 by Sly and the Family Stone and The Temptations. Acts that followed them into this territory included Edwin Starr and the Undisputed Truth. George Clinton's interdependent Funkadelic and Parliament ensembles and their various spin-offs took the genre to its most extreme lengths making funk almost a religion in the 1970s, producing over forty singles, including three in the US top ten, and three platinum albums. While psychedelic rock began to waver at the end of the 1960s, psychedelic soul continued into the 1970s, peaking in popularity in the early years of the decade, and only disappearing in the late 1970s as tastes began to change. Songwriter Norman Whitfield wrote psychedelic soul songs for The Temptations and Marvin Gaye. Prog, heavy metal, and krautrock Many of the British musicians and bands that had embraced psychedelia went on to create progressive rock in the 1970s, including Pink Floyd, Soft Machine and members of Yes. King Crimson's album In the Court of the Crimson King (1969) has been seen as an important link between psychedelia and progressive rock. While bands such as Hawkwind maintained an explicitly psychedelic course into the 1970s, most dropped the psychedelic elements in favour of wider experimentation. The incorporation of jazz into the music of bands like Soft Machine and Can also contributed to the development of the jazz rock of bands like Colosseum. 
As they moved away from their psychedelic roots and placed increasing emphasis on electronic experimentation, German bands like Kraftwerk, Tangerine Dream, Can and Faust developed a distinctive brand of electronic rock, known as kosmische musik, or in the British press as "Kraut rock". The adoption of electronic synthesisers, pioneered by Popol Vuh from 1970, together with the work of figures like Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. Psychedelic rock, with
denied it at the time. "Eight Miles High" peaked at number 14 in the US and reached the top 30 in the UK. Contributing to psychedelia's emergence into the pop mainstream was the release of the Beach Boys' Pet Sounds (May 1966) and the Beatles' Revolver (August 1966). Often considered one of the earliest albums in the canon of psychedelic rock, Pet Sounds contained many elements that would be incorporated into psychedelia, with its artful experiments, psychedelic lyrics based on emotional longings and self-doubts, elaborate sound effects and new sounds on both conventional and unconventional instruments. The album track "I Just Wasn't Made for These Times" contained the first use of theremin sounds on a rock record. Scholar Philip Auslander says that even though psychedelic music is not normally associated with the Beach Boys, the "odd directions" and experiments in Pet Sounds "put it all on the map. ... basically that sort of opened the door – not for groups to be formed or to start to make music, but certainly to become as visible as say Jefferson Airplane or somebody like that."
As psychedelia gained prominence, Beach Boys-style harmonies would be ingrained into the newer psychedelic pop. 1967–69: Continued development Peak era In 1967, psychedelic rock received widespread media attention and a larger audience beyond local psychedelic communities. From 1967 to 1968, it was the prevailing sound of rock music, either in the more whimsical British variant, or the harder American West Coast acid rock. Music historian David Simonelli says the genre's commercial peak lasted "a brief year", with San Francisco and London recognised as the two key cultural centres. Compared with the American form, British psychedelic music was often more arty in its experimentation, and it tended to stick within pop song structures. Music journalist Mark Prendergast writes that it was only in US garage-band psychedelia that the often whimsical traits of UK psychedelic music were found. He says that aside from the work of the Byrds, Love and the Doors, there were three categories of US psychedelia: the "acid jams" of the San Francisco bands, who favoured albums over singles; pop psychedelia typified by groups such as the Beach Boys and Buffalo Springfield; and the "wigged-out" music of bands following in the example of the Beatles and the Yardbirds, such as the Electric Prunes, the Nazz, the Chocolate Watchband and the Seeds. In February 1967, the Beatles released the double A-side single "Strawberry Fields Forever" / "Penny Lane", which Ian MacDonald says launched both the "English pop-pastoral mood" typified by bands such as Pink Floyd, Family, Traffic and Fairport Convention, and English psychedelia's LSD-inspired preoccupation with "nostalgia for the innocent vision of a child". The Mellotron parts on "Strawberry Fields Forever" remain the most celebrated example of the instrument on a pop or rock recording. According to Simonelli, the two songs heralded the Beatles' brand of Romanticism as a central tenet of psychedelic rock. Jefferson Airplane's Surrealistic Pillow (February 1967) was one of the first albums to come out of San Francisco that sold well enough to bring national attention to the city's music scene. The LP tracks "White Rabbit" and "Somebody to Love" subsequently became top 10 hits in the US. Pink Floyd's "Arnold Layne" (March 1967) and "See Emily Play" (June 1967), both written by Syd Barrett, helped set the pattern for pop-psychedelia in the UK. There, "underground" venues like the UFO Club, Middle Earth Club, The Roundhouse, the Country Club and the Art Lab drew capacity audiences with psychedelic rock and ground-breaking liquid light shows. A major figure in the development of British psychedelia was the American promoter and record producer Joe Boyd, who moved to London in 1966. He co-founded venues including the UFO Club, produced Pink Floyd's "Arnold Layne", and went on to manage folk and folk rock acts including Nick Drake, the Incredible String Band and Fairport Convention. Psychedelic rock's popularity accelerated following the release of the Beatles' album Sgt. Pepper's Lonely Hearts Club Band (May 1967) and the staging of the Monterey Pop Festival in June. Sgt. Pepper was the first commercially successful work that critics recognised as a landmark aspect of psychedelia, and the Beatles' mass appeal meant that the record was played virtually everywhere. The album was highly influential on bands in the US psychedelic rock scene and its elevation of the LP format benefited the San Francisco bands. 
Among many changes brought about by its success, artists sought to imitate its psychedelic effects and devoted more time to creating their albums; the counterculture was scrutinised by musicians; and acts adopted its non-conformist sentiments. The 1967 Summer of Love saw a huge number of young people from across America and the world travel to Haight-Ashbury, boosting the area's population from 15,000 to around 100,000. It was prefaced by the Human Be-In event in March and reached its peak at the Monterey Pop Festival in June, the latter helping to make major American stars of Janis Joplin, lead singer of Big Brother and the Holding Company, Jimi Hendrix, and the Who. Several established British acts joined the psychedelic revolution, including Eric Burdon (previously of the Animals) and the Who, whose The Who Sell Out (December 1967) included the psychedelic-influenced "I Can See for Miles" and "Armenia City in the Sky". The Incredible String Band's The 5000 Spirits or the Layers of the Onion (July 1967) developed their folk music into a pastoral form of psychedelia. According to author Edward Macan, there ultimately existed three distinct branches of British psychedelic music. The first, dominated by Cream, the Yardbirds and Hendrix, was founded on a heavy, electric adaptation of the blues played by the Rolling Stones, adding elements such as the Who's power chord style and feedback. The second, a considerably more complex form, drew strongly from jazz sources and was typified by Traffic, Colosseum, If, and Canterbury scene bands such as Soft Machine and Caravan. The third branch, represented by the Moody Blues, Pink Floyd, Procol Harum and the Nice, was influenced by the later music of the Beatles. Several of the post-Sgt. Pepper English psychedelic groups developed the Beatles' classical influences further than either the Beatles or contemporaneous West Coast psychedelic bands. Among such groups, the Pretty Things abandoned their R&B roots to create S.F. Sorrow (December 1968), the first example of a psychedelic rock opera. International variants The US and UK were the major centres of psychedelic music, but in the late 1960s scenes began to develop across the world, including continental Europe, Australasia, Asia and South and Central America. In the later 1960s psychedelic scenes developed in a large number of countries in continental Europe, including the Netherlands with bands like The Outsiders, Denmark, where it was pioneered by Steppeulvene, and Germany, where musicians began to fuse the music of psychedelia with the electronic avant-garde. 1968 saw the first major German rock festival, held in Essen, and the foundation of the Zodiak Free Arts Lab in Berlin by Hans-Joachim Roedelius and Conrad Schnitzler, which helped bands like Tangerine Dream and Amon Düül achieve cult status. A thriving psychedelic music scene in Cambodia, influenced by psychedelic rock and soul broadcast by US forces radio in Vietnam, was pioneered by artists such as Sinn Sisamouth and Ros Serey Sothea. In South Korea, Shin Jung-Hyeon, often considered the godfather of Korean rock, played psychedelic-influenced music for the American soldiers stationed in the country. Following Shin Jung-Hyeon, the band San Ul Lim (Mountain Echo) often combined psychedelic rock with a more folk sound.
In Turkey, Anatolian rock artist Erkin Koray blended classic Turkish music and Middle Eastern themes into his psychedelic-driven rock, helping to found the Turkish rock scene alongside artists such as Cem Karaca, Mogollar and Baris Manco. In Brazil, the Tropicalia movement merged Brazilian and African rhythms with psychedelic rock. Musicians who were part of the movement include Caetano Veloso, Gilberto Gil, Os Mutantes, Gal Costa, Tom Zé, and the poet/lyricist Torquato Neto, all of whom participated in the 1968 album Tropicália: ou Panis et Circencis, which served as a musical manifesto. 1969–71: Decline By the end of the 1960s, psychedelic rock was in retreat. Psychedelic trends climaxed in the 1969 Woodstock festival, which saw performances by most of the major psychedelic acts, including Jimi Hendrix, Jefferson Airplane, and the Grateful Dead. LSD had been made illegal in the UK in September 1966 and in California in October; by 1967, it was outlawed throughout the United States. In 1969, the murders of Sharon Tate and Leno and Rosemary LaBianca by Charles Manson and his cult of followers, who claimed to have been inspired by Beatles' songs such as "Helter Skelter", were seen as contributing to an anti-hippie backlash. At the end of the same year, the Altamont Free Concert in California, headlined by the Rolling Stones, became notorious for the fatal stabbing of black teenager Meredith Hunter by Hells Angel security guards. Brian Wilson of the Beach Boys, Brian Jones of the Rolling Stones, Peter Green and Danny Kirwan of Fleetwood Mac and Syd Barrett of Pink Floyd were early "acid casualties", helping to shift the focus of the respective bands of which they had been leading figures. Some groups, such as the Jimi Hendrix Experience and Cream, broke up. Hendrix died in London in September 1970, shortly after recording Band of Gypsys (1970); Janis Joplin died of a heroin overdose in October 1970; and they were closely followed by Jim Morrison of the Doors, who died in Paris in July 1971. By this point, many surviving acts had moved away from psychedelia into either more back-to-basics "roots rock", traditional-based, pastoral or whimsical folk, the wider experimentation of progressive rock, or riff-based heavy rock. Revivals and successors Psychedelic soul Following the lead of Hendrix in rock, psychedelia began to influence African American musicians, particularly the stars of the Motown label. This psychedelic soul was influenced by the civil rights movement, giving it a darker and more political edge than much psychedelic rock. Building on the funk sound of James Brown, it was pioneered from about 1968 by Sly and the Family Stone and The Temptations. Acts that followed them into this territory included Edwin Starr and the Undisputed Truth. George Clinton's interdependent Funkadelic and Parliament ensembles and their various spin-offs took the genre to its most extreme lengths, making funk almost a religion in the 1970s, producing over forty singles, including three in the US top ten, and three platinum albums. While psychedelic rock began to waver at the end of the 1960s, psychedelic soul continued into the 1970s, peaking in popularity in the early years of the decade, and only disappearing in the late 1970s as tastes began to change.
Songwriter Norman Whitfield wrote psychedelic soul songs for The Temptations and Marvin Gaye. Prog, heavy metal, and krautrock Many of the British musicians and bands that had embraced psychedelia went on to create progressive rock in the 1970s, including Pink Floyd, Soft Machine and members of Yes. King Crimson's album In the Court of the Crimson King (1969) has been seen as an important link between psychedelia and progressive rock. While bands such as Hawkwind maintained an explicitly psychedelic course into the 1970s, most dropped the psychedelic elements in favour of wider experimentation. The incorporation of jazz into the music of bands like Soft Machine and Can also contributed to the development of the jazz rock of bands like Colosseum. As they moved away from their psychedelic roots and placed increasing emphasis on electronic experimentation, German bands like Kraftwerk, Tangerine Dream, Can and Faust developed a distinctive brand of electronic rock, known as kosmische musik, or in the British press as "Kraut rock". The adoption of electronic synthesisers, pioneered by Popol Vuh from 1970, together with the work of figures like Brian Eno (for a time the keyboard player with Roxy Music), would be a major influence on subsequent electronic rock. Psychedelic rock, with its distorted guitar sound, extended solos and adventurous compositions, has been seen as an important bridge between blues-oriented rock and later heavy metal. American bands whose loud, repetitive psychedelic rock emerged as early heavy metal included the Amboy Dukes and Steppenwolf. From England, two former guitarists with the Yardbirds, Jeff Beck and Jimmy Page, moved on to form key acts in the genre, The Jeff Beck Group and Led Zeppelin respectively. Other major pioneers of the genre had begun as blues-based psychedelic bands, including Black Sabbath, Deep Purple, Judas Priest and UFO. Psychedelic music also contributed to the origins of glam rock, with Marc Bolan changing his psychedelic folk duo into rock band T. Rex and becoming the first glam rock star from 1970. From 1971 David Bowie moved on from his early psychedelic work to develop his Ziggy Stardust persona, incorporating elements of professional make up, mime and performance into his
group for $3.3 billion. In February 2015, Philips acquired Volcano Corporation to strengthen its position in non-invasive surgery and imaging. In June 2016, Philips spun off its lighting division to focus on the healthcare division. In June 2017, Philips announced it would acquire US-based Spectranetics Corp, a manufacturer of devices to treat heart disease, for €1.9 billion (£1.68 billion), expanding its image-guided therapy business. In May 2016, Philips' lighting division Philips Lighting went through a spin-off process and became an independent public company named Philips Lighting N.V. In 2017, Philips launched Philips Ventures, with a health technology venture fund as its main focus. Philips Ventures invested in companies including Mytonomy (2017) and DEARhealth (2019). In 2018, the independent Philips Lighting N.V. was renamed Signify N.V. However, it continues to produce and market Philips-branded products such as Philips Hue color-changing LED light bulbs. Corporate affairs CEOs Past and present CEOs: 1891–1922: Gerard Philips 1922–1939: Anton Philips 1939–1961: Frans Otten 1961–1971: Frits Philips 1971–1977: Henk van Riemsdijk 1977–1981: Nico Rodenburg 1981–1982: Cor Dillen 1982–1986: Wisse Dekker 1986–1990: Cor van der Klugt 1990–1996: Jan Timmer 1996–2001: Cor Boonstra 2001–2011: Gerard Kleisterlee 2011–present: Frans van Houten CEOs lighting: 2003–2008: Theo van Deursen 2012–present: Eric Rondolat CFOs Past and present CFOs (chief financial officers): 1960–1968: Cor Dillen –1997: Dudley Eustace 1997–2005: Jan Hommen 2015–present: Abhijit Bhattacharya Current Executive Committee CEO: Frans van Houten CFO: Abhijit Bhattacharya COO: Sophie Bechu Chief Legal Officer: Marnix van Ginneken Chief Business Leader (Connected Care): Roy Jakobs Chief Business Leader (Personal Health): Deeptha Khanna Chief Business Leader (Image Guided Therapy): Bert van Meurs Chief Business Leader (Precision Diagnosis): Kees Wesdorp CEO Philips Domestic Appliances: Henk Siebren de Jong Chief of International Markets: Edwin Paalvast Chief Innovation & Strategy Officer: Shez Partovi Chief Market Leader (China): Andy Ho Chief Market Leader (North America): Vitor Rocha Chief Human Resources Officer: Daniela Seabrook Acquisitions Companies acquired by Philips through the years include ADAC Laboratories, Agilent Healthcare Solutions Group, Amperex, ATL Ultrasound, EKCO, Lifeline Systems, Magnavox, Marconi Medical Systems, Intermagnetics (of Latham, New York, purchased for $1.3 billion in 2006), Optiva, Preethi, Pye, Respironics, Inc., Sectra Mamea AB, Signetics, VISICU, Volcano, VLSI, Ximis, portions of Westinghouse and the consumer electronics operations of Philco and Sylvania. Philips abandoned the Sylvania trademark, which is now owned by Havells Sylvania except in Australia, Canada, Mexico, New Zealand, Puerto Rico and the US, where it is owned by Osram. Formed in November 1999 as an equal joint venture between Philips and Agilent Technologies, the light-emitting diode manufacturer Lumileds became a subsidiary of Philips Lighting in August 2005 and a fully owned subsidiary in December 2006. An 80.1 percent stake in Lumileds was sold to Apollo Global Management in 2017. On 19 September 2018, Philips reported that it had acquired US-based Blue Willow Systems, a developer of a cloud-based senior living community resident safety platform.
On 7 March 2019, Philips announced that it was acquiring the Healthcare Information Systems business of Carestream Health Inc., a US-based provider of medical imaging and healthcare IT solutions for hospitals, imaging centers, and specialty medical clinics. On 18 July 2019, Philips announced that it had expanded its patient management solutions in the US with the acquisition of Boston-based start-up company Medumo. On 27 August 2020, Philips announced the acquisition of Intact Vascular, Inc., a U.S.-based developer of medical devices for minimally invasive peripheral vascular procedures. On 18 December 2020, Philips and BioTelemetry, Inc., a leading U.S.-based provider of remote cardiac diagnostics and monitoring, announced that they had entered into a definitive merger agreement. On 19 January 2021, Philips announced the acquisition of Capsule Technologies, Inc., a provider of medical device integration and data technologies for hospitals and healthcare organizations. On 9 November 2021, Philips announced the acquisition of Cardiologs, an AI-powered cardiac diagnostic technology developer, to expand its cardiac diagnostics and monitoring portfolio. Operations Philips is registered in the Netherlands as a naamloze vennootschap (public corporation) and has its global headquarters in Amsterdam. At the end of 2013, Philips had 111 manufacturing facilities and 59 R&D facilities across 26 countries, and sales and service operations in around 100 countries. Philips is organized into three main divisions: Philips Consumer Lifestyle (formerly Philips Consumer Electronics and Philips Domestic Appliances and Personal Care), Philips Healthcare (formerly Philips Medical Systems), and Philips Lighting (former). Philips achieved total revenues of €22.579 billion in 2011, of which €8.852 billion were generated by Philips Healthcare, €7.638 billion by Philips Lighting, €5.823 billion by Philips Consumer Lifestyle and €266 million from group activities. At the end of 2011, Philips had a total of 121,888 employees, of whom around 44% were employed in Philips Lighting, 31% in Philips Healthcare and 15% in Philips Consumer Lifestyle. The lighting division was spun out as a new company called Signify, which uses the Philips brand under license. Philips invested a total of €1.61 billion in research and development in 2011, equivalent to 7.10% of sales. Philips Intellectual Property and Standards is the group-wide division responsible for licensing, trademark protection and patenting. Philips currently holds around 54,000 patent rights, 39,000 trademarks, 70,000 design rights and 4,400 domain name registrations. In the 2021 review of WIPO's annual World Intellectual Property Indicators, Philips ranked 5th in the world with 95 industrial design registrations published under the Hague System during 2020. This is down from its previous 4th-place ranking, with 85 industrial design registrations published in 2019. Asia Thailand Philips Thailand was established in 1952. It is a subsidiary that produces healthcare, lifestyle, and lighting products. Philips started manufacturing in Thailand in 1960 with an incandescent lamp factory. Philips has diversified its production facilities to include a fluorescent lamp factory and a luminaires factory, serving Thai and worldwide markets. Hong Kong Philips Hong Kong began operations in 1948. Philips Hong Kong houses the global headquarters of Philips' Audio Business Unit.
It also houses Philips' Asia Pacific regional office and headquarters for its Design Division, Domestic Appliances & Personal Care Products Division, Lighting Products Division and Medical System Products Division. In 1974, Philips opened a lamp factory in Hong Kong. This has a capacity of 200 million pieces a year and is certified with ISO 9001:2000 and ISO 14001. Its product portfolio includes prefocus, lens-end and E10 miniature light bulbs. Mainland China Philips established operations in Zhuhai, Guangdong, in 1990. The site mainly manufactures Philishaves and healthcare products. In early 2008, Philips Lighting, a division of Royal Philips Electronics, opened a small engineering center in Shanghai to adapt the company's products to vehicles in Asia. India Philips began operations in India in 1930, with the establishment of Philips Electrical Co. (India) Pvt Ltd in Kolkata as a sales outlet for imported Philips lamps. In 1938, Philips established its first Indian lamp manufacturing factory in Kolkata. In 1948, Philips started manufacturing radios in Kolkata. In 1959, a second radio factory was established near Pune. This was closed and sold around 2006. In 1957, the company converted into a public limited company, renamed "Philips India Ltd". In 1970, a new consumer electronics factory began operations in Pimpri near Pune. This is now called the 'Philips Healthcare Innovation Centre'. Also, a manufacturing facility, the 'Philips Centre for Manufacturing Excellence', was set up in Chakan, Pune in 2012. In 1996, the Philips Software Centre was established in Bangalore, later renamed the Philips Innovation Campus. In 2008, Philips India entered the water purifier market. In 2014, Philips was ranked 12th among India's most trusted brands according to the Brand Trust Report, a study conducted by Trust Research Advisory. Philips now operates in India as one of the country's most diversified healthcare companies, focusing broadly on imaging, ultrasound, MA & TC, and sleep and respiratory care products, and it aims to reach 40 million patients in India within the next two years. In 2020, Philips introduced mobile ICUs in order to support clinicians in meeting the rising demand for ICU beds due to the COVID-19 pandemic. Israel Philips has been active in Israel since 1948 and in 1998 set up a wholly owned subsidiary, Philips Electronics (Israel) Ltd. The company has over 700 employees in Israel and generated sales of over $300 million in 2007. Philips Medical Systems Technologies Ltd. (Haifa) is a developer and manufacturer of computerized tomography (CT), diagnostic and medical imaging systems. The company was founded in 1969 as Elscint by Elron Electronic Industries and was acquired by Marconi Medical Systems in 1998, which was itself acquired by Philips in 2001. Philips Semiconductors formerly had major operations in Israel; these now form part of NXP Semiconductors. On 1 August 2019, Philips acquired the Carestream HCIS division from Onex Corporation. As part of the acquisition, Algotec Systems Ltd (Carestream HCIS R&D), located in Raanana, Israel, changed ownership in a share deal; Algotec also changed its name to Philips Algotec and is now part of Philips HCIS. Philips HCIS is a provider of medical imaging systems. Pakistan Philips has been active in Pakistan since 1948 and has a wholly owned subsidiary, Philips Pakistan Limited (formerly Philips Electrical Industries of Pakistan Limited). The head office is in Karachi with regional sales offices in Lahore and Rawalpindi.
Europe France Philips France has its headquarters in Suresnes. The company employs over 3,600 people nationwide. Philips Lighting has manufacturing facilities in Chalon-sur-Saône (fluorescent lamps), Chartres (automotive lighting), Lamotte-Beuvron (architectural lighting by LEDs and professional indoor lighting), Longvic (lamps), Miribel (outdoor lighting), and Nevers (professional indoor lighting). Germany Philips Germany was founded in 1926 in Berlin. Its headquarters is now located in Hamburg, and over 4,900 people are employed in Germany. Hamburg Distribution center of the divisions Healthcare, Consumer Lifestyle, and Lighting. Philips Medical Systems DMC. Philips Innovative Technologies, Research Laboratories. Aachen Philips Innovative Technologies. Philips Innovation Services. Böblingen Philips Medical Systems, patient monitoring systems. Herrsching Philips Respironics. Ulm Philips Photonics, development and manufacture of vertical laser diodes (VCSELs) and photodiodes for sensing and data communication. Greece Philips Greece is headquartered in Halandri, Attica. As of 2012, Philips has no manufacturing plants in Greece, although it previously had audio, lighting and telecommunications factories. Italy Philips founded its Italian headquarters in 1918, basing it in Monza (Milan), where it still operates for commercial activities only. Hungary Philips founded PACH (Philips Assembly Centre Hungary) in 1992, producing televisions and consumer electronics in Székesfehérvár. After TPV entered the Philips TV business, the factory was moved under TP Vision, the new joint-venture company, in 2011. Production was transferred to Poland and China and the factory was closed in 2013. With Philips' acquisition of PLI in 2007, another Hungarian Philips factory emerged in Tamási, producing lamps under the name Philips IPSC Tamási, later Philips Lighting. The factory was renamed Signify in 2017 and still produces Philips lighting products. Poland Philips' operations in Poland include: a European financial and accounting centre in Łódź; Philips Lighting facilities in Bielsko-Biała, Piła, and Kętrzyn; and a Philips Domestic Appliances facility in Białystok. Portugal Philips started business in Portugal in 1927 as "Philips Portuguesa S.A.R.L.". Currently, Philips Portuguesa S.A. is headquartered in Oeiras near Lisbon. There were three Philips factories in Portugal: the FAPAE lamp factory in Lisbon; the Carnaxide magnetic-core memory factory near Lisbon, where the Philips Service organization was also based; and the Ovar factory in northern Portugal making camera components and remote control devices. The company still operates in Portugal with divisions for commercial lighting, medical systems and domestic appliances. Sweden Philips Sweden has two main sites: Kista, Stockholm County, with regional sales, marketing and a customer support organization; and Solna, Stockholm County, with the main office of the mammography division. United Kingdom Philips UK has its headquarters in Guildford. The company employs over 2,500 people nationwide. Philips Healthcare Informatics, Belfast, develops healthcare software products. Philips Consumer Products, Guildford, provides sales and marketing for televisions, including High Definition televisions, DVD recorders, hi-fi and portable audio, CD recorders, PC peripherals, cordless telephones, home and kitchen appliances, and personal care (shavers, hair dryers, body beauty and oral hygiene). Philips Dictation Systems, Colchester.
Philips Lighting: sales from Guildford and manufacture in Hamilton. Philips Healthcare, Guildford. Sales and technical support for X-ray, ultrasound, nuclear medicine, patient monitoring, magnetic resonance, computed tomography, and resuscitation products. Philips Research Laboratories, Cambridge (until 2008 based in Redhill, Surrey; originally the Mullard Research Laboratories). In the past, Philips UK also included: Consumer product manufacturing in Croydon Television Tube Manufacturing Mullard Simonstone Philips Business Communications, Cambridge: offered voice and data communications products, specialising in Customer Relationship Management (CRM) applications, IP Telephony, data networking, voice processing, command and control systems and cordless and mobile telephony. In 2006 the business was placed into a 60/40 joint venture with NEC. NEC later acquired 100 per cent ownership and the business was renamed NEC Unified Solutions. Philips Electronics Blackburn; vacuum tubes, capacitors, delay-lines, Laserdiscs, CDs. Philips Domestic Appliances Hastings: design and production of electric kettles and fan heaters, plus former EKCO brand "Thermotube" tubular heaters and "Hostess" domestic food warming trolleys. Mullard Southampton and Hazel Grove, Stockport. Originally brought together as a joint venture between Mullard and GEC as Associated Semiconductor Manufacturers, they developed and manufactured rectifiers, diodes, transistors, integrated circuits and electro-optical devices. These became Philips Semiconductors before becoming part of NXP. London Carriers, logistics and transport division. Mullard Equipment Limited (MEL), which produced products for the military. Ada (Halifax) Ltd, maker of washing machines, spin driers and refrigerators. Pye TVT Ltd of Cambridge Pye Telecommunications Ltd of Cambridge TMC Limited of Malmesbury North America Canada Philips Canada was founded in 1941 when it acquired Small Electric Motors Limited. It is well known in medical systems for diagnosis and therapy, lighting technologies, shavers, and consumer electronics. The Canadian headquarters are located in Markham, Ontario. For several years, Philips manufactured lighting products in two Canadian factories. The London, Ontario, plant opened in 1971. It produced A19 lamps (including the "Royale" long life bulbs), PAR38 lamps and T19 lamps (originally a Westinghouse lamp shape). Philips closed the factory in May 2003. The Trois-Rivières, Quebec plant was a Westinghouse facility which Philips continued to run after buying Westinghouse's lamp division in 1983. Philips closed this factory a few years later, in the late 1980s. Mexico Philips Mexicana SA de CV is headquartered in Mexico City. Philips Lighting has manufacturing facilities in: Monterrey, Nuevo León; Ciudad Juárez, Chihuahua; and Tijuana, Baja California. Philips Consumer Electronics has
a manufacturing facility in Ciudad Juárez, Chihuahua. Philips Domestic Appliances formerly operated a large factory in the Industrial Vallejo sector of Mexico City but this was closed in 2004. United States Philips' Electronics North American headquarters is in Andover, Massachusetts. In early 2018, it was announced that the US headquarters would move to Cambridge, Massachusetts, by 2020. Philips Lighting has its corporate office in Somerset, New Jersey; with manufacturing plants in Danville, Kentucky; Salina, Kansas; Dallas and Paris, Texas and distribution centers in Mountain Top, Pennsylvania; El Paso, Texas; Ontario, California; and Memphis, Tennessee. Philips Healthcare is headquartered in Cambridge, Massachusetts, and operates a health-tech hub in Nashville, Tennessee, with over 1,000 jobs. The North American sales organization is based in Bothell, Washington.
There are also manufacturing facilities in Andover, Massachusetts; Bothell, Washington; Baltimore, Maryland; Cleveland, Ohio; Foster City, California; Gainesville, Florida; Milpitas, California; and Reedsville, Pennsylvania. Philips Healthcare also formerly had a factory in Knoxville, Tennessee. Philips Consumer Lifestyle has its corporate office in Stamford, Connecticut. Philips Lighting has a Color Kinetics office in Burlington, Massachusetts. Philips Research's North American headquarters is in Cambridge, Massachusetts. In 2007, Philips entered into a definitive merger agreement with North American luminaires company Genlyte Group Incorporated, which provided the company with a leading position in the North American market for luminaires (also known as "lighting fixtures"), controls and related products for a wide variety of applications, including solid-state lighting. The company also acquired Respironics, which was a significant gain for its healthcare sector. On 21 February 2008, Philips completed the acquisition of Baltimore, Maryland-based VISICU. VISICU was the creator of the eICU concept of using telemedicine from a centralized facility to monitor and care for ICU patients. In April 2020, the United States Department of Health & Human Services (HHS) entered into a contract with Philips Respironics for 43,000 bundled Trilogy Evo Universal ventilator (EV300) hospital ventilators. This included the production and delivery of ventilators to the Strategic National Stockpile—about 156,000 by the end of August 2020 and 187,000 more by the end of 2020. During the COVID-19 pandemic, beginning in March 2020, in response to international demand, Philips increased production of the ventilators fourfold within five months. Production lines were added in the United States, with employees working around the clock in ventilator factories in western Pennsylvania and California, for example. In March 2020, ProPublica published a series of articles on the Philips ventilator contract as negotiated by trade adviser Peter Navarro. In response to the ProPublica series, in August, the United States House of Representatives undertook a "congressional investigation" into the acquisition of the Philips ventilators. The lawmakers' investigation found "evidence of fraud, waste and abuse": the deal negotiated by Navarro had resulted in the US government overpaying Philips by "hundreds of millions". Oceania Australia and New Zealand Philips Australia was founded in 1927 and is headquartered in North Ryde, New South Wales, and also manages the New Zealand operation from there. The company currently employs around 800 people. Regional sales and support offices are located in Melbourne, Brisbane, Adelaide, Perth and Auckland. Current activities include: Philips Healthcare (also responsible for New Zealand operations); Philips Lighting (also responsible for New Zealand operations); Philips Oral Healthcare; Philips Professional Dictation Solutions; Philips Professional Display Solutions; Philips AVENT Professional; Philips Consumer Lifestyle (also responsible for New Zealand operations); Philips Sleep & Respiratory Care (formerly Respironics), with its ever-increasing national network of Sleepeasy Centres; Philips Dynalite (lighting control systems, acquired in 2009, global design and manufacturing centre); and Philips Selecon NZ (lighting entertainment product design and manufacture). South America Brazil Philips do Brasil was founded in 1924 in Rio de Janeiro.
In 1929, Philips started to sell radio receivers. In the 1930s, Philips was making its light bulbs and radio receivers in Brazil. From 1939 to 1945, World War II forced the Brazilian branch of Philips to sell bicycles, refrigerators and insecticides. After the war, Philips expanded its industrial operations greatly in Brazil and was among the first groups to establish itself in the Manaus Free Zone. In the 1970s, Philips Records was a major player in Brazil's recording industry. Nowadays, Philips do Brasil is one of the largest foreign-owned companies in Brazil. Philips uses the brand Walita for domestic appliances in Brazil. Colour television Colour television was introduced in South America by then-CEO Cor Dillen. Former operations Philips subsidiary Philips-Duphar manufactured pharmaceuticals for human and veterinary use and products for crop protection. Duphar was sold to Solvay in 1990. In subsequent years, Solvay sold off all divisions to other companies (crop protection to UniRoyal, now Chemtura; the veterinary division to Fort Dodge, a division of Wyeth; and the pharmaceutical division to Abbott Laboratories). PolyGram, Philips' music, television and movies division, was sold to Seagram in 1998 and merged into Universal Music Group. Philips Records continues to operate as a record label of UMG, its name licensed from its former parent. In 1980 Philips acquired Marantz, a company renowned for high-end audio and video products, based at Kanagawa, Japan. In 2002 Marantz Japan merged with Denon to form D&M Holdings, and Philips sold its remaining stake in D&M Holdings in 2008. Origin, now part of Atos Origin, is a former division of Philips. ASM Lithography is a spin-off from a division of Philips. Hollandse Signaalapparaten was a manufacturer of military electronics. The business was sold to Thomson-CSF in 1990 and is now Thales Nederland. NXP Semiconductors, formerly known as Philips Semiconductors, was sold to a consortium of private equity investors in 2006. On 6 August 2010, NXP completed its IPO, with shares trading on NASDAQ. Ignis, of Comerio in the province of Varese, Italy, which produced washing machines, dishwashers and microwave ovens, was one of the leading companies in the domestic appliance market, holding a 38% share in 1960. In 1970, 50% of the company's capital was taken over by Philips, which acquired full control in 1972. In those years Ignis was, after Zanussi, the second-largest domestic appliance manufacturer, and in 1973 its factories employed over 10,000 people in Italy alone. With the transfer of ownership to the Dutch multinational, the company's corporate name was changed to "IRE SpA" (Industrie Riunite Eurodomestici). Thereafter Philips sold major household appliances (whitegoods) under the Philips name. After the Major Domestic Appliances division was sold to Whirlpool Corporation, the branding changed from Philips Whirlpool to Whirlpool Philips and finally to just Whirlpool. Whirlpool bought a 53% stake in Philips' major appliance operations to form Whirlpool International. Whirlpool bought Philips' remaining interest in Whirlpool International in 1991. Philips Cryogenics was split off in 1990 to form Stirling Cryogenics BV, Netherlands. This company is still active in the development and manufacturing of Stirling cryocoolers and cryogenic cooling systems. North American Philips distributed AKG Acoustics products under the AKG of America, Philips Audio/Video, Norelco and AKG Acoustics Inc.
branding until AKG set up its North American division in San Leandro, California, in 1985. (AKG's North American division has since moved to Northridge, California.) Polymer Vision was a Philips spin-off that manufactured a flexible e-ink display screen. The company was acquired by Taiwanese contract electronics manufacturer Wistron in 2009 and was shut down in 2012, after repeated failed attempts to find a potential buyer. Products Philips' core products are consumer electronics and electrical products (including small domestic appliances, shavers, beauty appliances, mother and childcare appliances, electric toothbrushes and coffee makers; products like smartphones, audio equipment, Blu-ray players, computer accessories and televisions are sold under license) and healthcare products (including CT scanners, ECG equipment, mammography equipment, monitoring equipment, MRI scanners, radiography equipment, resuscitation equipment, ultrasound equipment and X-ray equipment). In January 2020 Philips announced that it is looking to sell its domestic appliances division, which includes products like coffee machines, air purifiers and airfryers. Lighting products Professional indoor luminaires Professional outdoor luminaires Professional lamps Lighting controls and control systems Digital projection lights Horticulture lighting Solar LED lights Smart office lighting systems Smart retail lighting systems Smart city lighting systems Home lamps Home fixtures Home systems (branded as Philips Hue) Audio products Hi-fi systems Wireless speakers Radio systems Docking stations Headphones DJ mixers Alarm clocks Healthcare products Philips healthcare products include: Clinical informatics Cardiology informatics (IntelliSpace Cardiovascular, Xcelera) Enterprise Imaging Informatics (IntelliSpace PACS, XIRIS) IntelliSpace family of solutions Imaging systems Cardio/Vascular X-Ray Wires and Catheters (Verrata) Computed tomography (CT) Fluoroscopy Magnetic resonance imaging (MRI) Mammography Mobile C-Arms Nuclear medicine PET (Positron emission tomography) PET/CT Radiography Radiation oncology systems Ultrasound Diagnostic monitoring Diagnostic ECG Defibrillators Accessories Equipment Software Consumer Philips AVENT Patient care and clinical informatics Anesthetic gas monitoring Blood pressure Capnography D.M.E. Diagnostic sleep testing ECG Enterprise patient informatics solutions OB TraceVue Compurecord ICIP eICU program Emergin Hemodynamic IntelliSpace Cardiovascular IntelliSpace PACS IntelliSpace portal Multi-measurement servers Neurophedeoiles Pulse oximetry Tasy Temperature Transcutaneous gases Ventilation ViewForum Xcelera XIRIS Xper Information Management Coat of arms/logotype Slogans Simply Years Ahead (1976–1985) Let's Make Things Better (1995–2004) Mari Jadikan Segalanya Menjadi Lebih Baik ("Let's Make Things Better", Indonesia only) (1995–2004) Sense & Simplicity (2004–2013) Innovation & You (2013–present) Sponsorships In 1913, in celebration of the 100th anniversary of the liberation of the Netherlands, Philips founded Philips Sports Vereniging (Philips Sports Club, now commonly known as PSV). The club is active in numerous sports but is now best known for its football team, PSV Eindhoven, and its swimming team. Philips owns the naming rights to Philips Stadium in Eindhoven, which is the home ground of PSV Eindhoven. Outside of the Netherlands, Philips sponsors and has sponsored numerous sports clubs, sports facilities and events. In November 2008, Philips renewed and extended its F1 partnership with AT&T Williams.
Philips owns the naming rights to the Philips Championship, the premier basketball league in Australia, traditionally known as the National Basketball League. From 1988 to 1993, Philips was the principal sponsor of the Australian rugby league team The Balmain Tigers and Indonesian football club side Persiba Balikpapan. From 1998 to 2000, Philips sponsored the Winston Cup No. 7 entry for Geoff Bodine Racing, later Ultra Motorsports, for drivers Geoff Bodine and Michael Waltrip. From 1999 to 2018, Philips held the naming rights to Philips Arena in Atlanta, home of the Atlanta Hawks of the National Basketball Association and former home of the defunct Atlanta Thrashers of the National Hockey League. Outside of sports, Philips sponsors the international Philips Monsters of Rock festival. Environmental record Circular economy Philips and its CEO, Frans van Houten, hold several global leadership positions in advancing the circular economy, including as a founding member and co-chair of the board of directors for the Platform for Accelerating the Circular Economy (PACE), applying circular approaches in its capital equipment business, and as a global partner of the Ellen MacArthur Foundation. Green initiatives Philips also runs the EcoVision initiative, which commits to a number of environmentally positive improvements, focusing on energy efficiency. Also, Philips marks its "green" products with
Cretaceous. Among the well-known members of this group are perch and darters (Percidae), sea bass and groupers (Serranidae). Characteristics The dorsal and anal fins are divided into anterior spiny and posterior soft-rayed portions, which may be partially or completely separated. The pelvic fins usually have one spine and up to five soft rays, positioned unusually far forward under the chin or under the belly. Scales are usually ctenoid (rough to the touch), although sometimes they are cycloid (smooth to the touch) or otherwise modified. Taxonomy Classification of this group is controversial. As traditionally defined before the introduction of cladistics, the Perciformes are almost certainly paraphyletic. Other orders that should possibly be included as suborders are the Scorpaeniformes, Tetraodontiformes, and Pleuronectiformes. Of the presently recognized suborders, several may be
brown. It has a flavor somewhat similar to both banana and mango, varying significantly by cultivar, and has more protein than most fruits. Species and their distributions Accepted species Asimina angustifolia Raf. 1840 not A. Gray 1886; Florida, Georgia, Alabama, South Carolina Not a valid species Asimina incana (W. Bartram) Exell - Woolly pawpaw. Florida and Georgia. (Annona incana W. Bartram) Asimina longifolia Raf. - Slimleaf pawpaw. Florida, Georgia, and Alabama. Asimina manasota DeLaney - Manasota pawpaw. Native to two counties in Florida (Manatee and Sarasota); first described in 2010 Not a valid species Asimina pulchella (Small) Rehder & Dayton - White Squirrel Banana. Endemic to 3 counties in Florida. (endangered) Asimina rugelii B.L. Rob - Yellow Squirrel Banana. Endemic to Volusia County, Florida (endangered) Asimina obovata (Willd.) Nash (Annona obovata Willd.) - Flag-pawpaw or Bigflower pawpaw. Florida Asimina parviflora (Michx.) Dunal - Smallflower pawpaw. Southern states from Texas to Virginia. Asimina pygmaea (W. Bartram) Dunal - Dwarf pawpaw. Florida and Georgia. Asimina reticulata Shuttlw. ex Chapman - Netted pawpaw. Florida and Georgia. Asimina spatulata (Kral) D.B.Ward - Slim leaf pawpaw. Florida and Alabama Not a valid species Asimina tetramera Small - Fourpetal pawpaw. Florida (endangered) Asimina triloba (L.) Dunal - Common pawpaw. Extreme southern Ontario, Canada, and the eastern United States from New York west to southeast Nebraska, and south to northern Florida and eastern Texas. (Annona triloba L.) Ecology The common pawpaw is native to shady, rich bottom lands, where it often forms a dense undergrowth in the forest, often appearing as a patch or thicket of individual small slender trees. Pawpaw flowers are insect-pollinated, but fruit production is limited since few if any pollinators are attracted to the flower's faint, or sometimes non-existent, scent. The flowers produce an odor similar to that of rotting meat to attract blowflies or carrion beetles for cross pollination. Other insects that are attracted to pawpaw plants include scavenging fruit flies, carrion flies and beetles. Because of difficult pollination, some believe the flowers are self-incompatible. Pawpaw fruit may be eaten by foxes, opossums, squirrels and raccoons. Pawpaw leaves and twigs are seldom consumed by rabbits or deer. The leaves, twigs, and bark of the common pawpaw tree contain natural insecticides known as acetogenins. Larvae of the zebra swallowtail butterfly feed exclusively on young leaves of the various pawpaw species, but never occur in great numbers on the plants. The pawpaw is considered an evolutionary anachronism, where a now-extinct evolutionary partner, such as a Pleistocene megafauna species, formerly consumed the fruit and assisted in seed dispersal. Cultivation and uses Wild-collected fruits of the common pawpaw (Asimina triloba) have long been a favorite treat throughout the tree's extensive native range in eastern North America. Fresh pawpaw fruits are commonly eaten raw; however, they do not store or ship well unless frozen. The fruit pulp is also often used locally in baked dessert recipes, with pawpaw often substituted in many banana-based recipes. Pawpaws have never
T. B. Barratt was influenced by Seymour during a tour of the United States. By December 1906, he had returned to Europe and is credited with beginning the Pentecostal movement in Sweden, Norway, Denmark, Germany, France and England. A notable convert of Barratt was Alexander Boddy, the Anglican vicar of All Saints' in Sunderland, England, who became a founder of British Pentecostalism. Other important converts of Barratt were German minister Jonathan Paul who founded the first German Pentecostal denomination (the Mülheim Association) and Lewi Pethrus, the Swedish Baptist minister who founded the Swedish Pentecostal movement. Through Durham's ministry, Italian immigrant Luigi Francescon received the Pentecostal experience in 1907 and established Italian Pentecostal congregations in the US, Argentina (Christian Assembly in Argentina), and Brazil (Christian Congregation of Brazil). In 1908, Giacomo Lombardi led the first Pentecostal services in Italy. In November 1910, two Swedish Pentecostal missionaries arrived in Belem, Brazil and established what would become the Assembleias de Deus (Assemblies of God of Brazil). In 1908, John G. Lake, a follower of Alexander Dowie who had experienced Pentecostal Spirit baptism, traveled to South Africa and founded what would become the Apostolic Faith Mission of South Africa and the Zion Christian Church. As a result of this missionary zeal, practically all Pentecostal denominations today trace their historical roots to the Azusa Street Revival. The first generation of Pentecostal believers faced immense criticism and ostracism from other Christians, most vehemently from the Holiness movement from which they originated. Alma White, leader of the Pillar of Fire Church, wrote a book against the movement titled Demons and Tongues in 1910. She called Pentecostal tongues "satanic gibberish" and Pentecostal services "the climax of demon worship". Famous holiness preacher W. B. Godbey characterized those at Azusa Street as "Satan's preachers, jugglers, necromancers, enchanters, magicians, and all sorts of mendicants". To Dr. G. Campbell Morgan, Pentecostalism was "the last vomit of Satan", while Dr. R. A. Torrey thought it was "emphatically not of God, and founded by a Sodomite". The Pentecostal Church of the Nazarene, one of the largest holiness groups, was strongly opposed to the new Pentecostal movement. To avoid confusion, the church changed its name in 1919 to the Church of the Nazarene. A. B. Simpson's Christian and Missionary Alliance negotiated a compromise position unique for the time. Simpson believed that Pentecostal tongues speaking was a legitimate manifestation of the Holy Spirit, but he did not believe it was a necessary evidence of Spirit baptism. This view on speaking in tongues ultimately led to what became known as the "Alliance position" articulated by A. W. Tozer as "seek not—forbid not". Early controversies The first Pentecostal converts were mainly derived from the Holiness movement and adhered to a Wesleyan understanding of sanctification as a definite, instantaneous experience and second work of grace. Problems with this view arose when large numbers of converts entered the movement from non-Wesleyan backgrounds, especially from Baptist churches. In 1910, William Durham of Chicago first articulated the Finished Work, a doctrine which located sanctification at the moment of salvation and held that after conversion the Christian would progressively grow in grace in a lifelong process. 
This teaching polarized the Pentecostal movement into two factions: Holiness Pentecostalism and Finished Work Pentecostalism. The Wesleyan doctrine was strongest in the Southern denominations, such as the Church of God (Cleveland), Church of God in Christ, and the Pentecostal Holiness Church; these bodies are classed as Holiness Pentecostal denominations. The Finished Work, however, would ultimately gain ascendancy among Pentecostals, in denominations such as the Assemblies of God, which was the first Finished Work Pentecostal denomination. After 1911, most new Pentecostal denominations would adhere to Finished Work sanctification. In 1914, a group of 300 predominantly white Pentecostal ministers and laymen from all regions of the United States gathered in Hot Springs, Arkansas, to create a new, national Pentecostal fellowship—the General Council of the Assemblies of God. By 1911, many of these white ministers were distancing themselves from an existing arrangement under an African-American leader. Many of these white ministers had been licensed by the African-American leader C. H. Mason under the auspices of the Church of God in Christ, at the time one of the few legally chartered Pentecostal organizations credentialing and licensing ordained Pentecostal clergy. To further such distance, Bishop Mason and other African-American Pentecostal leaders were not invited to the initial 1914 fellowship of Pentecostal ministers. These predominantly white ministers adopted a congregational polity, whereas the COGIC and other Southern groups remained largely episcopal and rejected a Finished Work understanding of sanctification. Thus, the creation of the Assemblies of God marked an official end of Pentecostal doctrinal unity and racial integration. Among these Finished Work Pentecostals, the new Assemblies of God would soon face a "new issue" which first emerged at a 1913 camp meeting. During a baptism service, the speaker, R. E. McAlister, mentioned that the Apostles baptized converts once in the name of Jesus Christ, and that the words "Father, Son, and Holy Ghost" were never used in baptism. This inspired Frank Ewart, who claimed to have received a divine prophecy revealing a nontrinitarian conception of God. Ewart believed that there was only one personality in the Godhead—Jesus Christ. The terms "Father" and "Holy Ghost" were titles designating different aspects of Christ. Those who had been baptized in the Trinitarian fashion needed to submit to rebaptism in Jesus' name. Furthermore, Ewart believed that Jesus' name baptism and the gift of tongues were essential for salvation. Ewart and those who adopted his belief, which is known as Oneness Pentecostalism, called themselves "oneness" or "Jesus' Name" Pentecostals, but their opponents called them "Jesus Only". Amid great controversy, the Assemblies of God rejected the Oneness teaching, and many of its churches and pastors were forced to withdraw from the denomination in 1916. They organized their own Oneness groups. Most of these joined Garfield T. Haywood, an African-American preacher from Indianapolis, to form the Pentecostal Assemblies of the World. This church maintained an interracial identity until 1924, when the white ministers withdrew to form the Pentecostal Church, Incorporated. This church later merged with another group, forming the United Pentecostal Church International. This controversy among the Finished Work Pentecostals caused Holiness Pentecostals to further distance themselves from Finished Work Pentecostals, whom they viewed as heretical.
1930–59 While Pentecostals shared many basic assumptions with conservative Protestants, the earliest Pentecostals were rejected by Fundamentalist Christians who adhered to cessationism. In 1928, the World Christian Fundamentals Association labeled Pentecostalism "fanatical" and "unscriptural". By the early 1940s, this rejection of Pentecostals was giving way to a new cooperation between them and leaders of the "new evangelicalism", and American Pentecostals were involved in the founding of the 1942 National Association of Evangelicals. Pentecostal denominations also began to interact with each other on both national and international levels through the Pentecostal World Fellowship, which was founded in 1947. During the war, some Pentecostal churches in Europe, especially in Italy and Germany, were also victims of the Holocaust. Because of their speaking in tongues, their members were considered mentally ill, and many pastors were sent either to confinement or to concentration camps. Though Pentecostals began to find acceptance among evangelicals in the 1940s, the previous decade was widely viewed as a time of spiritual dryness, when healings and other miraculous phenomena were perceived as being less prevalent than in earlier decades of the movement. It was in this environment that the Latter Rain Movement, the most important controversy to affect Pentecostalism since World War II, began in North America and spread around the world in the late 1940s. Latter Rain leaders taught the restoration of the fivefold ministry led by apostles. These apostles were believed capable of imparting spiritual gifts through the laying on of hands. There were prominent participants of the early Pentecostal revivals, such as Stanley Frodsham and Lewi Pethrus, who endorsed the movement, citing similarities to early Pentecostalism. However, Pentecostal denominations were critical of the movement and condemned many of its practices as unscriptural. One reason for the conflict with the denominations was the sectarianism of Latter Rain adherents. Many autonomous churches were birthed out of the revival. A simultaneous development within Pentecostalism was the postwar Healing Revival. Led by healing evangelists William Branham, Oral Roberts, Gordon Lindsay, and T. L. Osborn, the Healing Revival developed a following among non-Pentecostals as well as Pentecostals. Many of these non-Pentecostals were baptized in the Holy Spirit through these ministries. The Latter Rain and the Healing Revival influenced many leaders of the charismatic movement of the 1960s and 1970s. 1960–present Before the 1960s, most non-Pentecostal Christians who experienced the Pentecostal baptism in the Holy Spirit typically kept their experience a private matter or joined a Pentecostal church afterward. The 1960s saw a new pattern develop where large numbers of Spirit baptized Christians from mainline churches in the US, Europe, and other parts of the world chose to remain and work for spiritual renewal within their traditional churches. This initially became known as New or Neo-Pentecostalism (in contrast to the older classical Pentecostalism) but eventually became known as the Charismatic Movement.
While classical Pentecostals were cautiously supportive of the Charismatic Movement, the failure of Charismatics to embrace traditional Pentecostal teachings, such as the prohibition of dancing, abstinence from alcohol and other drugs such as tobacco, as well as restrictions on dress and appearance following the doctrine of outward holiness, initiated an identity crisis for classical Pentecostals, who were forced to reexamine long-held assumptions about what it meant to be Spirit filled. The liberalizing influence of the Charismatic Movement on classical Pentecostalism can be seen in the disappearance of many of these taboos since the 1960s. Because of this, the cultural differences between classical Pentecostals and charismatics have lessened over time. The global renewal movements manifest many of these tensions as inherent characteristics of Pentecostalism and as representative of the character of global Christianity. Beliefs Pentecostalism is an evangelical faith, emphasizing the reliability of the Bible and the need for the transformation of an individual's life through faith in Jesus. Like other evangelicals, Pentecostals generally adhere to the Bible's divine inspiration and inerrancy—the belief that the Bible, in the original manuscripts in which it was written, is without error. Pentecostals emphasize the teaching of the "full gospel" or "foursquare gospel". The term foursquare refers to the four fundamental beliefs of Pentecostalism: Jesus saves according to John 3:16; baptizes with the Holy Spirit according to Acts 2:4; heals bodily according to James 5:15; and is coming again to receive those who are saved according to 1 Thessalonians 4:16–17. Salvation The central belief of classical Pentecostalism is that through the death, burial, and resurrection of Jesus Christ, sins can be forgiven and humanity reconciled with God. This is the Gospel or "good news". The fundamental requirement of Pentecostalism is that one be born again. The new birth is received by the grace of God through faith in Christ as Lord and Savior. In being born again, the believer is regenerated, justified, adopted into the family of God, and the Holy Spirit's work of sanctification is initiated. Classical Pentecostal soteriology is generally Arminian rather than Calvinist. The security of the believer is a doctrine held within Pentecostalism; nevertheless, this security is conditional upon continual faith and repentance. Pentecostals believe in both a literal heaven and hell, the former for those who have accepted God's gift of salvation and the latter for those who have rejected it. For most Pentecostals there is no other requirement to receive salvation. Baptism with the Holy Spirit and speaking in tongues are not generally required, though Pentecostal converts are usually encouraged to seek these experiences. A notable exception is Jesus' Name Pentecostalism, most adherents of which believe both water baptism and Spirit baptism are integral components of salvation. Baptism with the Holy Spirit Pentecostals identify three distinct uses of the word "baptism" in the New Testament: Baptism into the body of Christ: This refers to salvation. Every believer in Christ is made a part of his body, the Church, through baptism. The Holy Spirit is the agent, and the body of Christ is the medium. Water baptism: Symbolic of dying to the world and living in Christ, water baptism is an outward symbolic expression of that which has already been accomplished by the Holy Spirit, namely baptism into the body of Christ.
Baptism with the Holy Spirit: This is an experience distinct from baptism into the body of Christ. In this baptism, Christ is the agent and the Holy Spirit is the medium. While the figure of Jesus Christ and his redemptive work are at the center of Pentecostal theology, that redemptive work is believed to provide for a fullness of the Holy Spirit of which believers in Christ may take advantage. The majority of Pentecostals believe that at the moment a person is born again, the new believer has the presence (indwelling) of the Holy Spirit. While the Spirit dwells in every Christian, Pentecostals believe that all Christians should seek to be filled with him. The Spirit's "filling", "falling upon", "coming upon", or being "poured out upon" believers is called the baptism with the Holy Spirit. Pentecostals define it as a definite experience occurring after salvation whereby the Holy Spirit comes upon the believer to anoint and empower them for special service. It has also been described as "a baptism into the love of God". The main purpose of the experience is to grant power for Christian service. Other purposes include power for spiritual warfare (the Christian struggles against spiritual enemies and thus requires spiritual power), power for overflow (the believer's experience of the presence and power of God in their life flows out into the lives of others), and power for ability (to follow divine direction, to face persecution, to exercise spiritual gifts for the edification of the church, etc.). Pentecostals believe that the baptism with the Holy Spirit is available to all Christians. Repentance from sin and being born again are fundamental requirements to receive it. There must also be in the believer a deep conviction of needing more of God in their life, and a measure of consecration by which the believer yields themself to the will of God. Citing instances in the Book of Acts where believers were Spirit baptized before they were baptized with water, most Pentecostals believe a Christian need not have been baptized in water to receive Spirit baptism. However, Pentecostals do believe that the biblical pattern is "repentance, regeneration, water baptism, and then the baptism with the Holy Ghost". There are Pentecostal believers who have claimed to receive their baptism with the Holy Spirit while being water baptized. It is received by having faith in God's promise to fill the believer and in yielding the entire being to Christ. Certain conditions, if present in a believer's life, could cause delay in receiving Spirit baptism, such as "weak faith, unholy living, imperfect consecration, and egocentric motives". In the absence of these, Pentecostals teach that seekers should maintain a persistent faith in the knowledge that God will fulfill his promise. For Pentecostals, there is no prescribed manner in which a believer will be filled with the Spirit. It could be expected or unexpected, during public or private prayer. Pentecostals expect certain results following baptism with the Holy Spirit. Some of these are immediate while others are enduring or permanent. Most Pentecostal denominations teach that speaking in tongues is an immediate or initial physical evidence that one has received the experience. Some teach that any of the gifts of the Spirit can be evidence of having received Spirit baptism. Other immediate evidences include giving God praise, having joy, and desiring to testify about Jesus. 
Enduring or permanent results in the believer's life include Christ glorified and revealed in a greater way, a "deeper passion for souls", greater power to witness to nonbelievers, a more effective prayer life, greater love for and insight into the Bible, and the manifestation of the gifts of the Spirit. Holiness Pentecostals, with their background in the Wesleyan-Holiness movement, historically teach that baptism with the Holy Spirit, as evidenced by glossolalia, is the third work of grace, which follows the new birth (first work of grace) and entire sanctification (second work of grace). While the baptism with the Holy Spirit is a definite experience in a believer's life, Pentecostals view it as just the beginning of living a Spirit-filled life. Pentecostal teaching stresses the importance of continually being filled with the Spirit. There is only one baptism with the Spirit, but there should be many infillings with the Spirit throughout the believer's life. Divine healing Pentecostalism is a holistic faith, and the belief that Jesus is Healer is one quarter of the full gospel. Pentecostals cite four major reasons for believing in divine healing: 1) it is reported in the Bible, 2) Jesus' healing ministry is included in his atonement (thus divine healing is part of salvation), 3) "the whole gospel is for the whole person"—spirit, soul, and body, 4) sickness is a consequence of the Fall of Man and salvation is ultimately the restoration of the fallen world. In the words of Pentecostal scholar Vernon L. Purdy, "Because sin leads to human suffering, it was only natural for the Early Church to understand the ministry of Christ as the alleviation of human suffering, since he was God's answer to sin ... The restoration of fellowship with God is the most important thing, but this restoration not only results in spiritual healing but many times in physical healing as well." In the book In Pursuit of Wholeness: Experiencing God's Salvation for the Total Person, Pentecostal writer and Church historian Wilfred Graves, Jr. describes the healing of the body as a physical expression of salvation. For Pentecostals, spiritual and physical healing serves as a reminder and testimony to Christ's future return when his people will be completely delivered from all the consequences of the fall. However, not everyone receives healing when they pray. It is God in his sovereign wisdom who either grants or withholds healing. Common reasons that are given in answer to the question as to why all are not healed include: God teaches through suffering, healing is not always immediate, lack of faith on the part of the person needing healing, and personal sin in one's life (however, this does not mean that all illness is caused by personal sin). Regarding healing and prayer Purdy states: Pentecostals believe that prayer and faith are central in receiving healing. Pentecostals look to scriptures such as James 5:13–16 for direction regarding healing prayer. One can pray for one's own healing (verse 13) and for the healing of others (verse 16); no special gift or clerical status is necessary. Verses 14–16 supply the framework for congregational healing prayer. The sick person expresses their faith by calling for the elders of the church who pray over and anoint the sick with olive oil. The oil is a symbol of the Holy Spirit. Besides prayer, there are other ways in which Pentecostals believe healing can be received. One way is based on Mark 16:17–18 and involves believers laying hands on the sick. 
This is done in imitation of Jesus who often healed in this manner. Another method that is found in some Pentecostal churches is based on the account in Acts 19:11–12 where people were healed when given handkerchiefs or aprons worn by the Apostle Paul. This practice is described by Duffield and Van Cleave in Foundations of Pentecostal Theology: During the initial decades of the movement, Pentecostals thought it was sinful to take medicine or receive care from doctors. Over time, Pentecostals moderated their views concerning medicine and doctor visits; however, a minority of Pentecostal churches continues to rely exclusively on prayer and divine healing. For example, doctors in the United Kingdom reported that a minority of Pentecostal HIV patients were encouraged to stop taking their medicines and parents were told to stop giving medicine to their children, trends that placed lives at risk. Eschatology The last element of the gospel is that Jesus is the "Soon Coming King". For Pentecostals, "every moment is eschatological" since at any time Christ may return. This "personal and imminent" Second Coming is for Pentecostals the motivation for practical Christian living including: personal holiness, meeting together for worship, faithful Christian service, and evangelism (both personal and worldwide). Globally, Pentecostal attitudes to the End Times range from enthusiastic participation in the prophecy subculture to a complete lack of interest through to the more recent, optimistic belief in the coming restoration of God's kingdom. Historically, however, they have been premillennial dispensationalists believing in a pretribulation rapture. Pre-tribulation rapture theology was popularized extensively in the 1830s by John Nelson Darby, and further popularized in the United States in the early 20th century by the wide circulation of the Scofield Reference Bible. Spiritual gifts Pentecostals are continuationists, meaning they believe that all of the spiritual gifts, including the miraculous or "sign gifts", found in 1 Corinthians 12:4–11, 12:27–31, Romans 12:3–8, and Ephesians 4:7–16 continue to operate within the Church in the present time. Pentecostals place the gifts of the Spirit in context with the fruit of the Spirit. The fruit of the Spirit is the result of the new birth and continuing to abide in Christ. It is by the fruit exhibited that spiritual character is assessed. Spiritual gifts are received as a result of the baptism with the Holy Spirit. As gifts freely given by the Holy Spirit, they cannot be earned or merited, and they are not appropriate criteria with which to evaluate one's spiritual life or maturity. Pentecostals see in the biblical writings of Paul an emphasis on having both character and power, exercising the gifts in love. Just as fruit should be evident in the life of every Christian, Pentecostals believe that every Spirit-filled believer is given some capacity for the manifestation of the Spirit. It is important to note that the exercise of a gift is a manifestation of the Spirit, not of the gifted person, and though the gifts operate through people, they are primarily gifts given to the Church. They are valuable only when they minister spiritual profit and edification to the body of Christ. Pentecostal writers point out that the lists of spiritual gifts in the New Testament do not seem to be exhaustive. It is generally believed that there are as many gifts as there are useful ministries and functions in the Church. 
A spiritual gift is often exercised in partnership with another gift. For example, in a Pentecostal church service, the gift of tongues might be exercised, followed by the operation of the gift of interpretation. According to Pentecostals, all manifestations of the Spirit are to be judged by the church. This is made possible, in part, by the gift of discerning of spirits, which is the capacity for discerning the source of a spiritual manifestation—whether from the Holy Spirit, an evil spirit, or from the human spirit. While Pentecostals believe in the current operation of all the spiritual gifts within the church, their teaching on some of these gifts has generated more controversy and interest than others. There are different ways in which the gifts have been grouped. W. R. Jones suggests three categories: illumination (word of wisdom, word of knowledge, discerning of spirits), action (faith, working of miracles, and gifts of healings), and communication (prophecy, tongues, and interpretation of tongues). Duffield and Van Cleave use two categories: the vocal and the power gifts. Vocal gifts The gifts of prophecy, tongues, interpretation of tongues, and words of wisdom and knowledge are called the vocal gifts. Pentecostals look to 1 Corinthians 14 for instructions on the proper use of the spiritual gifts, especially the vocal ones. Pentecostals believe that prophecy is the vocal gift of preference, a view derived from 1 Corinthians 14. Some teach that the gift of tongues is equal to the gift of prophecy when tongues are interpreted. Prophetic and glossolalic utterances are not to replace the preaching of the Word of God nor to be considered as equal to or superseding the written Word of God, which is the final authority for determining teaching and doctrine. Word of wisdom and word of knowledge Pentecostals understand the word of wisdom and the word of knowledge to be supernatural revelations of wisdom and knowledge by the Holy Spirit. The word of wisdom is defined as a revelation of the Holy Spirit that applies scriptural wisdom to a specific situation that a Christian community faces. The word of knowledge is often defined as the ability of one person to know what God is currently doing or intends to do in the life of another person. Prophecy Pentecostals agree with the Protestant principle of sola Scriptura. The Bible is the "all sufficient rule for faith and practice"; it is "fixed, finished, and objective revelation". Alongside this high regard for the authority of scripture is a belief that the gift of prophecy continues to operate within the Church. Pentecostal theologians Duffield and Van Cleave described the gift of prophecy in the following manner: "Normally, in the operation of the gift of prophecy, the Spirit heavily anoints the believer to speak forth to the body not premeditated words, but words the Spirit supplies spontaneously in order to uplift and encourage, incite to faithful obedience and service, and to bring comfort and consolation." Any Spirit-filled Christian, according to Pentecostal theology, has the potential, as with all the gifts, to prophesy. Sometimes, prophecy can overlap with preaching "where great unpremeditated truth or application is provided by the Spirit, or where special revelation is given beforehand in prayer and is empowered in the delivery". While a prophetic utterance at times might foretell future events, this is not the primary purpose of Pentecostal prophecy and is never to be used for personal guidance. For Pentecostals, prophetic utterances are fallible, i.e. subject to error. Pentecostals teach that believers must discern whether the utterance has edifying value for themselves and the local church. Because prophecies are subject to the judgement and discernment of other Christians, most Pentecostals teach that prophetic utterances should never be spoken in the first person (e.g. "I, the Lord") but always in the third person (e.g. "Thus saith the Lord" or "The Lord would have..."). Tongues and interpretation A Pentecostal believer in a spiritual experience may vocalize fluent, unintelligible utterances (glossolalia) or articulate a natural language previously unknown to them (xenoglossy). Commonly termed "speaking in tongues", this vocal phenomenon is believed by Pentecostals to include an endless variety
Plants Under Domestication, intending it to fill what he perceived as a major gap in evolutionary theory at the time. The etymology of the word comes from the Greek words pan (a prefix meaning "whole", "encompassing") and genesis ("birth") or genos ("origin"). Pangenesis mirrored ideas originally formulated by Hippocrates and other pre-Darwinian scientists, but used new concepts such as cell theory, explaining cell development as beginning with gemmules, which were specified to be necessary for the occurrence of new growths in an organism, both in initial development and regeneration. It also accounted for regeneration and the Lamarckian concept of the inheritance of acquired characteristics, as a body part altered by the environment would produce altered gemmules. This made pangenesis popular among the neo-Lamarckian school of evolutionary thought. This hypothesis was made effectively obsolete after the 1900 rediscovery among biologists of Gregor Mendel's theory of the particulate nature of inheritance. Early history Pangenesis was similar to ideas put forth by Hippocrates, Democritus, and other pre-Darwinian scientists in proposing that the whole of parental organisms participate in heredity (thus the prefix pan). Darwin wrote that Hippocrates' pangenesis was "almost identical with mine—merely a change of terms—and an application of them to classes of facts necessarily unknown to the old philosopher." The historian of science Conway Zirkle demonstrated that the idea of inheritance of acquired characteristics had become fully accepted by the 16th century and remained immensely popular through to the time of Lamarck's work, at which point it began to draw more criticism due to lack of hard evidence. He also stated that pangenesis was the only scientific explanation ever offered for this concept, developing from Hippocrates' belief that "the semen was derived from the whole body." In the 13th century, pangenesis was commonly accepted on the principle that semen was a refined version of food unused by the body, which eventually translated into the widespread use of pangenetic principles in 15th- and 16th-century medical literature, especially in gynecology. Later important pre-Darwinian applications of the idea included hypotheses about the origin of the differentiation of races. A theory put forth by Pierre Louis Maupertuis in 1745 called for particles from both parents governing the attributes of the child, although some historians have called his remarks on the subject cursory and vague. In 1749, the French naturalist Georges-Louis Leclerc, Comte de Buffon, developed a hypothetical system of heredity much like Darwin's pangenesis, wherein 'organic molecules' were transferred to offspring during reproduction and stored in the body during development. Commenting on Buffon's views, Darwin stated, "If Buffon had assumed that his organic molecules had been formed by each separate unit throughout the body, his view and mine would have been very closely similar." In 1801, Erasmus Darwin advocated a hypothesis of pangenesis in the third edition of his book Zoonomia. In 1809, Jean-Baptiste Lamarck, in his Philosophie Zoologique, put forth evidence for the idea that characteristics acquired during the lifetime of an organism, for example through the effects of the environment, may be passed on to the offspring. Charles Darwin first had significant contact with Lamarckism during his time at the University of Edinburgh Medical School in the late 1820s, both through Robert Edmond Grant, whom he assisted in research, and in Erasmus's journals. Darwin's first known writings on the topic of Lamarckian ideas as they related to inheritance are found in a notebook he opened in 1837, also entitled Zoonomia. Historian Jonathan Hodge states that the theory of pangenesis itself first appeared in Darwin's notebooks in 1841. In 1861, the Irish physician Henry Freke developed a variant of pangenesis in his book Origin of Species by Means of Organic Affinity. Freke proposed that all life was developed from microscopic organic agents he named granules, which existed as 'distinct species of organizing matter' and would develop into different biological structures. Four years before the publication of Variation, in his 1864 book Principles of Biology, Herbert Spencer proposed a theory of "physiological units" similar to Darwin's gemmules, which likewise were said to be related to specific body parts and responsible for the transmission of characteristics of those body parts to offspring.
He supported the Lamarckian idea of transmission of acquired characteristics. Darwin had debated whether to publish a theory of heredity for an extended period of time due to its highly speculative nature. He decided to include pangenesis in Variation after sending a 30-page manuscript to his close friend and supporter Thomas Huxley in May 1865, which met with significant criticism from Huxley that made Darwin even more hesitant. However, Huxley eventually advised Darwin to publish, writing: "Somebody rummaging among your papers half a century hence will find Pangenesis & say 'See this wonderful anticipation of our modern Theories—and that stupid ass, Huxley, prevented his publishing them'". Darwin's initial version of pangenesis appeared in the first edition of Variation in 1868, and was later reworked for the publication of a second edition in 1875. Theory Darwin Darwin's pangenesis theory attempted to explain the process of sexual reproduction, inheritance of traits, and complex developmental phenomena such as cellular regeneration in a unified mechanistic structure. Longshan Liu wrote that in modern terms, pangenesis deals with issues of "dominance inheritance, graft hybridization, reversion, xenia, telegony, the inheritance of acquired characters, regeneration and many groups of facts pertaining to variation, inheritance and development." Mechanistically, Darwin proposed pangenesis to occur through the transfer of organic particles which he named 'gemmules.' Gemmules, which he also sometimes referred to as pangenes, granules, or germs, were supposed to be shed by the organs of the body and carried in the bloodstream to the reproductive organs, where they accumulated in the germ cells or gametes. Their accumulation was thought to occur by some sort of a 'mutual affinity.' Each gemmule was said to be specifically related to a certain body part; as described, they did not contain information about the entire organism. The different types were assumed to be dispersed through the whole body, and capable of self-replication given 'proper nutriment'. When passed on to offspring via the reproductive process, gemmules were thought to be responsible for developing into each part of an organism and expressing characteristics inherited from both parents. Darwin thought this to occur in a literal sense: he explained cell proliferation as progressing by gemmules binding to more developed cells of their same character and maturing. In this sense, the uniqueness of each individual would be due to their unique mixture of their parents' gemmules, and therefore characters. Similarity to one parent over the other could be explained by a quantitative superiority of one parent's gemmules. Yongshen Lu points out that Darwin knew of cells' ability to multiply by self-division, so it is unclear how Darwin supposed the two proliferation mechanisms to relate to each other. He did clarify in a later statement that he had always supposed gemmules to bind only to and proliferate from developing cells, not mature ones. In a letter to J. D. Hooker in 1870, Darwin hypothesized that gemmules might be able to survive and multiply outside of the body. Some gemmules were thought to remain dormant for generations, whereas others were routinely expressed by all offspring. Every child was built up from selective expression of the mixture of the parents' and grandparents' gemmules coming from either side.
Darwin likened this to gardening: a flowerbed could be sprinkled with seeds "most of which soon germinate, some lie for a period dormant, whilst others perish." He did not claim gemmules were in the blood, although his theory was often interpreted in this way. Responding to Fleeming Jenkin's review of On the Origin of Species, he argued that pangenesis would permit the preservation of some favourable variations in a population so that they would not die out through blending. Darwin thought that environmental effects that caused altered characteristics would lead to altered gemmules for the affected body part. The altered gemmules would then have a chance of being transferred to offspring, since they were assumed to be produced throughout an organism's life. Thus, pangenesis theory allowed for the Lamarckian idea of transmission of characteristics acquired
Plants Under Domestication, intending it to fill what he perceived as a major gap in evolutionary theory at the time. The etymology of the word comes from the Greek words pan (a prefix meaning "whole", "encompassing") and genesis ("birth") or genos ("origin"). Pangenesis mirrored ideas originally formulated by Hippocrates and other pre-Darwinian scientists, but using new concepts such as cell theory, explaining cell development as beginning with gemmules which were specified to be necessary for the occurrence of new growths in an organism, both in initial development and regeneration. It also accounted for regeneration and the Lamarckian concept of the inheritance of acquired characteristics, as a body part altered by the environment would produce altered gemmules. This made Pangenesis popular among the neo-Lamarckian school of evolutionary thought. This hypothesis was made effectively obsolete after the 1900 rediscovery among biologists of Gregor Mendel's theory of the particulate nature of inheritance. Early history Pangenesis was similar to ideas put forth by Hippocrates, Democritus and other pre-Darwinian scientists in proposing that the whole of parental organisms participate in heredity (thus the prefix pan). Darwin wrote that Hippocrates' pangenesis was "almost identical with mine—merely a change of terms—and an application of them to classes of facts necessarily unknown to the old philosopher." The historian of science Conway Zirkle wrote that: Zirkle demonstrated that the idea of inheritance of acquired characteristics had become fully accepted by the 16th century and remained immensely popular through to the time of Lamarck's work, at which point it began to draw more criticism due to lack of hard evidence. He also stated that pangenesis was the only scientific explanation ever offered for this concept, developing from Hippocrates' belief that "the semen was derived from the whole body." In the 13th century, pangenesis was commonly accepted on the principle that semen was a refined version of food unused by the body, which eventually translated to 15th and 16th century widespread use of pangenetic principles in medical literature, especially in gynecology. Later pre-Darwinian important applications of the idea included hypotheses about the origin of the differentiation of races. A theory put forth by Pierre Louis Maupertuis in 1745 called for particles from both parents governing the attributes of the child, although some historians have called his remarks on the subject cursory and vague. In 1749, the French naturalist Georges-Louis Leclerc, Comte de Buffon developed a hypothetical system of heredity much like Darwin's pangenesis, wherein 'organic molecules' were transferred to offspring during reproduction and stored in the body during development. Commenting on Buffon's views, Darwin stated, "If Buffon had assumed that his organic molecules had been formed by each separate unit throughout the body, his view and mine would have been very closely similar." In 1801, Erasmus Darwin advocated a hypothesis of pangenesis in the third edition of his book Zoonomia. In 1809, Jean-Baptiste Lamarck in his Philosophie Zoologique put forth evidence for the idea that characteristics acquired during the lifetime of an organism, either from effects of the environment or may be passed on to the offspring. 
Charles Darwin first had significant contact with Lamarckism during his time at the University of Edinburgh Medical School in the late 1820s, both through Robert Edmond Grant, whom he assisted in research, and through the journals of his grandfather Erasmus Darwin. Darwin's first known writings on the topic of Lamarckian ideas as they related to inheritance are found in a notebook he opened in 1837, also entitled Zoonomia. The historian Jonathan Hodge states that the theory of pangenesis itself first appeared in Darwin's notebooks in 1841. In 1861, the Irish physician Henry Freke developed a variant of pangenesis in his book Origin of Species by Means of Organic Affinity. Freke proposed that all life had developed from microscopic organic agents, which he named granules; these existed as 'distinct species of organizing matter' and would develop into different biological structures. Four years before the publication of Variation, in his 1864 book Principles of Biology, Herbert Spencer proposed a theory of "physiological units" similar to Darwin's gemmules, which likewise were said to be related to specific body parts and responsible for the transmission of characteristics of those body parts to offspring. He supported the Lamarckian idea of transmission of acquired characteristics. Darwin had long debated whether to publish a theory of heredity at all, owing to its highly speculative nature. He decided to include pangenesis in Variation after sending a 30-page manuscript to his close friend and supporter Thomas Huxley in May 1865, which met with significant criticism from Huxley and made Darwin even more hesitant. However, Huxley eventually advised Darwin to publish, writing: "Somebody rummaging among your papers half a century hence will find Pangenesis & say 'See this wonderful anticipation of our modern Theories—and that stupid ass, Huxley, prevented his publishing them'". Darwin's initial version of pangenesis appeared in the first edition of Variation in 1868, and was later reworked for the publication of a second edition in 1875. Theory Darwin Darwin's pangenesis theory attempted to explain the process of sexual reproduction, inheritance of traits, and complex developmental phenomena such as cellular regeneration in a unified mechanistic structure. Yongsheng Liu wrote that, in modern terms, pangenesis deals with issues of "dominance inheritance, graft hybridization, reversion, xenia, telegony, the inheritance of acquired characters, regeneration and many groups of facts pertaining to variation, inheritance and development." Mechanistically, Darwin proposed pangenesis to occur through the transfer of organic particles which he named 'gemmules.' Gemmules, which he also sometimes referred to as pangenes, granules, or germs, were supposed to be shed by the organs of the body and carried in the bloodstream to the reproductive organs, where they accumulated in the germ cells or gametes. Their accumulation was thought to occur by some sort of 'mutual affinity.' Each gemmule was said to be specifically related to a certain body part; as described, they did not contain information about the entire organism. The different types were assumed to be dispersed through the whole body, and capable of self-replication given 'proper nutriment'. When passed on to offspring via the reproductive process, gemmules were thought to be responsible for developing into each part of an organism and expressing characteristics inherited from both parents.
Darwin thought this occurred in a literal sense: he explained cell proliferation as progressing by gemmules binding to more developed cells of the same character and then maturing. In this sense, the uniqueness of each individual would be due to their unique mixture of their parents' gemmules, and therefore of characters. Similarity to one parent over the other could be explained by a quantitative superiority of that parent's gemmules. Yongsheng Liu points out that Darwin knew of cells' ability to multiply by self-division, so it is unclear how
Proboscidea (the name derives from the Greek and the Latin proboscis) are a taxonomic order of afrotherian mammals containing one living family (Elephantidae) and several extinct families. First described by J. Illiger in 1811, it encompasses the elephants and their close relatives. From the mid-Miocene onwards, most proboscideans were very large. The largest land mammal of all time may have been a proboscidean; Palaeoloxodon namadicus was up to at the shoulder and may have weighed up to , almost double the weight of some sauropods like Diplodocus carnegii. The largest extant proboscidean is the African bush elephant, with a record size of at the shoulder and . In addition to their enormous size, later proboscideans are distinguished by tusks and long, muscular trunks, which were less developed or absent in early proboscideans. Elephants are the largest existing land animals. Three species are currently recognised: the African bush elephant, the African forest elephant, and the Asian elephant. Elephantidae is the only surviving family of the order Proboscidea; extinct members include the mastodons. The family Elephantidae also contains several extinct groups, including the mammoths and straight-tusked elephants. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears, and convex or level backs. Distinctive features of all elephants include a long proboscis called a trunk, tusks, large ear flaps, massive legs, and tough but sensitive skin. The trunk is used for breathing, bringing food and water to the mouth, and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. The pillar-like legs carry their great weight. Evolution The earliest known proboscidean is Eritherium, followed by Phosphatherium, a small animal about the size of a fox. Both date from late Paleocene deposits of Morocco. Proboscideans evolved in Africa, where they increased in size and diversity during the Eocene and early Oligocene. Proboscideans evolved greatly over time through three major radiations: a radiation of primitive lophodont forms, a radiation of gomphotheres and stegodons, and a radiation of the Elephantidae. These radiations show that characteristic proboscidean features such as the trunk, tusks and large ear flaps evolved gradually, appearing late in their modern form. Several primitive families from these epochs have been described, including the Numidotheriidae, Moeritheriidae, and Barytheriidae, all found exclusively in Africa. The Anthracobunidae from the Indian subcontinent were also believed to be a family of proboscideans, but were excluded from the Proboscidea by Shoshani and Tassy (2005) and have more recently been assigned to the Perissodactyla. When Africa became connected to Europe and Asia after the shrinking of the Tethys Sea, proboscideans migrated into Eurasia, with some families eventually reaching the Americas. Proboscideans found in Eurasia as well as Africa include the Deinotheriidae, which thrived during the Miocene and into the early Quaternary; Stegolophodon, an early genus of the disputed family Stegodontidae; the highly diverse Gomphotheriidae and Amebelodontidae; and the Mammutidae, or mastodons.
likely preferred soft food over tough and hard food. Paranthropus species were generalist feeders, but P. robustus was likely an omnivore, whereas P. boisei was likely herbivorous and mainly ate bulbotubers. They were bipeds. Despite their robust heads, they had comparatively small bodies. Average weight and height are estimated to be at for P. robustus males, at for P. boisei males, at for P. robustus females, and at for P. boisei females. They were possibly polygamous and patrilocal, but there are no modern analogues for australopithecine societies. They are associated with bone tools and, more contentiously, with the earliest evidence of fire usage. They typically inhabited woodlands, and coexisted with some early hominin species, namely A. africanus, H. habilis and H. erectus. They were preyed upon by the large carnivores of the time, specifically crocodiles, leopards, saber-toothed cats and hyenas. Taxonomy Species P. robustus The genus Paranthropus was first erected by Scottish-South African palaeontologist Robert Broom in 1938, with the type species P. robustus. "Paranthropus" derives from Ancient Greek παρα (para), "beside" or "alongside", and άνθρωπος (ánthropos), "man". The type specimen, a male braincase, TM 1517, was discovered by schoolboy Gert Terblanche at the Kromdraai fossil site, about southwest of Pretoria, South Africa. By 1988, at least six individuals had been unearthed in around the same area, now known as the Cradle of Humankind. In 1948, at Swartkrans Cave, in about the same vicinity as Kromdraai, Broom and South African palaeontologist John Talbot Robinson described P. crassidens based on a subadult jaw, SK 6. He believed later Paranthropus were morphologically distinct from earlier Paranthropus in the cave—that is, the Swartkrans Paranthropus were reproductively isolated from Kromdraai Paranthropus and the former eventually speciated. By 1988, several specimens from Swartkrans had been placed into P. crassidens. However, this has since been synonymised with P. robustus as the two populations do not seem to be very distinct. P. boisei In 1959, P. boisei was discovered by Mary Leakey at Olduvai Gorge, Tanzania (specimen OH 5). Her husband Louis named it Zinjanthropus boisei because he believed it differed greatly from Paranthropus and Australopithecus. The name derives from "Zinj", an ancient Arabic word for the coast of East Africa, and "boisei", referring to their financial benefactor Charles Watson Boise. However, this genus was rejected at Louis Leakey's presentation before the 4th Pan-African Congress on Prehistory, as it was based on a single specimen. The discovery of the Peninj Mandible made the Leakeys reclassify their species as Australopithecus (Zinjanthropus) boisei in 1964, but in 1967, South African palaeoanthropologist Phillip V. Tobias subsumed it into Australopithecus as A. boisei. However, as more specimens were found, the combination Paranthropus boisei became more popular. It is debated whether the wide range of variation in jaw size simply indicates sexual dimorphism or is grounds for identifying a new species. Some of this variation could be explained as groundmass filling in cracks naturally formed after death, inflating the perceived size of the bone. P. boisei also has a notably wide range of variation in skull anatomy, but these features likely have no taxonomic bearing. P. aethiopicus In 1968, French palaeontologists Camille Arambourg and Yves Coppens described "Paraustralopithecus aethiopicus" based on a toothless mandible from the Shungura Formation, Ethiopia (Omo 18).
In 1976, American anthropologist Francis Clark Howell and Breton anthropologist Yves Coppens reclassified it as A. africanus. In 1986, after the discovery of the skull KNM WT 17000, English anthropologist Alan Walker and Richard Leakey classified it into Paranthropus as P. aethiopicus. There is debate whether this is synonymous with P. boisei, the main argument for separation being that the skull seems less adapted for chewing tough vegetation. In 1989, palaeoartist and zoologist Walter Ferguson reclassified KNM WT 17000 into a new species, walkeri, because he considered its designation to P. aethiopicus questionable, as the new specimen comprised a skull whereas the holotype of P. aethiopicus comprised only a mandible. Ferguson's classification is almost universally ignored, and walkeri is considered a synonym of P. aethiopicus. Others In 1963, while in the Congo, French ethnographer Charles Cordier assigned the name "P. congensis" to a super-strong, monstrous ape-man cryptid called "Kikomba", "Apamándi", "Abanaánji", "Zuluzúgu", or "Tshingómbe" by various native tribes, about which he had heard stories. In 2015, Ethiopian palaeoanthropologist Yohannes Haile-Selassie and colleagues described the 3.5–3.2 Ma A. deyiremeda based on three jawbones from the Afar Region, Ethiopia. They noted that, though it shares many similarities with Paranthropus, it may not have been closely related because it lacked the enlarged molars that characterize the genus. Nonetheless, in 2018, independent researcher Johan Nygren recommended moving it to Paranthropus based on dental and presumed dietary similarity. Validity In 1951, American anthropologists Sherwood Washburn and Bruce D. Patterson were the first to suggest that Paranthropus should be considered a junior synonym of Australopithecus, as the former was only known from fragmentary remains at the time and dental differences were too minute to serve as justification. In the face of calls for subsumption, Leakey and Robinson continued to defend its validity. Various other authors were still unsure until more complete remains were found. Paranthropus is sometimes classified as a subgenus of Australopithecus. There is currently no clear consensus on the validity of Paranthropus. The argument rests upon whether the genus is monophyletic—that is, composed of a common ancestor and all of its descendants—and the argument against monophyly (that the genus is paraphyletic) says that P. robustus and P. boisei evolved similar gorilla-like heads independently of each other by coincidence (convergent evolution), as chewing adaptations in hominins evolve very rapidly and multiple times at various points in the family tree (homoplasy). In 1999, a chimp-like ulna (forearm bone) was assigned to P. boisei, the first discovered ulna of the species; it was markedly different from P. robustus ulnae, which could suggest paraphyly. Evolution P. aethiopicus is the earliest member of the genus, with the oldest remains, from the Ethiopian Omo Kibish Formation, dated to 2.6 mya at the end of the Pliocene. It is sometimes regarded as the direct ancestor of P. boisei and P. robustus. It is possible that P. aethiopicus evolved even earlier, up to 3.3 mya, on the expansive Kenyan floodplains of the time. The oldest P. boisei remains date to about 2.3 mya from Malema, Malawi. P. boisei changed remarkably little over its nearly one-million-year existence. Paranthropus had spread into South Africa by 2 mya with the earliest P. robustus remains.
It is sometimes suggested that Paranthropus and Homo are sister taxa, both evolving from Australopithecus. This may have occurred during a drying trend 2.8–2.5 mya in the Great Rift Valley, which caused the retreat of woodland environments in favor of open savanna, with forests growing only along rivers and lakes. Homo evolved in the open savanna, and Paranthropus in the riparian woodland environment. However, the classification of Australopithecus species is problematic. Evolutionary tree according to a 2019 study: Description Skull Paranthropus had a massively built, tall and flat skull, with a prominent gorilla-like sagittal crest along the midline which anchored massive temporalis muscles used in chewing. Like other australopithecines, Paranthropus exhibited sexual dimorphism, with males notably larger than females. They had large molars with a relatively thick tooth enamel coating (post-canine megadontia), and comparatively small incisors (similar in size to modern humans), possibly adaptations to processing abrasive foods. The teeth of P. aethiopicus developed faster than those of P. boisei. Paranthropus had adaptations of the skull to resist large bite loads while feeding, namely the expansive squamosal sutures. The notably thick palate was once thought to have been an adaptation to resist a high bite force, but is better explained as a byproduct of facial lengthening and nasal anatomy. In P. boisei, the jaw hinge was adapted to grinding food side-to-side (rather than up-and-down as in modern humans), which is better at processing the starchy abrasive foods that likely made up the bulk of its diet. P. robustus may have chewed in a front-to-back direction instead, and had less exaggerated (less derived) anatomical features than P. boisei as it perhaps did not require them with this kind of chewing strategy. This may have also allowed P. robustus to better process tougher foods. The braincase volume averaged about , comparable to gracile australopithecines, but smaller than in Homo. Modern human brain volume averages for men and for women. Limbs and locomotion Unlike P. robustus, the forearms of P. boisei were heavily built, which might suggest habitual suspensory behaviour as in orangutans and gibbons. A P. boisei shoulder blade indicates long infraspinatus muscles, which is also associated with suspensory behaviour. A P. aethiopicus ulna, on the other hand, shows more similarities to Homo than P. boisei. Paranthropus were bipeds, and their hips, legs and feet resemble those of A. afarensis and modern humans. The pelvis is similar to A. afarensis, but the hip joints are smaller in P. robustus. The physical similarity implies a similar walking gait. Their modern-humanlike big toe indicates a modern-humanlike foot posture and range of motion, but the more distal ankle joint would have inhibited the modern human toe-off gait cycle. By 1.8 mya, Paranthropus and H. habilis may have achieved about the same grade of bipedality. Height and weight In comparison to the large, robust head, the body was rather small. Average weight for P. robustus may have been for males and for females; and for P. boisei for males and for females. At Swartkrans Cave Members 1 and 2, about 35% of the P. robustus individuals are estimated to have weighed , 22% about , and the remaining 43% bigger than the former but less than . At Member 3, all individuals were about . Female weight was about the same in contemporaneous H. erectus, but male H. erectus were on average heavier than P. robustus males. P.
robustus sites are oddly dominated by small adults, which could be explained as heightened predation or mortality of the larger males of a group. The largest-known Paranthropus individual was estimated at . According to a 1991 study, based on femur length and using the dimensions of modern humans, male and female P. robustus are estimated to have stood on average , respectively, and P. boisei . However, the latter estimates are problematic as there were no positively identified male P. boisei femurs at the time. In 2013, a 1.34 Ma male P. boisei partial skeleton was estimated to be at least and . Pathology Paranthropus seems to have had notably high rates of pitting enamel hypoplasia (PEH), where tooth enamel formation is spotty instead of mostly uniform. In P. robustus, about 47% of baby teeth and 14% of adult teeth
were affected, in comparison to about 6.7% and 4.3%, respectively, in any other tested hominin species. The condition of these holes covering the entire tooth is consistent with the modern human ailment amelogenesis imperfecta. However, since the circular holes in enamel coverage are uniform in size, present only on the molar teeth, and of the same severity across individuals, the PEH may have been a genetic condition. It is possible that the coding DNA concerned with thickening enamel also left them more vulnerable to PEH. There have been 10 identified cases of cavities in P. robustus, indicating a rate similar to modern humans. A molar from Drimolen, South Africa, showed a cavity on the tooth root, a rare occurrence in fossil great apes. In order for cavity-creating bacteria to reach this area, the individual would also have had to present either alveolar resorption, which is commonly associated with gum disease, or super-eruption of teeth, which occurs when teeth become worn down and have to erupt a little more in order to maintain a proper bite, exposing the root. The latter is most likely, and the exposed root seems to have caused hypercementosis to anchor the tooth in place. The cavity seems to have been healing, which may have been caused by a change in diet or mouth microbiome, or the loss of the adjacent molar. Palaeobiology Diet It was once thought P. boisei cracked open nuts with its powerful teeth, giving OH 5 the nickname "Nutcracker Man".
However, like gorillas, Paranthropus likely preferred soft foods, but would consume tough or hard food during leaner times, and the powerful jaws were used only in the latter situation. In P. boisei, thick enamel was more likely used to resist abrasive gritty particles rather than to minimize chipping while eating hard foods. In fact, there is a distinct lack of the tooth fractures which would have resulted from such activity. Paranthropus were generalist feeders, but diet seems to have ranged dramatically with location. The South African P. robustus appears to have been an omnivore, with a diet similar to contemporaneous Homo and nearly identical to the later H. ergaster, subsisting mainly on C4 savanna plants and C3 forest plants, which could indicate either seasonal shifts in diet or seasonal migration from forest to savanna. In leaner times it may have fallen back on brittle food. It likely also consumed seeds and possibly tubers or termites. A high cavity rate could indicate honey consumption. The East African P. boisei, on the other hand, seems to have been largely herbivorous and fed on C4 plants. Its powerful jaws allowed it to consume a wide variety of different plants, though it may have largely preferred nutrient-rich bulbotubers, as these are known to thrive in the well-watered woodlands it is thought to have inhabited. Feeding on these, P. boisei may have been able to meet its daily caloric requirements of approximately 9,700 kJ after about 6 hours of foraging. Juvenile P. robustus may have relied more on tubers than adults, given the elevated levels of strontium, compared with adults, in teeth from Swartkrans Cave, which in that area was most likely sourced from tubers. Dentin exposure on juvenile teeth could indicate early weaning, or a more abrasive diet than that of adults which wore away the cementum and enamel coatings, or both. It is also possible that juveniles were simply less capable of removing grit from dug-up food, rather than purposefully seeking out more abrasive foods. Technology Bone tools dating between 2.3 and 0.6 mya have been found in abundance in Swartkrans, Kromdraai and Drimolen caves, and are often associated with P. robustus. Though Homo is also known from these caves, their remains are scarce compared with those of Paranthropus, making attribution to Homo unlikely. The tools also co-occur with the Homo-associated Oldowan and possibly Acheulian stone tool industries. The bone tools were typically sourced from the shafts of long bones from medium- to large-sized mammals, but tools sourced from mandibles, ribs and horn cores have also been found. Bone tools have also been found at Olduvai Gorge and directly associated with P. boisei, the youngest dating to 1.34 mya, though a great proportion of other bone tools from here have ambiguous attribution. Stone tools from Kromdraai could possibly be attributed to P. robustus, as no Homo have been found there yet. The bone tools were not manufactured or purposefully shaped for a task. However, since the bones display no weathering (and were not scavenged randomly), and there is a preference displayed for certain bones, raw materials were likely specifically hand-picked. This could indicate a similar cognitive ability to contemporary Stone Age Homo. Bone tools may have been used to cut or process vegetation, or to dig up tubers or termites. The form of P. robustus incisors appears to be intermediate between that of H. erectus and modern humans, which could indicate less food processing by the teeth owing to preparation with simple tools.
Burnt bones were also associated with the inhabitants of Swartkrans, which could indicate some of the earliest fire usage. However, these bones were found in Member 3, where Paranthropus remains are rarer than H. erectus, and it is also possible the bones were burned in a wildfire and washed into the cave as it is known the bones were not burned onsite. Social structure Given the marked anatomical and physical differences with modern great apes, there may be no modern analogue for australopithecine societies, so comparisons drawn with modern primates will not be entirely accurate. Paranthropus had pronounced sexual dimorphism, with males notably larger than females, which is commonly correlated with a male-dominated polygamous society. P. robustus may have had a harem society similar to modern forest-dwelling silverback gorillas, where one male has exclusive breeding rights to a group of females, as male-female size disparity is comparable to gorillas (based on facial dimensions), and younger males were less robust than older males (delayed maturity is also exhibited in gorillas). However, if P. robustus preferred a savanna habitat, a multi-male society would have been more productive to better defend the troop from predators in the more exposed environment, much like savanna baboons. Further, among primates, delayed maturity is also exhibited in the rhesus monkey which has a multi-male society, and may not be an accurate indicator of social structure. A 2011 strontium isotope study of P. robustus teeth from the dolomite Sterkfontein Valley found that, like other hominins, but unlike other great apes, P. robustus females were more likely to leave their place of birth (patrilocal). This also discounts the plausibility of a harem society, which would have resulted in a matrilocal society due to heightened male–male competition. Males did not seem to have ventured very far from the valley, which could either indicate small home ranges, or that they preferred dolomitic landscapes due to perhaps cave abundance or factors related to vegetation growth. Life history Dental development seems to have followed about the same timeframe as it does in modern humans and most other hominins, but, since Paranthropus molars are markedly larger, rate of tooth eruption would have been accelerated. Their life history may have mirrored that of gorillas as they have the same brain volume, which (depending on the subspecies) reach physical maturity from 12–18 years and have birthing intervals of 40–70 months. Palaeoecology Habitat It is generally thought that Paranthropus preferred to inhabit wooded, riverine landscapes. The teeth of Paranthropus, H. habilis and H. erectus are all known from various overlapping beds in East Africa, such as at Olduvai Gorge and the Turkana Basin. P. robustus and H. erectus also appear to have coexisted. P. boisei, known from the Great Rift Valley, may have typically inhabited wetlands along lakes and rivers, wooded or arid shrublands, and semiarid woodlands, though their presence in the savanna-dominated Malawian Chiwondo Beds implies they could tolerate a range of habitats. During the Pleistocene, there seems to have been coastal and montane forests in Eastern Africa. More expansive river valleys—namely the Omo River Valley—may have served as important refuges for forest-dwelling creatures. Being cut off from the forests of Central Africa by a savanna corridor, these East African forests would have promoted high rates of endemism, especially during times of climatic volatility. 
The Cradle of Humankind, the only area P. robustus is known from, was mainly dominated by the springbok Antidorcas recki, but other antelope, giraffes and elephants were also seemingly abundant megafauna. Other known primates are early Homo, the hamadryas baboon, and the extinct colobine monkey Cercopithecoides williamsi. Predators The left foot of a P. boisei specimen (though perhaps actually belonging to H. habilis) from Olduvai Gorge seems to have been bitten off by a crocodile, possibly Crocodylus anthropophagus, and another's leg shows evidence of leopard predation. Other likely
of the teeth vary according to diet. The incisors and canines can be very small or completely absent, as in the two African species of rhinoceros. In the horses, usually only the males possess canines. The surface shape and height of the molars is heavily dependent on whether soft leaves or hard grass make up the main component of their diets. Three or four cheek teeth are present on each jaw half, so the dental formula of odd-toed ungulates is: Gut All perissodactyls are hindgut fermenters. In contrast to ruminants, hindgut fermenters store digested food that has left the stomach in an enlarged cecum, where the food is digested by bacteria. No gallbladder is present. The stomach of perissodactyls is simply built, while the cecum accommodates up to in horses. The intestine is very long, reaching up to in horses. Extraction of nutrients from food is relatively inefficient, which probably explains why no odd-toed ungulates are small; for large animals, nutritional requirements per unit of body weight are lower and the surface-area-to-volume ratio is smaller. Distribution The present distribution of most perissodactyl species is only a small fraction of their original range. Members of this group are now found only in Central and South America, eastern and southern Africa, and central, southern, and southeastern Asia. During the peak of odd-toed ungulate existence, from the Eocene to the Oligocene, perissodactyls were distributed over much of the globe, the only exceptions being Australia and Antarctica. Horses and tapirs arrived in South America after the formation of the Isthmus of Panama in the Pliocene, around 3 million years ago. In North America, they died out around 10,000 years ago, while in Europe, the tarpans disappeared in the 19th century. Hunting and habitat restriction have reduced the present-day species to fragmented relict populations. In contrast, domesticated horses and donkeys have gained a worldwide distribution, and feral animals of both species are now also found in regions outside of their original range, such as in Australia. Lifestyle and diet Perissodactyls inhabit a number of different habitats, leading to different lifestyles. Tapirs are solitary and inhabit mainly tropical rainforests. Rhinos tend to live alone in rather dry savannas, and in Asia, wet marsh or forest areas. Horses inhabit open areas such as grasslands, steppes, or semi-deserts, and live together in groups. Odd-toed ungulates are exclusively herbivores that feed, to varying degrees, on grasses, leaves, and other plant parts. A distinction is often made between primarily grass feeders (white rhinos, equines) and leaf feeders (tapirs, other rhinos). Reproduction and development Odd-toed ungulates are characterized by a long gestation period and a small litter size, usually delivering a single young. The gestation period is 330–500 days, being longest in the rhinos. Newborn perissodactyls are precocial, meaning offspring are born already quite independent, for example, young horses can begin to follow the mother after a few hours. The young are nursed for a relatively long time, often into their second year, reaching sexual maturity around eight or ten years old. Perissodactyls are long-lived, with several species, such as rhinos, reaching an age of almost 50 years in captivity. Taxonomy Outer taxonomy Traditionally, the odd-toed ungulates were classified with other mammals such as artiodactyls, hyraxes, elephants and other "ungulates". 
A close family relationship with hyraxes was suspected based on similarities in the construction of the ear and the course of the carotid artery. Recent molecular genetic studies, however, have shown the ungulates to be polyphyletic, meaning that in some cases the similarities are the result of convergent evolution rather than common ancestry. Elephants and hyraxes are now considered to belong to Afrotheria, so are not closely related to the perissodactyls. These in turn are in the Laurasiatheria, a superorder that had its origin in the former supercontinent Laurasia. Molecular genetic findings suggest that the cloven-hoofed Artiodactyla (which contain the cetaceans as a deeply nested subclade) are the sister taxon of the Perissodactyla; together, the two groups form the Euungulata. More distant are the bats (Chiroptera) and Ferae (a common taxon of carnivorans, Carnivora, and pangolins, Pholidota). In a now-discredited alternative scenario, perissodactyls, carnivorans, and bats were held to be closely related, together comprising the Pegasoferae. According to studies published in March 2015, odd-toed ungulates are in a close family relationship with at least some of the so-called Meridiungulata, a very diverse group of mammals living from the Paleocene to the Pleistocene in South America, whose systematic unity is largely unexplained. Some of these had been classified largely on the basis of their paleogeographic distribution. However, a close relationship to perissodactyls has been established by means of protein sequencing and comparison with fossil collagen from remains of phylogenetically young members of the Meridiungulata (specifically Macrauchenia from the Litopterna and Toxodon from the Notoungulata). Both kinship groups, the odd-toed ungulates and the Litopterna-Notoungulata, are now placed in the higher-level taxon Panperissodactyla. This kinship group is included among the Euungulata, which also contains the even-toed ungulates and whales (Artiodactyla). The separation of the Litopterna-Notoungulata group from the perissodactyls probably took place before the Cretaceous–Paleogene extinction event. "Condylarths" can probably be considered the starting point for the development of the two groups, as they represent a heterogeneous group of primitive ungulates that mainly inhabited the northern hemisphere in the Paleogene. Modern members Odd-toed ungulates (Perissodactyla) comprise three living families with around 17 species; among the horses, the exact species count is still debated. Rhinos and tapirs are more closely related to each other than to the horses. According to molecular genetic analysis, the separation of horses from the other perissodactyls took place in the Paleocene, some 56 million years ago, while the rhinos and tapirs split off in the lower-middle Eocene, about 47 million years ago.
Order Perissodactyla Suborder Hippomorpha Family Equidae: horses and allies, seven species in one genus Equus ferus Tarpan, †Equus ferus ferus Przewalski's horse, Equus ferus przewalskii Domestic horse, Equus ferus caballus African wild ass, Equus africanus Nubian wild ass, Equus africanus africanus Somali wild ass, Equus africanus somaliensis Domesticated ass (donkey), Equus africanus asinus Atlas wild ass, †Equus africanus atlanticus Onager or Asiatic wild ass, Equus hemionus Mongolian wild ass, Equus hemionus hemionus Turkmenian kulan, Equus hemionus kulan Persian onager, Equus hemionus onager Indian wild ass, Equus hemionus khur Syrian wild ass, †Equus hemionus hemippus Kiang or Tibetan wild ass, Equus kiang Western kiang, Equus kiang kiang Eastern kiang, Equus kiang holdereri Southern kiang, Equus kiang polyodon Plains zebra, Equus quagga Quagga, †Equus quagga quagga Burchell's zebra, Equus quagga burchellii Grant's zebra, Equus quagga boehmi Maneless zebra, Equus quagga borensis Chapman's zebra, Equus quagga chapmani Crawshay's zebra, Equus quagga crawshayi Selous' zebra, Equus quagga selousi Mountain zebra, Equus zebra Cape mountain zebra, Equus zebra zebra Hartmann's mountain zebra, Equus zebra hartmannae Grévy's zebra, Equus grevyi Suborder Ceratomorpha Family Tapiridae: tapirs, five species in one genus Brazilian tapir, Tapirus terrestris Mountain tapir, Tapirus pinchaque Baird's tapir, Tapirus bairdii Malayan tapir, Tapirus indicus Kabomani tapir, Tapirus kabomani Family Rhinocerotidae: rhinoceroses, five species in four genera Black rhinoceros, Diceros bicornis Southern black rhinoceros, †Diceros bicornis bicornis North-eastern black rhinoceros, †Diceros bicornis brucii Chobe black rhinoceros, Diceros bicornis chobiensis Uganda black rhinoceros, Diceros bicornis ladoensis Western black rhinoceros, †Diceros bicornis longipes Eastern black rhinoceros, Diceros bicornis michaeli South-central black rhinoceros, Diceros bicornis minor South-western black rhinoceros, Diceros bicornis occidentalis White rhinoceros, Ceratotherium simum Southern white rhinoceros, Ceratotherium simum simum Northern white rhinoceros, Ceratotherium simum cottoni Indian rhinoceros, Rhinoceros unicornis Javan rhinoceros, Rhinoceros sondaicus Indonesian Javan rhinoceros, Rhinoceros sondaicus sondaicus Vietnamese Javan rhinoceros, Rhinoceros sondaicus annamiticus Indian Javan rhinoceros, †Rhinoceros sondaicus inermis Sumatran rhinoceros, Dicerorhinus sumatrensis Western Sumatran rhinoceros, Dicerorhinus sumatrensis sumatrensis Eastern Sumatran rhinoceros, Dicerorhinus sumatrensis harrissoni Northern Sumatran rhinoceros, †Dicerorhinus sumatrensis lasiotis Prehistoric members There are many perissodactyl fossils of multivariant form. The major lines of development include the following groups: The Brontotherioidea were among the earliest known large mammals, consisting of the families of Brontotheriidae (synonym Titanotheriidae), the most well known representative being Megacerops and the more basal family Lambdotheriidae. They were generally characterized in their late phase by a bony horn at the transition from the nose to the frontal bone and flat molars suitable for chewing soft plant food. The Brontotheroidea, which were almost exclusively confined to North America and Asia, died out at the beginning of the Upper Eocene. The Equoidea (equines) also developed in the Eocene. The Palaeotheriidae are known mainly from Europe; their most famous member is Eohippus, which became extinct in the Oligocene. 
In contrast, the horse family (Equidae) flourished and spread. Over time this group saw a reduction in toe number, extension of the limbs, and the progressive adjustment of the teeth for eating hard grasses. The Chalicotherioidea represented another characteristic group, consisting of the families Chalicotheriidae and Lophiodontidae. The Chalicotheriidae developed claws instead of hooves and considerable extension of the forelegs. The best-known genera include Chalicotherium and Moropus. The Chalicotherioidea died out in the Pleistocene. The Rhinocerotoidea (rhino relatives) included a large variety of forms from the Eocene up to the Oligocene, including dog-size leaf feeders, semiaquatic animals, and also huge long-necked animals. Only a few had horns on the nose. The Amynodontidae were hippo-like, aquatic animals. The Hyracodontidae developed long limbs and long necks that were most pronounced in the Paraceratherium (formerly known as Baluchitherium or Indricotherium), the second largest known land mammal ever to have lived (after Palaeoloxodon namadicus). The rhinos (Rhinocerotidae) emerged in the Middle Eocene; five species survive to the present day. The Tapiroidea reached their greatest diversity in the Eocene, when several families lived in Eurasia and North America. They retained a primitive physique and are noted for the development of a trunk. The extinct families within this group include the Helaletidae. Several mammal groups traditionally classified as condylarths, long-understood to be a wastebasket taxon, such as hyopsodontids and phenacodontids, are now understood to be part of the odd-toed ungulate assemblage. Phenacodontids seem to be stem-perissodactyls, while hyopsodontids are closely related to horses and brontotheres, despite their more primitive overall appearance. Desmostylia and Anthracobunidae have traditionally been placed among the afrotheres, but they may actually represent stem-perissodactyls. They are an early lineage of mammals that took to the water, spreading across semi-aquatic to fully marine niches in the Tethys Ocean and the northern Pacific. However, later studies have shown that, while anthracobunids are definite perissodactyls, desmostylians have enough mixed characters to suggest that a position among the Afrotheria is not out of the question. Order Perissodactyla Suborder Hippomorpha †Hyopsodontidae †Pachynolophidae †Brontotheriidae Superfamily Equoidea †Indolophidae †Palaeotheriidae (might be a basal perissodactyl grade instead) †Suborder Ancylopoda †Isectolophidae (basal ancylopodans and ceratomorphs) †Lophiodontidae Superfamily Chalicotherioidea †Eomoropidae (basal grade of chalicotheroids) †Chalicotheriidae Suborder Ceratomorpha Superfamily Rhinocerotoidea †Amynodontidae †Hyracodontidae Superfamily Tapiroidea †Deperetellidae †Rhodopagidae (sometimes recognized as a subfamily of deperetellids) †Lophialetidae †Eoletidae (sometimes recognized as a subfamily of lophialetids) †Anthracobunidae (a family of stem-perissodactyls; from the Early to Middle Eocene epoch) †Phenacodontidae (a clade of stem-perissodactyls; from the Early Palaeocene to the Middle Eocene epoch) Higher classification of perissodactyls Relationships within the large group of odd-toed ungulates are not fully understood. Initially, after the establishment of "Perissodactyla" by Richard Owen in 1848, the present-day representatives were considered equal in rank. 
In the first half of the 20th century, a more systematic differentiation of odd-toed ungulates began, based on a consideration of fossil forms, and they were placed in two major suborders: Hippomorpha and Ceratomorpha. The Hippomorpha comprise today's horses and their extinct relatives (Equoidea); the Ceratomorpha consist of tapirs and rhinos plus their extinct relatives (Tapiroidea and Rhinocerotoidea). The names Hippomorpha and Ceratomorpha were introduced in 1937 by Horace Elmer Wood, in response to criticism of the names "Solidungula" and "Tridactyla" that he had proposed three years previously, which had been based on a grouping of the horses on the one hand and the rhinoceros/tapir complex on the other. The extinct Brontotheriidae were also classified under Hippomorpha, implying a close relationship to the horses. Some researchers accept this assignment because of similar dental features, but there is also the view that a very basal position within the odd-toed ungulates places them instead in the group Titanotheriomorpha. Originally, the Chalicotheriidae were also seen as members of the Hippomorpha, and were presented as such in 1941 by William Berryman Scott, who thought that, as claw-bearing perissodactyls, they belonged in the suborder Ancylopoda (the Ceratomorpha and Hippomorpha, as the remaining odd-toed ungulates, being combined in the group Chelopoda). The term Ancylopoda, coined by Edward Drinker Cope in 1889, had been established for the chalicotheres. However, further morphological studies from the 1960s showed the Ancylopoda to occupy an intermediate position between the Hippomorpha and Ceratomorpha. Leonard Burton Radinsky saw all three major groups of odd-toed ungulates as peers, based on the extremely long and independent phylogenetic development of the three lines. In the 1980s, Jeremy J. Hooker saw a general similarity between the Ancylopoda and Ceratomorpha based on dentition, especially in their earliest members, leading him in 1984 to unite the two suborders in the intermediate taxon Tapiromorpha.
At the same time he expanded the Ancylopoda to include the Lophiodontidae. The name "Tapiromorpha" goes back to Ernst Haeckel, who coined it in 1873, but it was long considered synonymous with Ceratomorpha; Wood had not taken it into account when naming the Ceratomorpha in 1937 because the term had been used quite differently in the past. Also in 1984, Robert M. Schoch used the conceptually similar term Moropomorpha, which is today treated as synonymous with Tapiromorpha. Included within the Tapiromorpha are the now extinct Isectolophidae, a sister group of the Ancylopoda-Ceratomorpha group and thus the most primitive members of this relationship complex. Evolutionary history Origins The evolutionary development of Perissodactyla is well documented in the fossil record. Numerous finds are evidence of the adaptive radiation of this group, which was once much more varied and widely dispersed. Radinskya from the late Paleocene of East Asia is often considered to be one of the oldest close relatives of the ungulates. Its 8 cm skull must have belonged to a very small and primitive animal with a π-shaped crown pattern on the enamel of its rear molars, similar to that of perissodactyls and their relatives, especially the rhinos. Finds of Cambaytherium and Kalitherium in the Cambay Shale of western India indicate an origin in Asia dating to the Lower Eocene, roughly 54.5 million years ago. Their teeth also show similarities to Radinskya as well as to the Tethytheria clade.
The saddle-shaped configuration of the navicular joints and the mesaxonic construction of the front and hind feet also indicate a close relationship to Tethytheria. However, this construction deviates from that of Cambaytherium, indicating that it is actually a member of a sister group. Ancestors of Perissodactyla may have arrived via an island bridge from the Afro-Arab landmass onto the Indian subcontinent as it drifted north towards Asia. A study on Cambaytherium suggests an origin in India prior to or near the time of its collision with Asia. The assignment of hyopsodontids and phenacodontids to Perissodactyla in general suggests an older Laurasian origin and distribution for the clade, dispersed across the northern continents already in the early Paleocene. These forms already show a fairly well-developed molar morphology, with no intermediate forms documenting the course of its development. The close relationship between meridiungulate mammals and perissodactyls in particular is of interest since meridiungulates appear in South America soon after the K–T event, implying rapid ecological radiation and dispersal after the mass extinction. Phylogeny The Perissodactyla appear relatively abruptly at the beginning of the Lower Paleocene, about 63 million years ago, both in North America and Asia, in the form of phenacodontids and hyopsodontids. The oldest finds belonging to an extant group include Sifrhippus, an ancestor of the horses, from the Willwood Formation in northwestern Wyoming. The distant ancestors of tapirs, such as Ganderalophus, appeared not too long after that in the Ghazij Formation in Balochistan, as well as Litolophus from the
tiles are considered distinguishable. However, there are 3,620 distinct sets of 4 tiles when the tiles of a pair are considered indistinguishable. There are 496 ways to select 2 of the 32 tiles when the 32 tiles are considered distinguishable. There are 136 distinct hands (pairs of tiles) when the tiles of a pair are considered indistinguishable. Basic scoring The name "pai gow" is loosely translated as "make nine" or "card nine". This reflects the fact that, with a few high-scoring exceptions, the maximum score for a hand is nine. If a hand consists of two tiles that do not form a pair, its value is determined by adding up the total number of pips on the tiles and dropping the tens digit (if any). Examples: 1–3 with 2-3: value 9 (nine pips altogether) 2–3 with 5-6: value 6 (16 pips; drop the 10) 5–5 with 4-6: value 0 (20 pips; ones digit is zero) Gongs and Wongs There are special ways in which a hand can score more than nine points. The double-one tiles and double-six tiles are known as the Day and Teen tiles, respectively. The combination of a Day or Teen with an eight results in a Gong, worth 10 points, while putting either of them with a nine creates a Wong, worth 11. However, when a Day or Teen is paired with any other tile, the standard scoring rules apply. Gee Joon tiles The 1-2 and the 2-4 tiles are called Gee Joon'' tiles and act as limited wild cards. When used as part of a hand, these tiles may be scored as either 3 or 6, whichever results in a higher hand value. For example, a hand of 1-2 and 5-6 scores as seven rather than four. Pairs The 32 tiles in a Chinese dominoes set can be arranged into 16 pairs, as shown in the picture at the top of this article. Eleven of these pairs have identical tiles, and five of these pairs are made up of two tiles that score the same, but look different. (The latter group includes the Gee Joon tiles, which can score the same, whether as three or six.) Any hand consisting of a pair outscores a non-pair, regardless of the pip counts. (Pairs are often thought of as being worth 12 points each.) When the player and dealer both have a pair, the higher-ranked pair wins. Ranking is determined not by the sum of the tiles' pips, but rather by aesthetics; the order must be memorized. The highest pairs are the Gee Joon tiles, the Teens, the Days, and the red eights. The lowest pairs are the mismatched nines, eights, sevens, and fives. Ties When the player and dealer display hands with the same score, the one with the highest-valued tile (based on the pair rankings described above) is the winner. For example, a player's hand of 3-4 and 2-2 and a dealer's hand of 5-6 and 5-5 would each score one point. However, since the dealer's 5-5 outranks the other three tiles, he would win the hand. If the scores are tied, and if the player and dealer each have an identical highest-ranking tile, then the dealer wins. For example, if the player held 2-2 and 1–6, and the dealer held 2-2 and 3–4, the dealer would win since the scores (1 each) and the higher tiles (2-2) are the same. The lower-ranked tile in each hand is never used to break a tie. There are two exceptions to the method described above. First, although the Gee Joon tiles form the highest-ranking pair when used together, they are considered to have no value individually when evaluating ties. Second, any zero-zero tie is won by the dealer, regardless of the tiles in the two hands. Strategy The key element of pai gow strategy is to present the optimal front and rear hands based on the tiles dealt to the player. 
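The scoring rules above (drop the tens digit, the Gong and Wong exceptions, and the semi-wild Gee Joon tiles) can be summarized in a short sketch. The tile representation and function name below are assumptions made for illustration, and pair rankings and tie-breaking are deliberately left out.

```python
# A tile is a (low, high) pip tuple, e.g. (1, 2) for the 1-2 tile.
GEE_JOON = {(1, 2), (2, 4)}    # limited wild cards: may count as 3 or 6
DAY_TEEN = {(1, 1), (6, 6)}    # Day (double one) and Teen (double six)

def tile_values(tile):
    """Possible pip counts a single tile can contribute."""
    if tile in GEE_JOON:
        return (3, 6)          # Gee Joon tiles score as 3 or 6, whichever is better
    return (sum(tile),)

def hand_value(t1, t2):
    """Point value of a two-tile hand under the rules described above."""
    # Gong (10) and Wong (11): a Day or Teen combined with an eight or a nine.
    for a, b in ((t1, t2), (t2, t1)):
        if a in DAY_TEEN:
            if sum(b) == 8:
                return 10      # Gong
            if sum(b) == 9:
                return 11      # Wong
    # Otherwise: best achievable ones digit of the total pip count.
    return max((x + y) % 10 for x in tile_values(t1) for y in tile_values(t2))

# Examples from the text:
assert hand_value((1, 3), (2, 3)) == 9   # nine pips altogether
assert hand_value((2, 3), (5, 6)) == 6   # 16 pips; drop the 10
assert hand_value((5, 5), (4, 6)) == 0   # 20 pips; ones digit is zero
assert hand_value((1, 2), (5, 6)) == 7   # Gee Joon counted as 6, not 3
assert hand_value((6, 6), (3, 5)) == 10  # Teen with an eight: Gong
```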
There
, and assume that the other relations are defined appropriately. Defining via a non-strict partial order is most common. Some authors use different symbols than such as or to distinguish partial orders from total orders. When referring to partial orders, should not be taken as the complement of . The relation is the converse of the irreflexive kernel of , which is always a subset of the complement of , but is equal to the complement of if, and only if, is a total order. Examples Standard examples of posets arising in mathematics include: The real numbers, or in general any totally ordered set, ordered by the standard less-than-or-equal relation ≤, is a non-strict partial order. On the real numbers the usual less than relation < is a strict partial order and the same is also true of the usual greater than relation > on By definition, every strict weak order is a strict partial order. The set of subsets of a given set (its power set) ordered by inclusion (see Fig.1). Similarly, the set of sequences ordered by subsequence, and the set of strings ordered by substring. The set of natural numbers equipped with the relation of divisibility. (see Fig.3 and Fig.6) The vertex set of a directed acyclic graph ordered by reachability. The set of subspaces of a vector space ordered by inclusion. For a partially ordered set P, the sequence space containing all sequences of elements from P, where sequence a precedes sequence b if every item in a precedes the corresponding item in b. Formally, if and only if for all ; that is, a componentwise order. For a set X and a partially ordered set P, the function space containing all functions from X to P, where f ≤ g if and only if f(x) ≤ g(x) for all A fence, a partially ordered set defined by an alternating sequence of order relations a < b > c < d ... The set of events in special relativity and, in most cases, general relativity, where for two events X and Y, X ≤ Y if and only if Y is in the future light cone of X. An event Y can only be causally affected by X if X ≤ Y. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable, with neither being a descendant of the other. Orders on the Cartesian product of partially ordered sets In order of increasing strength, i.e., decreasing sets of pairs, three of the possible partial orders on the Cartesian product of two partially ordered sets are (see Fig.4): the lexicographical order: (a, b) ≤ (c, d) if a < c or (a = c and b ≤ d); the product order: (a, b) ≤ (c, d) if a ≤ c and b ≤ d; the reflexive closure of the direct product of the corresponding strict orders: (a, b) ≤ (c, d) if (a < c and b < d) or (a = c and b = d). All three can similarly be defined for the Cartesian product of more than two sets. Applied to ordered vector spaces over the same field, the result is in each case also an ordered vector space. See also orders on the Cartesian product of totally ordered sets. Sums of partially ordered sets Another way to combine two (disjoint) posets is the ordinal sum (or linear sum), Z = X ⊕ Y, defined on the union of the underlying sets X and Y by the order a ≤Z b if and only if: a, b ∈ X with a ≤X b, or a, b ∈ Y with a ≤Y b, or a ∈ X and b ∈ Y. If two posets are well-ordered, then so is their ordinal sum. 
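The three orders on a Cartesian product described above can be stated compactly in code. The sketch below assumes the component posets are simply integers under the usual comparison, and the function names are illustrative.

```python
# Three ways to order pairs (a, b), (c, d) built from two posets.
# For simplicity the component posets here are integers under the usual <=;
# any reflexive, antisymmetric, transitive comparison would do.

def leq_lex(p, q):
    """Lexicographical order: (a, b) <= (c, d) iff a < c, or a = c and b <= d."""
    (a, b), (c, d) = p, q
    return a < c or (a == c and b <= d)

def leq_product(p, q):
    """Product order: (a, b) <= (c, d) iff a <= c and b <= d."""
    (a, b), (c, d) = p, q
    return a <= c and b <= d

def leq_strict_product(p, q):
    """Reflexive closure of the direct product of the strict orders:
    (a, b) <= (c, d) iff (a < c and b < d) or (a, b) = (c, d)."""
    (a, b), (c, d) = p, q
    return (a < c and b < d) or (a == c and b == d)

# The three orders are successively stronger (fewer comparable pairs):
p, q = (1, 5), (2, 3)
print(leq_lex(p, q))             # True: 1 < 2 already decides it
print(leq_product(p, q))         # False: 5 <= 3 fails, so p and q are incomparable
print(leq_strict_product(p, q))  # False: strict inequality is needed in both slots
```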
Series-parallel partial orders are formed from the ordinal sum operation (in this context called series composition) and another operation called parallel composition. Parallel composition is the disjoint union of two partially ordered sets, with no order relation between elements of one set and elements of the other set. Derived notions The examples use the poset consisting of the set of all subsets of a three-element set ordered by set inclusion (see Fig.1). a is related to b when a ≤ b. This does not imply that b is also related to a, because the relation need not be symmetric. For example, is related to but not the reverse. a and b are comparable if a ≤ b or b ≤ a. Otherwise they are incomparable. For example, and are comparable, while and are not. A total order or linear order is a partial order under which every pair of elements is comparable, i.e. trichotomy holds. For example, the natural numbers with their standard order. A chain is a subset of a poset that is a totally ordered set. For example, is a chain. An antichain is a subset of a poset in which no two distinct elements are comparable. For example, the set of singletons An element a is said to be strictly less than an element b, if a ≤ b and For example, is strictly less than An element a is said to be covered by another element b, written a ⋖ b (or a <: b), if a is strictly less than b and no third element c fits between them; formally: if both a ≤ b and are true, and a ≤ c ≤ b is false for each c with Using the strict order <, the relation a ⋖ b can be equivalently rephrased as "a < b but not a < c < b for any c". For example, is covered by but is not covered by Extrema There are several notions of "greatest" and "least" element in a poset notably: Greatest element and least element: An element is a if for every element An element is a if for every element A poset can only have one greatest or least element. In our running example, the set is the greatest element, and is the least. Maximal elements and minimal elements: An element is a maximal element if there is no element such that Similarly, an element is a minimal element if there is no element such that If a poset has a greatest element, it must be the unique maximal element, but otherwise there can be more than one maximal element, and similarly for least elements and minimal elements. In our running example, and are the maximal and minimal elements. Removing these, there are 3 maximal elements and 3 minimal elements (see Fig.5). Upper and lower bounds: For a subset A of P, an element x in P is an upper bound of A if a ≤ x, for each element a in A. In particular, x need not be in A to be an upper bound of A. Similarly, an element x in P is a lower bound of A if a ≥ x, for each element a in A. A greatest element of P is an upper bound of P itself, and a least element is a lower bound of P. In our example, the set is an for the collection of elements As another example, consider the positive integers, ordered by divisibility: 1 is a least element, as it divides all other elements; on the other hand this poset does not have a greatest element (although if one would include 0 in the poset, which is a multiple of any integer, that would be a greatest element; see Fig.6). This partially ordered set does not even have any maximal elements, since any g divides for instance 2g, which is distinct from it, so g is not maximal. 
If the number 1 is excluded, while keeping divisibility as ordering on the elements greater than 1, then the resulting poset does not have a least element, but any prime number is a minimal element for it. In this poset, 60 is an upper bound (though not a least upper bound) of the subset which does not have any lower bound (since 1 is not in the poset); on the other hand 2 is a lower bound of the subset of powers of 2, which does not have any upper bound. Mappings between partially ordered sets Given two partially ordered sets (S, ≤) and (T, ≼), a function is called order-preserving, or monotone, or isotone, if for all implies f(x) ≼ f(y). If (U, ≲) is also a partially ordered set, and both and are order-preserving, their composition is order-preserving, too. A function is called order-reflecting if for all f(x) ≼ f(y) implies If is both order-preserving and order-reflecting, then it is called an order-embedding of (S, ≤) into (T, ≼). In the latter case, is necessarily injective, since implies and in turn according to the antisymmetry of If an order-embedding between two posets S and T exists, one says that S can
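Both the divisibility example and the notion of an order-preserving map can be checked mechanically. The sketch below uses hypothetical helper names and a small finite fragment of the divisibility order purely as an illustration of the definitions above.

```python
# Divisibility as a partial order on a finite set of positive integers.
def divides(a, b):
    return b % a == 0          # "a <= b" in this poset means "a divides b"

def minimal_elements(elems):
    """Elements with no distinct element strictly below them."""
    return [x for x in elems if not any(divides(y, x) and y != x for y in elems)]

def maximal_elements(elems):
    """Elements with no distinct element strictly above them."""
    return [x for x in elems if not any(divides(x, y) and y != x for y in elems)]

def upper_bounds(subset, elems):
    """Members of elems that lie above every member of subset."""
    return [x for x in elems if all(divides(a, x) for a in subset)]

elems = [2, 3, 4, 5, 6, 10, 12, 60]      # 1 deliberately excluded, as in the text
print(minimal_elements(elems))           # [2, 3, 5]: the primes present are minimal
print(maximal_elements(elems))           # [60]: nothing in this set lies above 60
print(upper_bounds([2, 3, 5], elems))    # [60]: an upper bound of {2, 3, 5}

# An order-preserving (monotone) map from (N, divides) to (N, <=):
# counting prime factors with multiplicity, since a | b implies Omega(a) <= Omega(b).
def omega(n):
    count, d = 0, 2
    while n > 1:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count

assert all(omega(a) <= omega(b)
           for a in elems for b in elems if divides(a, b))
```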
Psyche (psychology), the totality of the human mind, conscious and unconscious Psyche, an 1846 book about the unconscious by Carl Gustav Carus Psyche, an 1890-94 book about the ancient Greek concept of soul by Erwin Rohde Psyche (consciousness
journal), a periodical on the study of consciousness Psyche, a digital magazine on psychology published by Aeon Religion and mythology Psyche (mythology), a mortal woman in Greek mythology who became the wife of Eros and the goddess of the soul Soul in the Bible, spirit or soul in Judaic and Christian philosophy and theology Arts and media Based on Cupid and Psyche The story of Cupid and Psyche, mainly known from the Latin novel by Apuleius, and depicted in many forms: Cupid and Psyche (Capitoline Museums), a Roman statue Marlborough gem, a 1st-century carved cameo Landscape with Psyche Outside the Palace of Cupid, a painting by Claude Lorrain, National Gallery London Psyche Revived by Cupid's Kiss a sculpture of 1793 by Antonio Canova Cupid and Psyche (Thorvaldsen), a sculpture of 1808, Copenhagen Love and Psyche (David), a painting of 1817, now in Cleveland Psyché (play), a 1671 tragedy-ballet by Molière Psyche
narrator travels "beyond the beaten paths of mortal men" to receive a revelation from an unnamed goddess (generally thought to be Persephone or Dikē) on the nature of reality. Aletheia, an estimated 90% of which has survived, and doxa, most of which no longer exists, are then presented as the spoken revelation of the goddess without any accompanying narrative. Parmenides attempted to distinguish between the unity of nature and its variety, insisting in the Way of Truth upon the reality of its unity, which is therefore the object of knowledge, and upon the unreality of its variety, which is therefore the object, not of knowledge, but of opinion. In the Way of Opinion he propounded a theory of the world of seeming and its development, pointing out, however, that, in accordance with the principles already laid down, these cosmological speculations do not pretend to anything more than mere appearance. Proem In the proem, Parmenides describes the journey of the poet, escorted by maidens ("the daughters of the Sun made haste to escort me, having left the halls of Night for the light"), from the ordinary daytime world to a strange destination, outside our human paths. Carried in a whirling chariot, and attended by the daughters of Helios the Sun, the man reaches a temple sacred to an unnamed goddess (variously identified by the commentators as Nature, Wisdom, Necessity or Themis), by whom the rest of the poem is spoken. The goddess resides in a well-known mythological space: where Night and Day have their meeting place. Its essential character is that here all opposites are undivided, or one. He must learn all things, she tells him – both truth, which is certain, and human opinions, which are uncertain – for though one cannot rely on human opinions, they represent an aspect of the whole truth.Welcome, youth, who come attended by immortal charioteers and mares which bear you on your journey to our dwelling. For it is no evil fate that has set you to travel on this road, far from the beaten paths of men, but right and justice. It is meet that you learn all things — both the unshakable heart of well-rounded truth and the opinions of mortals in which there is not true belief. (B 1.24–30) The Way of Truth The section known as "the way of truth" discusses that which is real and contrasts with the argument in the section called "the way of opinion," which discusses that which is illusory. Under the "way of truth," Parmenides stated that there are two ways of inquiry: that it is, on the one side, and that it is not on the other side. He said that the latter argument is never feasible because there is no thing that can not be: "For never shall this prevail, that things that are not, are." Thinking and the thought that it is are the same; for you will not find thinking apart from what is, in relation to which it is uttered. (B 8.34–36)For to be aware and to be are the same. (B 3)It is necessary to speak and to think what is; for being is, but nothing is not. (B 6.1–2)Helplessness guides the wandering thought in their breasts; they are carried along deaf and blind alike, dazed, beasts without judgment, convinced that to be and not to be are the same and not the same, and that the road of all things is a backward-turning one. (B 6.5–9)Only one thing exists, which is timeless, uniform, and unchanging:How could what is perish? How could it have come to be? For if it came into being, it is not; nor is it if ever it is going to be. Thus coming into being is extinguished, and destruction unknown. 
(B 8.20–22)Nor was [it] once, nor will [it] be, since [it] is, now, all together, / One, continuous; for what coming-to-be of it will you seek? / In what way, whence, did [it] grow? Neither from what-is-not shall I allow / You to say or think; for it is not to be said or thought / That [it] is not. And what need could have impelled it to grow / Later or sooner, if it began from nothing? Thus [it] must either be completely or not at all. (B 8.5–11)[What exists] is now, all at once, one and continuous... Nor is it divisible, since it is all alike; nor is there any more or less of it in one place which might prevent it from holding together, but all is full of what is. (B 8.5–6, 8.22–24)And it is all one to me / Where I am to begin; for I shall return there again. (B 5) Perception vs. Logos Parmenides claimed that there is no truth in the opinions of the mortals. Genesis-and-destruction, as Parmenides emphasizes, is a false opinion, because to be means to be completely, once and for all. What exists can in no way not exist. For this view, that That Which Is Not exists, can never predominate. You must debar your thought from this way of search, nor let ordinary experience in its variety force you along this way, (namely, that of allowing) the eye, sightless as it is, and the ear, full of sound, and the tongue, to rule; but (you must) judge by means of the Reason (Logos) the much-contested proof which is expounded by me. (B 7.1–8.2) The Way of Opinion After the exposition of the arche (ἀρχή), i.e. the origin, the necessary part of reality that is understood through reason or logos (that [it] Is), in the next section, the Way of Appearance/Opinion/Seeming, Parmenides gives a cosmology. He proceeds to explain the structure of the becoming cosmos (which is an illusion, of course) that comes from this origin. The structure of the cosmos is a fundamental binary principle that governs the manifestations of all the particulars: "the aether fire of flame" (B 8.56), which is gentle, mild, soft, thin and clear, and self-identical, and the other is "ignorant night", body thick and heavy.The mortals lay down and decided well to name two forms (i.e. the flaming light and obscure darkness of night), out of which it is necessary not to make one, and in this they are led astray. (B 8.53–4)The structure of the cosmos then generated is recollected by Aetius (II, 7, 1): For Parmenides says that there are circular bands wound round one upon the other, one made of the rare, the other of the dense; and others between these mixed of light and darkness. What surrounds them all is solid like a wall. Beneath it is a fiery band, and what is in the very middle of them all is solid, around which again is a fiery band. The most central of the mixed bands is for them all the origin and cause of motion and becoming, which he also calls steering goddess and keyholder and Justice and Necessity. The air has been separated off from the earth, vapourized by its more violent condensation, and the sun and the circle of the Milky Way are exhalations of fire. The moon is a mixture of both earth and fire. The aether lies around above all else, and beneath it is ranged that fiery part which we call heaven, beneath which are the regions around the earth.Cosmology originally comprised the greater part of his poem, him explaining the world's origins and operations. Some idea of the sphericity of the Earth seems to have been known to Parmenides. 
Parmenides also outlined the phases of the moon, highlighted in a rhymed translation by Karl Popper: Smith stated:Of the cosmogony of Parmenides, which was carried out very much in detail, we possess only a few fragments and notices, which are difficult to understand, according to which, with an approach to the doctrines of the Pythagoreans, he conceived the spherical mundane system, surrounded by a circle of the pure light (Olympus, Uranus); in the centre of this mundane system the solid earth, and between the two the circle of the milkyway, of the morning or evening star, of the sun, the planets, and the moon; which circle he regarded as a mixture of the two primordial elements. The fragments read: Interpretations The traditional interpretation of Parmenides' work is that he argued that the every-day perception of reality of the physical world (as described in doxa) is mistaken, and that the reality of the world is 'One Being' (as described in aletheia): an unchanging, ungenerated, indestructible whole. Under the Way of Opinion, Parmenides set out a contrasting but more conventional view of the world, thereby becoming an early exponent of the duality of appearance and reality. For him and his pupils, the phenomena of movement and change are simply appearances of a changeless, eternal reality. Parmenides was not struggling to formulate the laws of conservation of mass and conservation of energy; he was struggling with the metaphysics of change, which is still a relevant philosophical topic today. Moreover, he argued that movement was impossible because it requires moving into "the void", and Parmenides identified "the void" with nothing, and therefore (by definition) it does not exist. That which does exist is The Parmenidean One. Since existence is an immediately intuited fact, non-existence is the wrong path because a thing cannot disappear, just as something cannot originate from nothing. In such mystical experience (unio mystica), however, the distinction between subject and object disappears along with the distinctions between objects, in addition to the fact that if nothing cannot be, it cannot be the object of thought either. William Smith also wrote in Dictionary of Greek and Roman Biography and Mythology:On the former reason is our guide; on the latter the eye that does not catch the object and re-echoing hearing. On the former path we convince ourselves that the existent neither has come into being, nor is perishable, and is entirely of one sort, without change and limit, neither past nor future, entirely included in the present. For it is as impossible that it can become and grow out of the existent, as that it could do so out of the non-existent; since the latter, non-existence, is absolutely inconceivable, and the former cannot precede itself; and every coming into existence presupposes a non-existence. By similar arguments divisibility, motion or change, as also infinity, are shut out from the absolutely existent, and the latter is represented as shut up in itself, so that it may be compared to a well-rounded ball; while thought is appropriated to it as its only positive definition.
full marine salinities, but actual wild breeding has never been observed. Xenopterus naritus was reportedly first bred artificially in Sarawak, northwestern Borneo, in June 2016, mainly for the development of aquaculture of the species. In 2012, males of the species Torquigener albomaculosus were documented carving large geometric, circular structures in the seabed sand in Amami Ōshima, Japan. The structures serve to attract females, and provide a safe place for them to lay their eggs. Diet Pufferfish diets can vary depending on their environment. Traditionally, their diet consists mostly of algae and small invertebrates. They can survive on a completely vegetarian diet if their environment is lacking resources, but prefer an omnivorous food selection. Larger species of pufferfish are able to use their beak-like front teeth to break open clams, mussels, and other shellfish. Some species of pufferfish have also been known to employ various hunting techniques ranging from ambush to open-water hunting. Evolution The tetraodontids have been estimated to have diverged from diodontids between 89 and 138 million years ago. The four major clades diverged during the Cretaceous between 80 and 101 million years ago. The oldest known pufferfish genus is Eotetraodon, from the Lutetian epoch of Middle Eocene Europe, with fossils found in Monte Bolca and the Caucasus Mountains. The Monte Bolca species, E. pygmaeus, coexisted with several other tetraodontiforms, including an extinct species of diodontid, primitive boxfish (Proaracana and Eolactoria), and other, totally extinct forms, such as Zignoichthys and the spinacanthids. The extinct genus Archaeotetraodon is known from Miocene-aged fossils from Europe. Poisoning Pufferfish can be lethal if not served properly. Puffer poisoning usually results from consumption of incorrectly prepared puffer soup, fugu chiri, or occasionally from raw puffer meat, sashimi fugu. While chiri is much more likely to cause death, sashimi fugu often causes intoxication, light-headedness, and numbness of the lips. Pufferfish tetrodotoxin deadens the tongue and lips, and induces dizziness and vomiting, followed by numbness and prickling over the body, rapid heart rate, decreased blood pressure, and muscle paralysis. The toxin paralyzes the diaphragm muscle and stops the person who has ingested it from breathing. People who live longer than 24 hours typically survive, although possibly after a coma lasting several days. The source of tetrodotoxin in puffers has been a matter of debate, but it is increasingly accepted that bacteria in the fish's intestinal tract are the source. Saxitoxin, the cause of paralytic shellfish poisoning and red tide, can also be found in certain puffers. Philippines In September 2012, the Bureau of Fisheries and Aquatic Resources in the Philippines issued a warning not to eat puffer fish, after local fishermen died upon consuming puffer fish for dinner. The warning indicated that puffer fish toxin is 100 times more potent than cyanide. Thailand Pufferfish, called pakapao in Thailand, are usually consumed by mistake. They are often cheaper than other fish, and because they contain inconsistent levels of toxins between fish and season, there is little awareness or monitoring of the danger. Consumers are regularly hospitalized and some even die from the poisoning.
United States Cases of neurological symptoms, including numbness and tingling of the lips and mouth, have been reported to arise after the consumption of puffers caught in the area of Titusville, Florida, USA. The symptoms generally resolve within hours to days, although one affected individual required intubation for 72 hours. As a result, Florida banned the harvesting of puffers from certain bodies of water. Treatment Treatment is
Distribution They are most diverse in the tropics, relatively uncommon in the temperate zone, and completely absent from cold waters. Ecology and life history Most pufferfish species live in marine or brackish waters, but some can enter fresh water. About 35 species spend their entire lifecycles in fresh water. These freshwater species are found in disjunct tropical regions of South America (Colomesus asellus), Africa (six Tetraodon species), and Southeast Asia (Auriglobus, Carinotetraodon, Dichotomyctere, Leiodon and Pao). Natural defenses The puffer's unique and distinctive natural defenses help compensate for its slow locomotion. It moves by combining pectoral, dorsal, anal, and caudal fin motions. This makes it highly maneuverable, but very slow, so a comparatively easy predation target. Its tail fin is mainly used as a rudder, but it can be used for a sudden evasive burst of speed that shows none of the care and precision of its usual movements. The puffer's excellent eyesight, combined with this speed burst, is the first and most important defense against predators. The pufferfish's secondary defense mechanism, used if successfully pursued, is to fill its extremely elastic stomach with water (or air when outside the water) until it is much larger and almost spherical in shape. Even if they are not visible when the puffer is not inflated, all puffers have pointed spines, so a hungry predator may suddenly find itself facing an unpalatable, pointy ball rather than a slow, easy meal. Predators that do not heed this warning (or are "lucky" enough to catch the puffer suddenly, before or during inflation) may die from choking, and predators that do manage to swallow the puffer may find their stomachs full of tetrodotoxin (TTX), making puffers an unpleasant, possibly lethal, choice of prey. This neurotoxin is found primarily in the ovaries and liver, although smaller amounts exist in the intestines and skin, as well as trace amounts in muscle. It does not always have a lethal effect on large predators, such as sharks, but it can kill humans. Larval pufferfish are chemically defended by the presence of TTX on the surface of skin, which causes predators to spit them out. Not all puffers are necessarily poisonous; the flesh of the northern puffer is not toxic (a level of poison can be found in its viscera) and it is considered a delicacy in North America. Takifugu oblongus, for example, is a fugu puffer that is not poisonous, and toxin level varies widely even in fish that are. A puffer's neurotoxin is not necessarily as toxic to other animals as it is to humans, and puffers are eaten routinely by some species of fish, such as lizardfish and sharks. Puffers are able to move their eyes independently, and many species can change the color or intensity of their patterns in response to environmental changes. In these respects, they are somewhat similar to the terrestrial chameleon. Although most puffers are drab, many have bright colors and distinctive markings, and make no attempt to hide from predators. This is likely an example of honestly signaled aposematism. Dolphins have been filmed expertly handling pufferfish amongst themselves in an apparent attempt to get intoxicated or enter a trance-like state. Reproduction Many marine puffers have a pelagic, or open-ocean, life stage. Spawning occurs after males slowly push females to the water surface or join females already present. The eggs are spherical and buoyant. Hatching occurs after roughly four days. 
The fry are tiny, but under magnification have a shape usually reminiscent of a pufferfish. They have a functional mouth and eyes, and must eat within a few days. Brackish-water puffers may breed in bays in a manner similar to marine species, or may breed more similarly to the freshwater species, in cases where they have moved far enough upriver. Reproduction in freshwater species varies quite a bit. The dwarf puffers court with males following females, possibly displaying the crests and keels unique to this subgroup of species. After the female accepts his advances, she will lead the male into plants or another form of cover, where she can release eggs for fertilization. The male may help her by rubbing against her side. This has been observed in captivity,
set. A partial function is often used when its exact domain of definition is not known or difficult to specify. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator. For this reason, in calculus, and more generally in mathematical analysis, a partial function is generally called simply a . In computability theory, a general recursive function is a partial function from the integers to the integers; for many of them no algorithm can exist for deciding whether they are in fact total. When arrow notation is used for functions, a partial function from to is sometimes written as , or However, there is no general convention, and the latter notation is more commonly used for injective functions. Specifically, for a partial function and any one has either: (it is a single element in ), or is undefined. For example, if is the square root function restricted to the integers defined by: if, and only if, then is only defined if is a perfect square (that is, ). So but is undefined. Basic concepts A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, bijective respectively. Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to an injective partial function. The notion of transformation can be generalized to partial functions as well. A partial transformation is a function where both and are subsets of some set Function A function is a binary relation that is functional (also called right-unique) and serial (also called left-total). This is a stronger definition than that of a partial function which only requires the functional property. Function spaces The set of all partial functions from a set to a set denoted by is the union of all functions defined on subsets of with same codomain : the latter also written as In finite case, its cardinality is because any partial
and exceptions are suppressed, e.g. when the square root of a negative number is requested. In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function. In category theory In category theory, when considering the operation of morphism composition in concrete categories, the composition operation is a function if and only if has one element. The reason for this is that two morphisms and can only be composed as if that is, the codomain of must equal the domain of The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category. In abstract algebra Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined). The set of all partial functions (partial transformations) on a given base set, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on ), typically denoted by The set of all partial bijections on forms the symmetric inverse semigroup. Charts and atlases for manifolds and fiber bundles Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps. The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
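The integer square-root function mentioned earlier is a convenient illustration of how a partial function is typically represented in a programming language: as a total function into an option type. The sketch below uses hypothetical names and also shows composition of partial functions, which is defined only where both pieces are defined.

```python
from typing import Optional
from math import isqrt

def partial_sqrt(n: int) -> Optional[int]:
    """Partial function on the integers: defined only when n is a perfect square."""
    if n >= 0:
        r = isqrt(n)
        if r * r == n:
            return r
    return None                # "undefined" is modelled by None

def compose(g, f):
    """Composition of partial functions: defined at x only when
    f(x) is defined and g(f(x)) is defined."""
    def h(x):
        y = f(x)
        return None if y is None else g(y)
    return h

print(partial_sqrt(25))                          # 5    -- 25 is a perfect square
print(partial_sqrt(26))                          # None -- undefined there
print(compose(partial_sqrt, partial_sqrt)(625))  # 5: sqrt(sqrt(625)) = sqrt(25)
```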
electrode (E) exposed to the light, and a collector (C) whose voltage VC can be externally controlled. A positive external voltage is used to direct the photoemitted electrons onto the collector. If the frequency and the intensity of the incident radiation are fixed, the photoelectric current I increases with an increase in the positive voltage, as more and more electrons are directed onto the electrode. When no additional photoelectrons can be collected, the photoelectric current attains a saturation value. This current can only increase with the increase of the intensity of light. An increasing negative voltage prevents all but the highest-energy electrons from reaching the collector. When no current is observed through the tube, the negative voltage has reached the value that is high enough to slow down and stop the most energetic photoelectrons of kinetic energy Kmax. This value of the retarding voltage is called the stopping potential or cut off potential Vo. Since the work done by the retarding potential in stopping the electron of charge e is eVo, the following must hold eVo = Kmax. The current-voltage curve is sigmoidal, but its exact shape depends on the experimental geometry and the electrode material properties. For a given metal surface, there exists a certain minimum frequency of incident radiation below which no photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency of the incident beam increases the maximum kinetic energy of the emitted photoelectrons, and the stopping voltage has to increase. The number of emitted electrons may also change because the probability that each photon results in an emitted electron is a function of photon energy. An increase in the intensity of the same monochromatic light (so long as the intensity is not too high), which is proportional to the number of photons impinging on the surface in a given time, increases the rate at which electrons are ejected—the photoelectric current I—but the kinetic energy of the photoelectrons and the stopping voltage remain the same. For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light. The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10−9 second. Angular distribution of the photoelectrons is highly dependent on polarization (the direction of the electric field) of the incident light, as well as the emitting material's quantum properties such as atomic and molecular orbital symmetries and the electronic band structure of crystalline solids. In materials without macroscopic order, the distribution of electrons tends to peak in the direction of polarization of linearly polarized light. The experimental technique that can measure these distributions to infer the material's properties is angle-resolved photoemission spectroscopy. Theoretical explanation In 1905, Einstein proposed a theory of the photoelectric effect using a concept first put forward by Max Planck that light consists of tiny packets of energy known as photons or light quanta. Each packet carries energy that is proportional to the frequency of the corresponding electromagnetic wave. The proportionality constant has become known as the Planck constant. 
The maximum kinetic energy Kmax of the electrons that were delivered this much energy before being removed from their atomic binding is Kmax = hf − W, where h is the Planck constant, f is the frequency of the incident light, and W is the minimum energy required to remove an electron from the surface of the material. The latter is called the work function of the surface and is sometimes denoted W or φ. If the work function is written as W = hf0, the formula for the maximum kinetic energy of the ejected electrons becomes Kmax = h(f − f0). Kinetic energy is positive, and f > f0 is required for the photoelectric effect to occur. The frequency f0 is the threshold frequency for the given material. Above that frequency, the maximum kinetic energy of the photoelectrons as well as the stopping voltage in the experiment rise linearly with the frequency, and have no dependence on the number of photons and the intensity of the impinging monochromatic light. Einstein's formula, however simple, explained all the phenomenology of the photoelectric effect, and had far-reaching consequences in the development of quantum mechanics. Photoemission from atoms, molecules and solids Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta deliver more than this amount of energy to an individual electron, the electron may be emitted into free space, carrying the excess energy as kinetic energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from a state at binding energy EB is found at kinetic energy Ekin = hf − EB. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics. Models of photoemission from solids The electronic properties of ordered, crystalline solids are determined by the distribution of the electronic states with respect to energy and momentum—the electronic band structure of the solid. Theoretical models of photoemission from solids show that this distribution is, for the most part, preserved in the photoelectric effect. The phenomenological three-step model for ultraviolet and soft X-ray excitation decomposes the effect into these steps: Inner photoelectric effect in the bulk of the material that is a direct optical transition between an occupied and an unoccupied electronic state. This effect is subject to quantum-mechanical selection rules for dipole transitions. The hole left behind by the electron can give rise to secondary electron emission, or the so-called Auger effect, which may be visible even when the primary photoelectron does not leave the material. In molecular solids, phonons are excited in this step and may be visible as satellite lines in the final electron energy. Electron propagation to the surface, in which some electrons may be scattered because of interactions with other constituents of the solid. Electrons that originate deeper in the solid are much more likely to suffer collisions and emerge with altered energy and momentum. Their mean free path follows an approximately universal curve that depends on the electron's energy. Electron escape through the surface barrier into free-electron-like states of the vacuum. In this step the electron loses energy in the amount of the work function of the surface, and suffers a momentum loss in the direction perpendicular to the surface.
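As a rough worked example of the relations introduced above (eVo = Kmax and Kmax = hf − W), the following sketch computes the photon energy, the maximum kinetic energy, and the corresponding stopping potential. The numbers are illustrative assumptions: light of wavelength 400 nm and a work function of 2.3 eV, roughly that of sodium.

```python
# Illustrative numbers only: 400 nm light on a surface with an assumed
# work function of 2.3 eV (roughly that of sodium).
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
e = 1.602e-19        # elementary charge, C

wavelength = 400e-9          # m
work_function_eV = 2.3       # eV (assumed)

photon_energy_eV = h * c / wavelength / e       # hf, about 3.10 eV
k_max_eV = photon_energy_eV - work_function_eV  # Kmax = hf - W, about 0.80 eV
stopping_potential = k_max_eV                   # Vo in volts, since e*Vo = Kmax

print(f"photon energy         = {photon_energy_eV:.2f} eV")
print(f"Kmax                  = {k_max_eV:.2f} eV")
print(f"stopping potential Vo = {stopping_potential:.2f} V")
```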
Because the binding energy of electrons in solids is conveniently expressed with respect to the highest occupied state at the Fermi energy EF, and the difference between the Fermi energy and the free-space (vacuum) energy is the work function W of the surface, the kinetic energy of the electrons emitted from solids is usually written as Ekin = hf − W − EB. There are cases where the three-step model fails to explain peculiarities of the photoelectron intensity distributions. The more elaborate one-step model treats the effect as a coherent process of photoexcitation into the final state of a finite crystal for which the wave function is free-electron-like outside of the crystal, but has a decaying envelope inside. History 19th century In 1839, Alexandre Edmond Becquerel discovered the photovoltaic effect while studying the effect of light on electrolytic cells. Though not equivalent to the photoelectric effect, his work on photovoltaics was instrumental in showing a strong relationship between light and electronic properties of materials. In 1873, Willoughby Smith discovered photoconductivity in selenium while testing the material for its high resistance properties in conjunction with his work involving submarine telegraph cables. Julius Elster (1854–1920) and Hans Geitel (1855–1923), students in Heidelberg, investigated the effects produced by light on electrified bodies and developed the first practical photoelectric cells that could be used to measure the intensity of light. They arranged metals with respect to their power of discharging negative electricity: rubidium, potassium, alloy of potassium and sodium, sodium, lithium, magnesium, thallium and zinc; for copper, platinum, lead, iron, cadmium, carbon, and mercury the effects with ordinary light were too small to be measurable. The order of the metals for this effect was the same as in Volta's series for contact-electricity, the most electropositive metals giving the largest photo-electric effect. In 1887, Heinrich Hertz observed the photoelectric effect and reported on the production and reception of electromagnetic waves. The receiver in his apparatus consisted of a coil with a spark gap, where a spark would be seen upon detection of electromagnetic waves. He placed the apparatus in a darkened box to see the spark better. However, he noticed that the maximum spark length was reduced when inside the box. A glass panel placed between the source of electromagnetic waves and the receiver absorbed ultraviolet radiation that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he replaced the glass with quartz, as quartz does not absorb UV radiation. The discoveries by Hertz led to a series of investigations by Hallwachs, Hoor, Righi and Stoletov on the effect of light, and especially of ultraviolet light, on charged bodies. Hallwachs connected a zinc plate to an electroscope. He allowed ultraviolet light to fall on a freshly cleaned zinc plate and observed that the zinc plate became uncharged if initially negatively charged, positively charged if initially uncharged, and more positively charged if initially positively charged. From these observations he concluded that some negatively charged particles were emitted by the zinc plate when exposed to ultraviolet light. With regard to the Hertz effect, the researchers from the start showed the complexity of the phenomenon of photoelectric fatigue—the progressive diminution of the effect observed upon fresh metallic surfaces.
According to Hallwachs, ozone played an important part in the phenomenon, and the emission was influenced by oxidation, humidity, and the degree of polishing of the surface. It was at the time unclear whether fatigue was also absent in a vacuum. In the period from 1888 until 1891, a detailed analysis of the photoeffect was performed by Aleksandr Stoletov with results reported in six publications. Stoletov invented a new experimental setup which was more suitable for a quantitative analysis of the photoeffect. He discovered a direct proportionality between the intensity of light and the induced photoelectric current (the first law of photoeffect or Stoletov's law). He measured the dependence of the intensity of the photoelectric current on the gas pressure, where he found the existence of an optimal gas pressure corresponding to a maximum photocurrent; this property was used for the creation of solar cells. Many substances besides metals discharge negative electricity under the action of ultraviolet light. G. C. Schmidt and O. Knoblauch compiled a list of these substances. In 1899, J. J. Thomson investigated ultraviolet light in Crookes tubes. Thomson deduced that the ejected particles, which he called corpuscles, were of the same nature as cathode rays. These particles later became known as electrons. Thomson enclosed a metal plate (a cathode) in a vacuum tube, and exposed it to high-frequency radiation. It was thought that the oscillating electromagnetic fields caused the atoms' field to resonate and, after reaching a certain amplitude, caused subatomic corpuscles to be emitted, and current to be detected. The amount of this current varied with the intensity and color of the radiation. Larger radiation intensity or frequency would produce more current. During the years 1886–1902, Wilhelm Hallwachs and Philipp Lenard investigated the phenomenon of photoelectric emission in detail. Lenard observed that a current flows through an evacuated glass tube enclosing two electrodes when ultraviolet radiation falls on one of them. As soon as ultraviolet radiation is stopped, the current also stops. This initiated the concept of photoelectric emission. The discovery of the ionization of gases by ultraviolet light was made by Philipp Lenard in 1900. As the effect was produced across several centimeters of air and yielded a greater number of positive ions than negative, it was natural to interpret the phenomenon, as J. J. Thomson did, as a Hertz effect upon the particles present in the gas.
For example, an increase in frequency results in an increase in the maximum kinetic energy calculated for an electron upon liberation – ultraviolet radiation would require a higher applied stopping potential to stop current in a phototube than blue light. However, Lenard's results were qualitative rather than quantitative because of the difficulty in performing the experiments: the experiments needed to be done on freshly cut metal so that the pure metal was observed, but it oxidized in a matter of minutes even in the partial vacuums he used. The current emitted by the surface was determined by the light's intensity, or brightness: doubling the intensity of the light doubled the number of electrons emitted from the surface. The researches of Langevin and those of Eugene Bloch have shown that the greater part of the Lenard effect is certainly due to the Hertz effect. The Lenard effect upon the gas itself nevertheless does exist. Refound by J. J. Thomson and then more decisively by Frederic Palmer, Jr., the gas photoemission was studied and showed very different characteristics than those at first attributed to it by Lenard. In 1900, while studying black-body radiation, the German physicist Max Planck suggested in his "On the Law of Distribution of Energy in the Normal Spectrum" paper that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called Planck's constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a key step in the development of quantum mechanics.
global climate during the Paleogene departed from the hot and humid conditions of the late Mesozoic Era and began a cooling and drying trend. Though periodically disrupted by warm periods, such as the Paleocene–Eocene Thermal Maximum, this trend persisted until the end of the most recent glacial period of the current ice age, when temperatures began to rise again. The trend was partly caused by the formation of the Antarctic Circumpolar Current, which significantly lowered oceanic water temperatures. A 2018 study estimated that during the early Palaeogene, about 56–48 million years ago, annual air temperatures over land and at mid-latitude averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C higher than most previous estimates. For comparison, this was 10 to 15 °C higher than the current annual mean temperatures in these areas. The authors suggest that the current atmospheric carbon dioxide trajectory, if it continues, could establish these temperatures again. During the Paleogene, the continents continued to drift closer to their current positions. India was in the process of colliding with Asia, forming the Himalayas. The Atlantic Ocean continued to widen by a few centimeters each year. Africa was moving north to collide with Europe and form the Mediterranean Sea, while South America was moving closer to North America (they would later connect via the Isthmus of Panama). Inland seas retreated from North America early in the period. Australia had also separated from Antarctica and was drifting toward Southeast Asia.
The Paleogene (also spelled Palaeogene or Palæogene; informally Lower Tertiary or Early Tertiary) is a geologic period and system that spans 43 million years from the end of the Cretaceous Period 66 million years ago (Mya) to the beginning of the Neogene Period 23.03 Mya. It is the beginning of the Cenozoic Era of the present Phanerozoic Eon. The earlier term Tertiary Period was used to define the span of time now covered by the Paleogene and subsequent Neogene Periods; despite no longer being recognised as a formal stratigraphic term, 'Tertiary' is still widely found in earth science literature and remains in informal use. The Paleogene is most notable for being the time during which mammals diversified from relatively small, simple forms into a large group of diverse animals in the wake of the Cretaceous–Paleogene extinction event that ended the preceding Cretaceous Period. The United States Geological Survey uses the abbreviation PE for the Paleogene, but the more commonly used abbreviation is PG, with PE being used for the Paleocene, an epoch within the Paleogene. This period consists of the Paleocene, Eocene, and Oligocene epochs. The end of the Paleocene (55.5/54.8 Mya) was marked by the Paleocene–Eocene Thermal Maximum, one of the most significant periods of global change during the Cenozoic, which upset oceanic and atmospheric circulation and led to the extinction of numerous deep-sea benthic foraminifera and, on land, a major turnover in mammals.
A binary relation < is a strict preorder if and only if it is a strict partial order. By definition, a strict partial order is an asymmetric strict preorder, where < is called asymmetric if a < b implies that b < a does not hold, for all a and b. Conversely, every strict preorder is a strict partial order because every transitive irreflexive relation is necessarily asymmetric. Although they are equivalent, the term "strict partial order" is typically preferred over "strict preorder" and readers are referred to the article on strict partial orders for details about such relations. In contrast to strict preorders, there are many (non-strict) preorders that are (non-strict) partial orders. Related definitions If a preorder ≤ is also antisymmetric, that is, a ≤ b and b ≤ a implies a = b, then it is a partial order. On the other hand, if it is symmetric, that is, if a ≤ b implies b ≤ a, then it is an equivalence relation. A preorder ≤ is total if a ≤ b or b ≤ a for all a and b. The notion of a preordered set can be formulated in a categorical framework as a thin category; that is, as a category with at most one morphism from an object to another. Here the objects correspond to the elements of the preordered set, and there is one morphism for objects which are related, zero otherwise. Alternately, a preordered set can be understood as an enriched category, enriched over the category 2 with two objects and a single non-identity morphism 0 → 1. A preordered class is a class equipped with a preorder. Every set is a class and so every preordered set is a preordered class. Examples The reachability relationship in any directed graph (possibly containing cycles) gives rise to a preorder, where x ≤ y in the preorder if and only if there is a path from x to y in the directed graph. Conversely, every preorder is the reachability relationship of a directed graph (for instance, the graph that has an edge from x to y for every pair (x, y) with x ≤ y). However, many different graphs may have the same reachability preorder as each other. In the same way, reachability of directed acyclic graphs, directed graphs with no cycles, gives rise to partially ordered sets (preorders satisfying an additional antisymmetry property). Every finite topological space gives rise to a preorder on its points by defining x ≤ y if and only if x belongs to every neighborhood of y. Every finite preorder can be formed as the specialization preorder of a topological space in this way. That is, there is a one-to-one correspondence between finite topologies and finite preorders. However, the relation between infinite topological spaces and their specialization preorders is not one-to-one. A net is a directed preorder, that is, each pair of elements has an upper bound. The definition of convergence via nets is important in topology, where preorders cannot be replaced by partially ordered sets without losing important features. Further examples: The relation defined by x ≤ y if f(x) ≤ f(y), where f is a function into some preorder. The relation defined by x ≤ y if there exists some injection from x to y. Injection may be replaced by surjection, or any type of structure-preserving function, such as a ring homomorphism or a permutation. The embedding relation for countable total orderings. The graph-minor relation in graph theory. A category with at most one morphism from any object x to any other object y is a preorder. Such categories are called thin. In this sense, categories "generalize" preorders by allowing more than one relation between objects: each morphism is a distinct (named) preorder relation. In computer science, one can find examples of the following preorders. Many-one and Turing reductions are preorders on complexity classes. The subtyping relations are usually preorders.
Simulation preorders are preorders (hence the name). Reduction relations in abstract rewriting systems. The encompassment preorder on the set of terms, defined by s ≤ t if a subterm of t is a substitution instance of s. Example of a total preorder: Preference, according to common models. Uses Preorders play a pivotal role in several situations: Every preorder can be given a topology, the Alexandrov topology; and indeed, every preorder on a set is in one-to-one correspondence with an Alexandrov topology on that set. Preorders may be used to define interior algebras. Preorders provide the Kripke semantics for certain types of modal logic. Preorders are used in forcing in set theory to prove consistency and independence results. Constructions Every binary relation R on a set S can be extended to a preorder on S by taking first the transitive closure and then the reflexive closure. The transitive closure indicates path connection in R: x R⁺ y if and only if there is an R-path from x to y. Left residual preorder induced by a binary relation Given a binary relation R, the complemented composition R\R = ¬(Rᵀ ∘ ¬R) forms a preorder called the left residual, where Rᵀ denotes the converse relation of R and ¬R denotes the complement relation of R, while ∘ denotes relation composition.
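The closure construction just described is easy to compute for a finite set. Below is a small sketch (the function name `preorder_closure` and the example relation are illustrative) that extends an arbitrary binary relation on a finite set to the smallest preorder containing it, using a Floyd-Warshall-style pass for the transitive closure and then adding the reflexive pairs. Applied to the edge set of a finite directed graph, it returns exactly the reachability preorder mentioned among the examples above.

```python
# Sketch: extend a binary relation on a finite set to the smallest
# preorder containing it (reflexive-transitive closure).
def preorder_closure(elements, relation):
    """elements: iterable of hashable items; relation: set of (x, y) pairs.

    Returns the smallest reflexive and transitive relation containing
    `relation`, i.e. the preorder generated by it.
    """
    elements = list(elements)
    closure = set(relation)
    # Transitive closure (Floyd-Warshall style over pairs).
    for k in elements:
        for i in elements:
            for j in elements:
                if (i, k) in closure and (k, j) in closure:
                    closure.add((i, j))
    # Reflexive closure.
    closure.update((x, x) for x in elements)
    return closure

# Example: reachability preorder of a small directed graph with a cycle.
edges = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}
reach = preorder_closure("abcd", edges)
print(("a", "d") in reach)   # True: a -> b -> c -> d
print(("d", "a") in reach)   # False: no path back from d
```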
which are special cases of a preorder: an antisymmetric preorder is a partial order, and a symmetric preorder is an equivalence relation. The name comes from the idea that preorders (that are not partial orders) are 'almost' (partial) orders, but not quite; they are neither necessarily antisymmetric nor asymmetric. Because a preorder is a binary relation, the symbol ≤ can be used as the notational device for the relation. However, because preorders are not necessarily antisymmetric, some of the ordinary intuition associated with the symbol ≤ may not apply. On the other hand, a preorder can be used, in a straightforward fashion, to define a partial order and an equivalence relation. Doing so, however, is not always useful or worthwhile, depending on the problem domain being studied. In words, when a ≤ b, one may say that b covers a, or that a precedes b, or that b reduces to a. Occasionally, the notation ← or → or ≲ is used instead of ≤. To every preorder, there corresponds a directed graph, with elements of the set corresponding to vertices, and the order relation between pairs of elements corresponding to the directed edges between vertices. The converse is not true: most directed graphs are neither reflexive nor transitive. In general, the corresponding graphs may contain cycles. A preorder that is antisymmetric no longer has cycles; it is a partial order, and corresponds to a directed acyclic graph. A preorder that is symmetric is an equivalence relation; it can be thought of as having lost the direction markers on the edges of the graph. In general, a preorder's corresponding directed graph may have many disconnected components. Formal definition Consider a homogeneous relation ≤ on some given set P, so that by definition, ≤ is some subset of P × P and the notation a ≤ b is used in place of (a, b) ∈ ≤. Then ≤ is called a preorder or quasiorder if it is reflexive and transitive; that is, if it satisfies: Reflexivity: a ≤ a for all a in P, and Transitivity: if a ≤ b and b ≤ c then a ≤ c, for all a, b, c in P. A set that is equipped with a preorder is called a preordered set (or proset). For emphasis or contrast to strict preorders, a preorder may also be referred to as a non-strict preorder. If reflexivity is replaced with irreflexivity (while keeping transitivity) then the result is called a strict preorder; explicitly, a strict preorder on P is a homogeneous binary relation < on P that satisfies the following conditions: Irreflexivity or anti-reflexivity: not a < a for all a in P, that is, a < a is false for all a in P, and Transitivity: if a < b and b < c then a < c, for all a, b, c in P.
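Directly encoding the definition above, the following sketch (the function names are illustrative) checks whether a finite relation is reflexive and transitive, i.e. a preorder, and whether it is a strict preorder (irreflexive and transitive). The divisibility example is an assumption chosen for illustration.

```python
# Sketch: check the defining properties of a (strict) preorder on a
# finite set, directly from the definitions above.
def is_preorder(elements, rel):
    reflexive = all((a, a) in rel for a in elements)
    transitive = all(
        (a, c) in rel
        for (a, b) in rel
        for (b2, c) in rel
        if b == b2
    )
    return reflexive and transitive

def is_strict_preorder(elements, rel):
    irreflexive = all((a, a) not in rel for a in elements)
    transitive = all(
        (a, c) in rel
        for (a, b) in rel
        for (b2, c) in rel
        if b == b2
    )
    return irreflexive and transitive

# Divisibility on {1, ..., 6}: reflexive and transitive, hence a preorder,
# but not a strict preorder because it is reflexive.
divides = {(a, b) for a in range(1, 7) for b in range(1, 7) if b % a == 0}
print(is_preorder(range(1, 7), divides))          # True
print(is_strict_preorder(range(1, 7), divides))   # False
```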
unconscious adaptation to reality. Langs' recent work in some measure returns to the earlier Freud, in that Langs prefers a modified version of the topographic model of the mind (conscious, preconscious, and unconscious) over the structural model (id, ego, and super-ego), including the former's emphasis on trauma (though Langs looks to death-related traumas rather than sexual traumas). At the same time, Langs' model of the mind differs from Freud's in that it understands the mind in terms of evolutionary biological principles. Relational psychoanalysis Relational psychoanalysis combines interpersonal psychoanalysis with object-relations theory and with intersubjective theory, which it regards as critical for mental health. It was introduced by Stephen Mitchell. Relational psychoanalysis stresses how the individual's personality is shaped by both real and imagined relationships with others, and how these relationship patterns are re-enacted in the interactions between analyst and patient. In New York, key proponents of relational psychoanalysis include Lew Aron, Jessica Benjamin, and Adrienne Harris. Fonagy and Target, in London, have propounded their view of the necessity of helping certain detached, isolated patients develop the capacity for "mentalization" associated with thinking about relationships and themselves. Arietta Slade, Susan Coates, and Daniel Schechter in New York have additionally contributed to the application of relational psychoanalysis to treatment of the adult patient-as-parent, the clinical study of mentalization in parent-infant relationships, and the intergenerational transmission of attachment and trauma. Interpersonal-relational psychoanalysis The term interpersonal-relational psychoanalysis is often used as a professional identification. Psychoanalysts under this broader umbrella debate about what precisely are the differences between the two schools, without any current clear consensus. Psychopathology (mental disturbances) Adults The various psychoses involve deficits in the autonomous ego functions (see above) of integration (organization) of thought, in abstraction ability, in relationship to reality and in reality testing. In depressions with psychotic features, the self-preservation function may also be damaged (sometimes by overwhelming depressive affect). Because of the integrative deficits (often causing what general psychiatrists call "loose associations", "blocking", "flight of ideas", "verbigeration", and "thought withdrawal"), the development of self and object representations is also impaired. Clinically, therefore, psychotic individuals manifest limitations in warmth, empathy, trust, identity, closeness and/or stability in relationships (due to problems with self-object fusion anxiety) as well. In patients whose autonomous ego functions are more intact, but who still show problems with object relations, the diagnosis often falls into the category known as "borderline". Borderline patients also show deficits, often in controlling impulses, affects, or fantasies – but their ability to test reality remains more or less intact. Adults who do not experience guilt and shame, and who indulge in criminal behavior, are usually diagnosed with psychopathy or antisocial personality disorder. Neurotic symptoms—including panic, phobias, conversions, obsessions, compulsions and depressions—are not usually caused by deficits in functions. Instead, they are caused by intrapsychic conflicts.
The conflicts are generally among sexual and hostile-aggressive wishes, guilt and shame, and reality factors. The conflicts may be conscious or unconscious, but create anxiety, depressive affect, and anger. Finally, the various elements are managed by defensive operations—essentially shut-off brain mechanisms that make people unaware of that element of conflict. Repression is the term given to the mechanism that shuts thoughts out of consciousness. Isolation of affect is the term used for the mechanism that shuts sensations out of consciousness. Neurotic symptoms may occur with or without deficits in ego functions, object relations, and ego strengths. Therefore, it is not uncommon to encounter people with obsessive-compulsive disorder and schizophrenia, or patients with panic disorder who also have borderline personality disorder, etc. The section above is partial to ego psychology's account of autonomous ego functions. Childhood origins Freudian theories hold that adult problems can be traced to unresolved conflicts from certain phases of childhood and adolescence, caused by fantasy, stemming from their own drives. Freud, based on the data gathered from his patients early in his career, suspected that neurotic disturbances occurred when children were sexually abused in childhood (i.e. the seduction theory). Later, Freud came to believe that, although child abuse occurs, neurotic symptoms were not associated with this. He believed that neurotic people often had unconscious conflicts that involved incestuous fantasies deriving from different stages of development. He found the stage from about three to six years of age (preschool years, today called the "first genital stage") to be filled with fantasies of having romantic relationships with both parents. Arguments were quickly generated in early 20th-century Vienna about whether adult seduction of children, i.e. child sexual abuse, was the basis of neurotic illness. There still is no complete agreement, although nowadays professionals recognize the negative effects of child sexual abuse on mental health. Oedipal conflicts Many psychoanalysts who work with children have studied the actual effects of child abuse, which include ego and object relations deficits and severe neurotic conflicts. Much research has been done on these types of trauma in childhood, and on their adult sequelae. In studying the childhood factors that start neurotic symptom development, Freud found a constellation of factors that, for literary reasons, he termed the Oedipus complex, based on the play by Sophocles, Oedipus Rex, in which the protagonist unwittingly kills his father and marries his mother. The validity of the Oedipus complex is now widely disputed and rejected. The shorthand term, oedipal—later explicated by Joseph J. Sandler in "On the Concept of Superego" (1960) and modified by Charles Brenner in The Mind in Conflict (1982)—refers to the powerful attachments that children make to their parents in the preschool years. These attachments involve fantasies of sexual relationships with either (or both) parent, and, therefore, competitive fantasies toward either (or both) parents. Humberto Nagera (1975) has been particularly helpful in clarifying many of the complexities of the child through these years. "Positive" and "negative" oedipal conflicts have been attached to the heterosexual and homosexual aspects, respectively. Both seem to occur in development of most children.
Eventually, the developing child's concessions to reality (that they will neither marry one parent nor eliminate the other) lead to identifications with parental values. These identifications generally create a new set of mental operations regarding values and guilt, subsumed under the term superego. Besides superego development, children "resolve" their preschool oedipal conflicts through channeling wishes into something their parents approve of ("sublimation") and the development, during the school-age years ("latency"), of age-appropriate obsessive-compulsive defensive maneuvers (rules, repetitive games). Treatment Using the various analytic and psychological techniques to assess mental problems, some believe that there are particular constellations of problems that are especially suited for analytic treatment (see below), whereas other problems might respond better to medicines and other interpersonal interventions. To be treated with psychoanalysis, whatever the presenting problem, the person requesting help must demonstrate a desire to start an analysis. The person wishing to start an analysis must have some capacity for speech and communication. As well, they need to be able to have or develop trust and insight within the psychoanalytic session. Potential patients must undergo a preliminary stage of treatment to assess their amenability to psychoanalysis at that time, and also to enable the analyst to form a working psychological model, which the analyst will use to direct the treatment. Psychoanalysts mainly work with neurosis and hysteria in particular; however, adapted forms of psychoanalysis are used in working with schizophrenia and other forms of psychosis or mental disorder. Finally, if a prospective patient is severely suicidal, a longer preliminary stage may be employed, sometimes with sessions which have a twenty-minute break in the middle. There are numerous modifications in technique under the heading of psychoanalysis due to the individualistic nature of personality in both analyst and patient. The most common problems treatable with psychoanalysis include: phobias, conversions, compulsions, obsessions, anxiety attacks, depressions, sexual dysfunctions, a wide variety of relationship problems (such as dating and marital strife), and a wide variety of character problems (for example, painful shyness, meanness, obnoxiousness, workaholism, hyperseductiveness, hyperemotionality, hyperfastidiousness). The fact that many such patients also demonstrate the deficits described above makes diagnosis and treatment selection difficult. Analytical organizations such as the IPA, APsaA and the European Federation for Psychoanalytic Psychotherapy have established procedures and models for the indication and practice of psychoanalytical therapy for trainees in analysis. The match between the analyst and the patient can be viewed as another contributing factor for the indication and contraindication for psychoanalytic treatment. The analyst decides whether the patient is suitable for psychoanalysis. This decision by the analyst, besides being based on the usual indications and pathology, also rests to a certain degree on the "fit" between analyst and patient. A person's suitability for analysis at any particular time is based on their desire to know something about where their illness has come from. Someone who is not suitable for analysis expresses no desire to know more about the root causes of their illness.
An evaluation may include one or more other analysts' independent opinions and will include discussion of the patient's financial situation and insurance. Techniques The basic method of psychoanalysis is interpretation of the patient's unconscious conflicts that are interfering with current-day functioning – conflicts that are causing painful symptoms such as phobias, anxiety, depression, and compulsions. Strachey (1936) stressed that figuring out ways the patient distorted perceptions about the analyst led to understanding what may have been forgotten. In particular, unconscious hostile feelings toward the analyst could be found in symbolic, negative reactions to what Robert Langs later called the "frame" of the therapy—the setup that included times of the sessions, payment of fees, and necessity of talking. In patients who make mistakes, forget, or show other peculiarities regarding time, fees, and talking, the analyst can usually find various unconscious "resistances" to the flow of thoughts (also known as free association). When the patient reclines on a couch with the analyst out of view, the patient tends to remember more experiences, more resistance and transference, and is able to reorganize thoughts after the development of insight – through the interpretive work of the analyst. Although fantasy life can be understood through the examination of dreams, masturbation fantasies are also important. The analyst is interested in how the patient reacts to and avoids such fantasies. Various memories of early life are generally distorted—what Freud called screen memories—and in any case, very early experiences (before age two) cannot be remembered. Variations in technique There is what is known among psychoanalysts as classical technique, although Freud throughout his writings deviated from this considerably, depending on the problems of any given patient. Classical technique was summarized by Allan Compton as comprising: instructions: telling the patient to try to say what's on their mind, including interferences; exploration: asking questions; and clarification: rephrasing and summarizing what the patient has been describing. As well, the analyst can also use confrontation to bring an aspect of functioning, usually a defense, to the patient's attention. The analyst then uses a variety of interpretation methods, such as: Dynamic interpretation: explaining how being too nice guards against guilt (e.g. defense vs. affect); Genetic interpretation: explaining how a past event is influencing the present; Resistance interpretation: showing the patient how they are avoiding their problems; Transference interpretation: showing the patient ways old conflicts arise in current relationships, including that with the analyst; or Dream interpretation: obtaining the patient's thoughts about their dreams and connecting this with their current problems. Analysts can also use reconstruction to estimate what may have happened in the past that created some current issue. These techniques are primarily based on conflict theory (see above). As object relations theory evolved, supplemented by the work of John Bowlby and Mary Ainsworth, techniques with patients who had more severe problems with basic trust (Erikson, 1950) and a history of maternal deprivation (see the works of Augusta Alpert) led to new techniques with adults. These have sometimes been called interpersonal, intersubjective (cf. Stolorow), relational, or corrective object relations techniques.
These techniques include expressing an empathic attunement to the patient or warmth; exposing a bit of the analyst's personal life or attitudes to the patient; allowing the patient autonomy in the form of disagreement with the analyst (cf. I. H. Paul, Letters to Simon); and explaining the motivations of others which the patient misperceives. Ego psychological concepts of deficit in functioning led to refinements in supportive therapy. These techniques are particularly applicable to psychotic and near-psychotic (cf., Eric Marcus, "Psychosis and Near-psychosis") patients. These supportive therapy techniques include discussions of reality; encouragement to stay alive (including hospitalization); psychotropic medicines to relieve overwhelming depressive affect or overwhelming fantasies (hallucinations and delusions); and advice about the meanings of things (to counter abstraction failures). The notion of the "silent analyst" has been criticized. Actually, the analyst listens using Arlow's approach as set out in "The Genesis of Interpretation", using active intervention to interpret resistances, defenses creating pathology, and fantasies. Silence is not a technique of psychoanalysis (see also the studies and opinion papers of Owen Renik). "Analytic neutrality" is a concept that does not mean the analyst is silent. It refers to the analyst's position of not taking sides in the internal struggles of the patient. For example, if a patient feels guilty, the analyst might explore what the patient has been doing or thinking that causes the guilt, but not reassure the patient not to feel guilty. The analyst might also explore the identifications with parents and others that led to the guilt. Interpersonal–relational psychoanalysts emphasize the notion that it is impossible to be neutral. Sullivan introduced the term participant-observer to indicate the analyst inevitably interacts with the analysand, and suggested the detailed inquiry as an alternative to interpretation. The detailed inquiry involves noting where the analysand is leaving out important elements of an account and noting when the story is obfuscated, and asking careful questions to open up the dialogue. Group therapy and play therapy Although single-client sessions remain the norm, psychoanalytic theory has been used to develop other types of psychological treatment. Psychoanalytic group therapy was pioneered by Trigant Burrow, Joseph Pratt, Paul F. Schilder, Samuel R. Slavson, Harry Stack Sullivan, and Wolfe. Child-centered counseling for parents was instituted early in analytic history by Freud, and was later further developed by Irwin Marcus, Edith Schulhofer, and Gilbert Kliman. Psychoanalytically based couples therapy has been promulgated and explicated by Fred Sander. Techniques and tools developed in the first decade of the 21st century have made psychoanalysis available to patients who were not treatable by earlier techniques. This meant that the analytic situation was modified so that it would be more suitable and more likely to be helpful for these patients. Eagle (2007) believes that psychoanalysis cannot be a self-contained discipline but instead must be open to influence from and integration with findings and theory from other disciplines. Psychoanalytic constructs have been adapted for use with children with treatments such as play therapy, art therapy, and storytelling. Throughout her career, from the 1920s through the 1970s, Anna Freud adapted psychoanalysis for children through play. 
This is still used today for children, especially those who are preadolescent. Using toys and games, children are able to symbolically demonstrate their fears, fantasies, and defenses; although not identical, this technique, in children, is analogous to the aim of free association in adults. Psychoanalytic play therapy allows the child and analyst to understand children's conflicts, particularly defenses such as disobedience and withdrawal, that have been guarding against various unpleasant feelings and hostile wishes. In art therapy, the counselor may have a child draw a portrait and then tell a story about the portrait. The counselor watches for recurring themes—regardless of whether it is with art or toys. Cultural variations Psychoanalysis can be adapted to different cultures, as long as the therapist or counselor understands the client's culture. For example, Tori and Blimes found that defense mechanisms were valid in a normative sample of 2,624 Thais. The use of certain defense mechanisms was related to cultural values. For example, Thais value calmness and collectiveness (because of Buddhist beliefs), so they were low on regressive emotionality. Psychoanalysis also applies because Freud used techniques that allowed him to get the subjective perceptions of his patients. He took an objective approach by not facing his clients during his talk therapy sessions. He met with his patients wherever they were, such as when he used free association—where clients would say whatever came to mind without self-censorship. His treatments had little to no structure for most cultures, especially Asian cultures. Therefore, it is more likely that Freudian constructs will be used in structured therapy. In addition, Corey postulates that it will be necessary for a therapist to help clients develop a cultural identity as well as an ego identity. Psychodynamic therapy Psychodynamic therapies refer to therapies that draw from psychoanalytic approaches but are designed to be shorter in duration or less intensive. Cost and length of treatment The cost to the patient of psychoanalytic treatment ranges widely from place to place and between practitioners. Low-fee analysis is often available in psychoanalytic training clinics and graduate schools. Otherwise, the fee set by each analyst varies with the analyst's training and experience. Since, in most locations in the United States, unlike in Ontario and Germany, classical analysis (which usually requires sessions three to five times per week) is not covered by health insurance, many analysts may negotiate their fees with patients whom they feel they can help, but who have financial difficulties. The modifications of analysis, which include psychodynamic therapy, brief therapies, and certain types of group therapy, are carried out on a less frequent basis—usually once, twice, or three times a week—and usually the patient sits facing the therapist. As a result of the defense mechanisms and the lack of access to the unfathomable elements of the unconscious, psychoanalysis can be an expensive process that involves 2 to 5 sessions per week for several years. This type of therapy relies on the belief that reducing the symptoms will not actually help with the root causes or irrational drives. The analyst typically is a 'blank screen', disclosing very little about themselves in order that the client can use the space in the relationship to work on their unconscious without interference from outside.
The psychoanalyst uses various methods to help the patient become more self-aware and to develop insights into their behavior and into the meanings of symptoms. First and foremost, the psychoanalyst attempts to develop a confidential atmosphere in which the patient can feel safe reporting his feelings, thoughts and fantasies. Analysands (as people in analysis are called) are asked to report whatever comes to mind without fear of reprisal. Freud called this the "fundamental rule". Analysands are asked to talk about their lives, including their early life, current life and hopes and aspirations for the future. They are encouraged to report their fantasies, "flash thoughts" and dreams. In fact, Freud believed that dreams were "the royal road to the unconscious"; he devoted an entire volume to the interpretation of dreams. Freud had his patients lie on a couch in a dimly lit room and would sit out of sight, usually directly behind them, so as not to influence the patient's thoughts by his gestures or expressions. The psychoanalyst's task, in collaboration with the analysand, is to help deepen the analysand's understanding of those factors, outside of his awareness, that drive his behaviors. In the safe environment of the psychoanalytic setting, the analysand becomes attached to the analyst and soon begins to experience the same conflicts with his analyst that he experiences with key figures in his life such as his parents, his boss, or his significant other. It is the psychoanalyst's role to point out these conflicts and to interpret them. The transferring of these internal conflicts onto the analyst is called "transference". Many studies have also been done on briefer "dynamic" treatments; these are more expedient to measure, and shed light on the therapeutic process to some extent. Brief Relational Therapy (BRT), Brief Psychodynamic Therapy (BPT), and Time-Limited Dynamic Therapy (TLDP) limit treatment to 20–30 sessions. On average, classical analysis may last 5.7 years, but for phobias and depressions uncomplicated by ego deficits or object relations deficits, analysis may run for a shorter period of time. Longer analyses are indicated for those with more serious disturbances in object relations, more symptoms, and more ingrained character pathology. Training and research Psychoanalysis continues to be practiced by psychiatrists, social workers, and other mental health professionals; however, its practice has declined, having been largely displaced since the mid-20th century by the similar but broader psychodynamic psychotherapy. Psychoanalytic approaches continue to be listed by the UK National Health Service as possibly helpful for depression. United States Psychoanalytic training in the United States involves a personal psychoanalysis for the trainee, approximately 600 hours of class instruction with a standard curriculum, over a four- or five-year period. Typically, this psychoanalysis must be conducted by a Supervising and Training Analyst. Most institutes (but not all) within the American Psychoanalytic Association require that Supervising and Training Analysts become certified by the American Board of Psychoanalysts. Certification entails a blind review in which the psychoanalyst's work is vetted by psychoanalysts outside of their local community. After earning certification, these psychoanalysts must clear another hurdle: they are specially vetted by senior members of their own institute.
Supervising and Training Analysts are held to the highest clinical and ethical standards. Moreover, they are required to have extensive experience conducting psychoanalyses. Similarly, class instruction for psychoanalytic candidates is rigorous. Typically classes meet several hours a week, or for a full day or two every other weekend during the academic year; this varies with the institute. Candidates generally have an hour of supervision each week, with a Supervising and Training Analyst, on each psychoanalytic case. The minimum number of cases varies between institutes, often two to four cases. Male and female cases are required. Supervision must go on for at least a few years on one or more cases. Supervision is done in the supervisor's office, where the trainee presents material from the psychoanalytic work that week. In supervision, the patient's unconscious conflicts are explored and transference-countertransference constellations are examined; clinical technique is also taught. Many psychoanalytic training centers in the United States have been accredited by special committees of the APsaA or the IPA. Because of theoretical differences, there are independent institutes, usually founded by psychologists, who until 1987 were not permitted access to psychoanalytic training institutes of the APsaA. Currently there are between 75 and 100 independent institutes in the United States. As well, other institutes are affiliated with other organizations such as the American Academy of Psychoanalysis and Dynamic Psychiatry, and the National Association for the Advancement of Psychoanalysis. At most psychoanalytic institutes in the United States, qualifications for entry include a terminal degree in a mental health field, such as Ph.D., Psy.D., M.S.W., or M.D. A few institutes restrict applicants to those already holding an M.D. or Ph.D., and most institutes in Southern California confer a Ph.D. or Psy.D. in psychoanalysis upon graduation, which involves completion of the necessary requirements for the state boards that confer that doctoral degree. The first training institute in America to educate non-medical psychoanalysts was The National Psychological Association for Psychoanalysis (1978) in New York City. It was founded by the analyst Theodor Reik. The Contemporary Freudian Society (originally the New York Freudian Society), an offshoot of the National Psychological Association, has a branch in Washington, DC. It is a component society/institute of the IPA. Some psychoanalytic training has been set up as a post-doctoral fellowship in university settings, such as at Duke University, Yale University, New York University, Adelphi University and Columbia University. Other psychoanalytic institutes may not be directly associated with universities, but the faculty at those institutes usually hold contemporaneous faculty positions with psychology Ph.D. programs and/or with medical school psychiatry residency programs. The IPA is the world's primary accrediting and regulatory body for psychoanalysis. Its mission is to assure the continued vigor and development of psychoanalysis for the benefit of psychoanalytic patients. It works in partnership with its 70 constituent organizations in 33 countries to support 11,500 members. In the US, there are 77 psychoanalytic organizations, institutes, and associations, spread across the country. APsaA has 38 affiliated societies, each of which has 10 or more active members who practice in a given geographical area.
The aims of APsaA and other psychoanalytic organizations are to provide ongoing educational opportunities for their members, to stimulate the development of and research into psychoanalysis, to provide training, and to organize conferences. There are eight affiliated study groups in the United States. A study group is the first level of integration of a psychoanalytical body within the IPA, followed by a provisional society and finally a member society. The Division of Psychoanalysis (39) of the American Psychological Association (APA) was established in the early 1980s by several psychologists. Until the establishment of the Division of Psychoanalysis, psychologists who had trained in independent institutes had no national organization. The Division of Psychoanalysis now has approximately 4,000 members and approximately 30 local chapters in the United States. The Division of Psychoanalysis holds two annual meetings or conferences and offers continuing education in theory, research and clinical technique, as do their affiliated local chapters. The European Psychoanalytical Federation (EPF) is the organization which consolidates all European psychoanalytic societies. This organization is affiliated with the IPA. In 2002, there were approximately 3,900 individual members in 22 countries, speaking 18 different languages. There are also 25 psychoanalytic societies. The American Association of Psychoanalysis in Clinical Social Work (AAPCSW) was established by Crayton Rowe in 1980 as a division of the Federation of Clinical Societies of Social Work and became an independent entity in 1990. Until 2007 it was known as the National Membership Committee on Psychoanalysis. The organization was founded because although social workers represented the larger number of people who were training to be psychoanalysts, they were underrepresented as supervisors and teachers at the institutes they attended. AAPCSW now has over 1000 members and over 20 chapters. It holds a biennial national conference and numerous annual local conferences. Experiences of psychoanalysts and psychoanalytic psychotherapists, together with research into infant and child development, have led to new insights. Theories have been further developed, and the results of empirical research are now better integrated into psychoanalytic theory. United Kingdom The London Psychoanalytical Society was founded by Ernest Jones on 30 October 1913. After World War I, with the expansion of psychoanalysis in the United Kingdom, the Society was reconstituted as the British Psychoanalytical Society in 1919. Soon after, the Institute of Psychoanalysis was established to administer the Society's activities. These include: the training of psychoanalysts, the development of the theory and practice of psychoanalysis, the provision of treatment through The London Clinic of Psychoanalysis, and the publication of books in The New Library of Psychoanalysis and Psychoanalytic Ideas. The Institute of Psychoanalysis also publishes The International Journal of Psychoanalysis, maintains a library, furthers research, and holds public lectures. The society has a Code of Ethics and an Ethical Committee. The society, the institute and the clinic are all located at Byron House in West London. The Society is a constituent society of the International Psychoanalytical Association (IPA), a body with members on all five continents which safeguards professional and ethical practice.
The Society is a member of the British Psychoanalytic Council (BPC); the BPC publishes a register of British psychoanalysts and psychoanalytical psychotherapists. All members of the British Psychoanalytic Council are required to undertake continuing professional development (CPD). Members of the Society teach and hold posts on other approved psychoanalytic courses, for example those of the British Psychotherapy Foundation, and in academic departments such as University College London. Members of the Society have included: Michael Balint, Wilfred Bion, John Bowlby, Ronald Fairbairn, Anna Freud, Harry Guntrip, Melanie Klein, Donald Meltzer, Joseph J. Sandler, Hanna Segal, J. D. Sutherland and Donald Winnicott. The Institute of Psychoanalysis is the foremost publisher of psychoanalytic literature. The 24-volume Standard Edition of the Complete Psychological Works of Sigmund Freud was conceived, translated, and produced under the direction of the British Psychoanalytical Society. The Society, in conjunction with Random House, will soon publish a new, revised and expanded Standard Edition. With the New Library of Psychoanalysis the Institute continues to publish the books of leading theorists and practitioners. The International Journal of Psychoanalysis is published by the Institute of Psychoanalysis. Now in its 84th year, it has one of the largest circulations of any psychoanalytic journal. India Psychoanalytical practice is emerging slowly in India, but is not yet recognised by the government. In 2016, India decriminalised suicide in its mental health bill. Psychoanalytic psychotherapy There are different forms of psychoanalysis and psychotherapies in which psychoanalytic thinking is practiced. Besides classical psychoanalysis there is, for example, psychoanalytic psychotherapy, a therapeutic approach which widens "the accessibility of psychoanalytic theory and clinical practices that had evolved over 100 plus years to a larger number of individuals." Other examples of well-known therapies which also use insights of psychoanalysis are mentalization-based treatment (MBT) and transference-focused psychotherapy (TFP). There is also a continuing influence of psychoanalytic thinking in mental health care. Research Over a hundred years of case reports and studies in the journals Modern Psychoanalysis, the Psychoanalytic Quarterly, the International Journal of Psychoanalysis and the Journal of the American Psychoanalytic Association have analyzed the efficacy of analysis in cases of neurosis and character or personality problems. Psychoanalysis modified by object relations techniques has been shown to be effective in many cases of ingrained problems of intimacy and relationship (cf. the many books of Otto Kernberg). Psychoanalytic treatment, in other situations, may run from about a year to many years, depending on the severity and complexity of the pathology. Psychoanalytic theory has, from its inception, been the subject of criticism and controversy. Freud remarked on this early in his career, when other physicians in Vienna ostracized him for his findings that hysterical conversion symptoms were not limited to women. Challenges to analytic theory began with Otto Rank and Alfred Adler (turn of the 20th century), continued with behaviorists (e.g. Wolpe) into the 1940s and '50s, and have persisted (e.g. Miller). Criticisms come from those who object to the notion that there are mechanisms, thoughts or feelings in the mind that could be unconscious.
Criticisms have also been leveled against the idea of "infantile sexuality" (the recognition that children between ages two and six imagine things about procreation). Criticisms of theory have led to variations in analytic theories, such as the work of Ronald Fairbairn, Michael Balint, and John Bowlby. In the past 30 years or so, the criticisms have centered on the issue of empirical verification. Psychoanalysis has been used as a research tool into childhood development (cf. the journal The Psychoanalytic Study of the Child), and has developed into a flexible, effective treatment for certain mental disturbances. In the 1960s, Freud's early (1905) thoughts on the childhood development of female sexuality were challenged; this challenge led to major research in the 1970s and 80s, and then to a reformulation of female sexual development that corrected some of Freud's concepts. Also see the various works of Eleanor Galenson, Nancy Chodorow, Karen Horney, Françoise Dolto, Melanie Klein, Selma Fraiberg, and others. Most recently, psychoanalytic researchers who have integrated attachment theory into their work, including Alicia Lieberman, Susan Coates, and Daniel Schechter, have explored the role of parental traumatization in the development of young children's mental representations of self and others. Effectiveness The psychoanalytic profession has been resistant to researching efficacy. Evaluations of effectiveness that rest on the therapist's own interpretations alone cannot be independently verified. Research results Numerous studies have shown that the efficacy of therapy is primarily related to the quality of the therapist, rather than to the particular school, technique, or training.
In psychoanalytic theory, conflict is kept out of awareness by defensive operations—essentially shut-off brain mechanisms that make people unaware of that element of conflict. Repression is the term given to the mechanism that shuts thoughts out of consciousness. Isolation of affect is the term used for the mechanism that shuts sensations out of consciousness. Neurotic symptoms may occur with or without deficits in ego functions, object relations, and ego strengths. Therefore, it is not uncommon to encounter people with obsessive-compulsive disorder and schizophrenia, or patients with panic disorder who also have borderline personality disorder, etc. The discussion above draws mainly on ego psychology's account of autonomous ego functions. Childhood origins Freudian theories hold that adult problems can be traced to unresolved conflicts from certain phases of childhood and adolescence, caused by fantasies stemming from the child's own drives. Freud, based on the data gathered from his patients early in his career, suspected that neurotic disturbances occurred when children were sexually abused in childhood (i.e. seduction theory). Later, Freud came to believe that, although child abuse occurs, neurotic symptoms were not associated with this. He believed that neurotic people often had unconscious conflicts that involved incestuous fantasies deriving from different stages of development. He found the stage from about three to six years of age (preschool years, today called the "first genital stage") to be filled with fantasies of having romantic relationships with both parents. Arguments were quickly generated in early 20th-century Vienna about whether adult seduction of children, i.e. child sexual abuse, was the basis of neurotic illness. There still is no complete agreement, although nowadays professionals recognize the negative effects of child sexual abuse on mental health. Oedipal conflicts Many psychoanalysts who work with children have studied the actual effects of child abuse, which include ego and object relations deficits and severe neurotic conflicts. Much research has been done on these types of trauma in childhood and on their adult sequelae. In studying the childhood factors that start neurotic symptom development, Freud found a constellation of factors that, for literary reasons, he termed the Oedipus complex, based on the play by Sophocles, Oedipus Rex, in which the protagonist unwittingly kills his father and marries his mother. The validity of the Oedipus complex is now widely disputed and rejected. The shorthand term, oedipal—later explicated by Joseph J. Sandler in "On the Concept of the Superego" (1960) and modified by Charles Brenner in The Mind in Conflict (1982)—refers to the powerful attachments that children make to their parents in the preschool years. These attachments involve fantasies of sexual relationships with either (or both) parent, and, therefore, competitive fantasies toward either (or both) parents. Humberto Nagera (1975) has been particularly helpful in clarifying many of the complexities of the child through these years. "Positive" and "negative" oedipal conflicts have been attached to the heterosexual and homosexual aspects, respectively. Both seem to occur in the development of most children. Eventually, the developing child's concessions to reality (that they will neither marry one parent nor eliminate the other) lead to identifications with parental values. These identifications generally create a new set of mental operations regarding values and guilt, subsumed under the term superego.
Besides superego development, children "resolve" their preschool oedipal conflicts through channeling wishes into something their parents approve of ("sublimation") and through the development, during the school-age years ("latency"), of age-appropriate obsessive-compulsive defensive maneuvers (rules, repetitive games). Treatment Using the various analytic and psychological techniques to assess mental problems, some believe that there are particular constellations of problems that are especially suited for analytic treatment (see below), whereas other problems might respond better to medicines and other interpersonal interventions. To be treated with psychoanalysis, whatever the presenting problem, the person requesting help must demonstrate a desire to start an analysis. The person wishing to start an analysis must have some capacity for speech and communication. As well, they need to be able to have or develop trust and insight within the psychoanalytic session. Potential patients must undergo a preliminary stage of treatment to assess their amenability to psychoanalysis at that time, and also to enable the analyst to form a working psychological model, which the analyst will use to direct the treatment. Psychoanalysts work mainly with neurosis and hysteria in particular; however, adapted forms of psychoanalysis are used in working with schizophrenia and other forms of psychosis or mental disorder. Finally, if a prospective patient is severely suicidal, a longer preliminary stage may be employed, sometimes with sessions which have a twenty-minute break in the middle. There are numerous modifications in technique under the heading of psychoanalysis due to the individualistic nature of personality in both analyst and patient. The most common problems treatable with psychoanalysis include: phobias, conversions, compulsions, obsessions, anxiety attacks, depressions, sexual dysfunctions, a wide variety of relationship problems (such as dating and marital strife), and a wide variety of character problems (for example, painful shyness, meanness, obnoxiousness, workaholism, hyperseductiveness, hyperemotionality, hyperfastidiousness). The fact that many such patients also demonstrate the deficits described above makes diagnosis and treatment selection difficult. Analytical organizations such as the IPA, APsaA and the European Federation for Psychoanalytic Psychotherapy have established procedures and models for the indication and practice of psychoanalytical therapy for trainees in analysis. The match between the analyst and the patient can be viewed as another contributing factor for the indication and contraindication for psychoanalytic treatment. The analyst decides whether the patient is suitable for psychoanalysis. This decision, besides being based on the usual indications and pathology, also rests to a certain degree on the "fit" between analyst and patient. A person's suitability for analysis at any particular time is based on their desire to know something about where their illness has come from. Someone who is not suitable for analysis expresses no desire to know more about the root causes of their illness. An evaluation may include one or more other analysts' independent opinions and will include discussion of the patient's financial situation and insurance.
Techniques The basic method of psychoanalysis is interpretation of the patient's unconscious conflicts that are interfering with current-day functioning – conflicts that are causing painful symptoms such as phobias, anxiety, depression, and compulsions. Strachey (1936) stressed that figuring out ways the patient distorted perceptions about the analyst led to understanding what may have been forgotten. In particular, unconscious hostile feelings toward the analyst could be found in symbolic, negative reactions to what Robert Langs later called the "frame" of the therapy—the setup that included times of the sessions, payment of fees, and necessity of talking. In patients who make mistakes, forget, or show other peculiarities regarding time, fees, and talking, the analyst can usually find various unconscious "resistances" to the flow of thoughts (also known as free association). When the patient reclines on a couch with the analyst out of view, the patient tends to remember more experiences, to show more resistance and transference, and to be able to reorganize thoughts after the development of insight, through the interpretive work of the analyst. Although fantasy life can be understood through the examination of dreams, masturbation fantasies are also important. The analyst is interested in how the patient reacts to and avoids such fantasies. Various memories of early life are generally distorted—what Freud called screen memories—and in any case, very early experiences (before age two) cannot be remembered. Variations in technique There is what is known among psychoanalysts as classical technique, although Freud throughout his writings deviated from this considerably, depending on the problems of any given patient. Classical technique was summarized by Allan Compton as comprising: instructions: telling the patient to try to say what's on their mind, including interferences; exploration: asking questions; and clarification: rephrasing and summarizing what the patient has been describing. The analyst can also use confrontation to bring an aspect of functioning, usually a defense, to the patient's attention. The analyst then uses a variety of interpretation methods, such as: Dynamic interpretation: explaining how being too nice guards against guilt (e.g. defense vs. affect); Genetic interpretation: explaining how a past event is influencing the present; Resistance interpretation: showing the patient how they are avoiding their problems; Transference interpretation: showing the patient ways old conflicts arise in current relationships, including that with the analyst; or Dream interpretation: obtaining the patient's thoughts about their dreams and connecting this with their current problems. Analysts can also use reconstruction to estimate what may have happened in the past that created some current issue. These techniques are primarily based on conflict theory (see above). As object relations theory evolved, supplemented by the work of John Bowlby and Mary Ainsworth, work with patients who had more severe problems with basic trust (Erikson, 1950) and a history of maternal deprivation (see the works of Augusta Alpert) led to new techniques with adults. These have sometimes been called interpersonal, intersubjective (cf. Stolorow), relational, or corrective object relations techniques.
In pinyin, an apostrophe marks the syllable break within words such as "Xī'ān". Eh alone is written as ê; elsewhere as e. Schwa is always written as e. Zh, ch, and sh can be abbreviated as ẑ, ĉ, and ŝ (z, c, s with a circumflex). However, the shorthands are rarely used because of the difficulty of entering them on computers and are confined mainly to Esperanto keyboard layouts. Early drafts and some published material used letters with a diacritic hook below instead. Ng has the uncommon shorthand of ŋ, which was also used in early drafts. Early drafts also contained the symbol ɥ or the letter ч borrowed from the Cyrillic script, in place of the later j for the voiceless alveolo-palatal sibilant affricate. The letter v is unused, except in spelling foreign languages, languages of minority nationalities, and some dialects, despite a conscious effort to distribute letters more evenly than in Western languages. However, for ease of typing on a computer, v is sometimes used to replace ü. (The Scheme table above maps the letter to bopomofo ㄪ, which typically maps to /v/.) Most of the above conventions are used to avoid ambiguity when words of more than one syllable are written in pinyin. For example, uenian is written as wenyan because it is not clear which syllables make up uenian; uen-ian, uen-i-an, u-en-i-an, u-e-nian, and u-e-ni-an are all possible combinations, whereas wenyan is unambiguous since we, nya, etc. do not exist in pinyin. See the pinyin table article for a summary of possible pinyin syllables (not including tones). Words, capitalization, initialisms and punctuation Although Chinese characters represent single syllables, Mandarin Chinese is a polysyllabic language. Spacing in pinyin is usually based on words, and not on single syllables. However, there are often ambiguities in partitioning a word. The Basic Rules of the Chinese Phonetic Alphabet Orthography () were put into effect in 1988 by the National Educational Commission () and the National Language Commission (). These rules became a Guóbiāo recommendation in 1996 and were updated in 2012.
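The segmentation ambiguity discussed above (uenian versus the unambiguous wenyan) can be illustrated with a short sketch. The syllable inventory below is a deliberately artificial mini-set that mixes surface spellings with the raw finals used in the example; it is not the real pinyin syllable table, and the function is only a toy enumerator.

```python
# Toy sketch: enumerate the ways a toneless string can be split into "syllables"
# drawn from a tiny, artificial inventory (not the real pinyin syllable table).
SYLLABLES = {"wen", "yan", "u", "e", "en", "i", "an", "ni", "nian", "uen", "ian"}

def segmentations(s, prefix=()):
    """Return every way to split s into items from SYLLABLES."""
    if not s:
        return [list(prefix)]
    results = []
    for i in range(1, len(s) + 1):
        head, tail = s[:i], s[i:]
        if head in SYLLABLES:
            results += segmentations(tail, prefix + (head,))
    return results

print(segmentations("wenyan"))  # [['wen', 'yan']] -- exactly one parse
print(segmentations("uenian"))  # several parses, e.g. ['uen', 'ian'] and ['u', 'e', 'ni', 'an']
```

With the real spelling conventions (w/y onsets and the apostrophe), a written form such as wenyan admits only the intended parse, which is the point the paragraph above makes.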
General Single meaning: Words with a single meaning, which are usually made up of two characters (sometimes one, seldom three), are written together and not capitalized: rén (, person); péngyou (, friend); qiǎokèlì (, chocolate) Combined meaning (2 or 3 characters): The same goes for words that combine two words into one meaning: hǎifēng (, sea breeze); wèndá (, question and answer); quánguó (, nationwide); chángyòngcí (, common words) Combined meaning (4 or more characters): Words with four or more characters having one meaning are split up with their original meaning if possible: wúfèng gāngguǎn (, seamless steel-tube); huánjìng bǎohù guīhuà (, environmental protection planning); gāoměngsuānjiǎ (, potassium permanganate) Duplicated words AA: Duplicated characters (AA) are written together: rénrén (, everybody), kànkan (, to have a look), niánnián (, every year) ABAB: Two characters duplicated (ABAB) are written separately: yánjiū yánjiū (, to study, to research), xuěbái xuěbái (, white as snow) AABB: Characters in the AABB schema are written together: láiláiwǎngwǎng (, come and go), qiānqiānwànwàn (, numerous); a code sketch of these reduplication patterns follows this set of rules. Prefixes () and Suffixes (): Words accompanied by prefixes such as fù (, vice), zǒng (, chief), fēi (, non-), fǎn (, anti-), chāo (, ultra-), lǎo (, old), ā (, used before names to indicate familiarity), kě (, -able), wú (, -less) and bàn (, semi-) and suffixes such as zi (, noun suffix), r (, diminutive suffix), tou (, noun suffix), xìng (, -ness, -ity), zhě (, -er, -ist), yuán (, person), jiā (, -er, -ist), shǒu (, person skilled in a field), huà (, -ize) and men (, plural marker) are written together: fùbùzhǎng (, vice minister), chéngwùyuán (, conductor), háizimen (, children) Nouns and names () Words of position are separated: mén wài (, outdoor), hé li (, under the river), huǒchē shàngmian (, on the train), Huáng Hé yǐnán (, south of the Yellow River) Exceptions are words traditionally connected: tiānshang (, in the sky or outer space), dìxia (, on the ground), kōngzhōng (, in the air), hǎiwài (, overseas) Surnames are separated from the given names, each capitalized: Lǐ Huá (), Zhāng Sān (). If the surname and/or given name consists of two syllables, it should be written as one: Zhūgě Kǒngmíng (). Titles following the name are separated and are not capitalized: Wáng bùzhǎng (, Minister Wang), Lǐ xiānsheng (, Mr. Li), Tián zhǔrèn (, Director Tian), Zhào tóngzhì (, Comrade Zhao). The forms of addressing people with prefixes such as Lǎo (), Xiǎo (), Dà () and Ā () are capitalized: Xiǎo Liú (, [young] Ms./Mr. Liu), Dà Lǐ (, [great; elder] Mr. Li), Ā Sān (, Ah San), Lǎo Qián (, [senior] Mr. Qian), Lǎo Wú (, [senior] Mr. Wu) Exceptions include Kǒngzǐ (, Confucius), Bāogōng (, Judge Bao), Xīshī (, Xishi), Mèngchángjūn (, Lord Mengchang) Geographical names of China: Běijīng Shì (, city of Beijing), Héběi Shěng (, province of Hebei), Yālù Jiāng (, Yalu River), Tài Shān (, Mount Tai), Dòngtíng Hú (, Dongting Lake), Qióngzhōu Hǎixiá (, Qiongzhou Strait) Monosyllabic prefixes and suffixes are written together with their related part: Dōngsì Shítiáo (, Dongsi 10th Alley) Common geographical nouns that have become part of proper nouns are written together: Hēilóngjiāng (, Heilongjiang) Non-Chinese names are written in Hanyu Pinyin: Āpèi Āwàngjìnměi (, Ngapoi Ngawang Jigme); Dōngjīng (, Tokyo) Verbs (): Verbs and their suffixes -zhe (), -le () or -guo () are written as one: kànzhe (, seeing), jìnxíngguo (, have been implemented).
Le as it appears at the end of a sentence is written separately, though: Huǒchē dào le. (, The train [has] arrived). Verbs and their objects are separated: kàn xìn (, read a letter), chī yú (, eat fish), kāi wánxiào (, to be kidding). If verbs and their complements are each monosyllabic, they are written together; if not, they are separated: gǎohuài (, to make broken), dǎsǐ (, hit to death), huàwéi (, to become), zhěnglǐ hǎo (, to sort out), gǎixiě wéi (, to rewrite as) Adjectives (): A monosyllabic adjective and its reduplication are written as one: mēngmēngliàng (, dim), liàngtángtáng (, shining bright) Complements of size or degree such as xiē (), yīxiē (), diǎnr () and yīdiǎnr () are written separately: dà xiē (, a little bigger), kuài yīdiǎnr (, a bit faster) Pronouns () Personal pronouns and interrogative pronouns are separated from other words: Wǒ ài Zhōngguó. (, I love China); Shéi shuō de? (, Who said it?) The demonstrative pronoun zhè (, this), nà (, that) and the question pronoun nǎ (, which) are separated: zhè rén (, this person), nà cì huìyì (, that meeting), nǎ zhāng bàozhǐ (, which newspaper) Exception—If zhè, nà or nǎ are followed by diǎnr (), bān (), biān (), shí (), huìr (), lǐ (), me () or the general classifier ge (), they are written together: nàlǐ (, there), zhèbiān (, over here), zhège (, this) Numerals () and measure words () Numbers and words like gè (, each), měi (, each), mǒu (, any), běn (, this), gāi (, that), wǒ (, my, our) and nǐ (, your) are separated from the measure words following them: liǎng gè rén (, two people), gè guó (, every nation), měi nián (, every year), mǒu gōngchǎng (, a certain factory), wǒ xiào (, our school) Numbers up to 100 are written as single words: sānshísān (, thirty-three). Above that, the hundreds, thousands, etc. are written as separate words: jiǔyì qīwàn èrqiān sānbǎi wǔshíliù (, nine hundred million, seventy-two thousand, three hundred fifty-six). Arabic numerals are kept as Arabic numerals: 635 fēnjī (, extension 635). According to 6.1.5.4, the dì () used in ordinal numerals is followed by a hyphen: dì-yī (, first), dì-356 (, 356th); these numeral and ordinal rules are sketched in code after this set of rules. The hyphen should not be used if the word in which dì () and the numeral appear does not refer to an ordinal number in the context. For example: Dìwǔ (, a Chinese compound surname). The chū () in front of numbers one to ten is written together with the number: chūshí (, tenth day) Numbers representing month and day are hyphenated: wǔ-sì (, May fourth), yīèr-jiǔ (, December ninth) Words of approximations such as duō (), lái () and jǐ () are separated from numerals and measure words: yībǎi duō gè (, around a hundred); shí lái wàn gè (, around a hundred thousand); jǐ jiā rén (, a few families) Shíjǐ (, more than ten) and jǐshí (, tens) are written together: shíjǐ gè rén (, more than ten people); jǐshí gēn gāngguǎn (, tens of steel pipes) Approximations with numbers or units that are close together are hyphenated: sān-wǔ tiān (, three to five days), qiān-bǎi cì (, thousands of times) Other function words () are separated from other words Adverbs (): hěn hǎo (, very good), zuì kuài (, fastest), fēicháng dà (, extremely big) Prepositions (): zài qiánmiàn (, in front) Conjunctions (): nǐ hé wǒ (, you and I/me), Nǐ lái háishi bù lái? (, Are you coming or not?) "Constructive auxiliaries" () such as de (), zhī () and suǒ (): mànmàn de zou (, go slowly) A monosyllabic word can also be written together with de (): wǒ de shū / wǒde shū (, my book) Modal auxiliaries at the end of a sentence: Nǐ zhīdào ma? (, Do you know?), Kuài qù ba!
(, Go quickly!) Exclamations and interjections: À! Zhēn měi! (, Oh, it's so beautiful!) Onomatopoeia: mó dāo huòhuò (, honing a knife), hōnglōng yī shēng (, rumbling) Capitalization The first letter of the first word in a sentence is capitalized: Chūntiān lái le. (, Spring has arrived.) The first letter of each line in a poem is capitalized. The first letter of a proper noun is capitalized: Běijīng (, Beijing), Guójì Shūdiàn (, International Bookstore), Guójiā Yǔyán Wénzì Gōngzuò Wěiyuánhuì (, National Language Commission) On some occasions, proper nouns can be written in all caps: BĚIJĪNG, GUÓJÌ SHŪDIÀN, GUÓJIĀ YǓYÁN WÉNZÌ GŌNGZUÒ WĚIYUÁNHUÌ If a proper noun is written together with a common noun to make a proper noun, it is capitalized. If not, it is not capitalized: Fójiào (, Buddhism), Tángcháo (, Tang dynasty), jīngjù (, Beijing opera), chuānxiōng (, Szechuan lovage) Initialisms Single words are abbreviated by taking the first letter of each character of the word: Běijīng (, Beijing) → BJ. A group of words is abbreviated by taking the first letter of each word in the group: guójiā biāozhǔn (, Guóbiāo standard) → GB. Initials can also be indicated using full stops: Běijīng → B.J., guójiā biāozhǔn → G.B. When abbreviating names, the surname is written fully (first letter capitalized or in all caps), but only the first letter of each character in the given name is taken, with full stops after each initial: Lǐ Huá () → Lǐ H. or LǏ H., Zhūgě Kǒngmíng () → Zhūgě K. M. or ZHŪGĚ K. M.; this name-abbreviation convention is sketched in code after this set of rules. Line wrapping Words can only be split by the character: guāngmíng (, bright) → guāng-míng, not gu-āngmíng Initials cannot be split: Wáng J. G. () → Wáng J. G., not Wáng J.-G. Apostrophes are removed in line wrapping: Xī'ān (, Xi'an) → Xī-ān, not Xī-'ān When the original word has a hyphen, the hyphen is added at the beginning of the new line: chēshuǐ-mǎlóng (, heavy traffic: "carriage, water, horse, dragon") → chēshuǐ--mǎlóng Hyphenation: In addition to the situations mentioned above, there are four situations where hyphens are used. Coordinate and disjunctive compound words, where the two elements are conjoined or opposed, but retain their individual meaning: gōng-jiàn (, bow and arrow), kuài-màn (, speed: "fast-slow"), shíqī-bā suì (, 17–18 years old), dǎ-mà (, beat and scold), Yīng-Hàn (, English-Chinese [dictionary]), Jīng-Jīn (, Beijing-Tianjin), lù-hǎi-kōngjūn (, army-navy-airforce). Abbreviated compounds (): gōnggòng guānxì (, public relations) → gōng-guān (, PR), chángtú diànhuà (, long-distance calling) → cháng-huà (, LDC). Exceptions are made when the abbreviated term has become established as a word in its own right, as in chūzhōng () for chūjí zhōngxué (, junior high school). Abbreviations of proper-name compounds, however, should always be hyphenated: Běijīng Dàxué (, Peking University) → Běi-Dà (, PKU). Four-syllable idioms: fēngpíng-làngjìng (, calm and tranquil: "wind calm, waves down"), huījīn-rútǔ (, spend money like water: "throw gold like dirt"), zhǐ-bǐ-mò-yàn (, paper-brush-ink-inkstone [four coordinate words]). Other idioms are separated according to the words that make up the idiom: bēi hēiguō (, to be made a scapegoat: "to carry a black pot"), zhǐ xǔ zhōuguān fànghuǒ, bù xǔ bǎixìng diǎndēng (, Gods may do what cattle may not: "only the official is allowed to light the fire; the common people are not even allowed to light a lamp").
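As a small illustration of the reduplication spellings described earlier (AA and AABB written solid, ABAB written as two words), the sketch below maps a base word onto its written reduplicated form. The function and its pattern labels are ad hoc, and it ignores cases where the repeated syllable is written toneless (e.g. kànkan).

```python
# Sketch of the reduplication spelling rules: AA and AABB joined, ABAB separated.
def spell_reduplication(parts, pattern):
    """parts: the pieces of the base word; pattern: 'AA', 'ABAB' or 'AABB'."""
    if pattern == "AA":            # rén -> rénrén (written together)
        (a,) = parts
        return a + a
    if pattern == "ABAB":          # yánjiū -> yánjiū yánjiū (written as two words)
        word = "".join(parts)
        return word + " " + word
    if pattern == "AABB":          # lái + wǎng -> láiláiwǎngwǎng (written together)
        a, b = parts
        return a + a + b + b
    raise ValueError(f"unknown pattern: {pattern}")

print(spell_reduplication(["rén"], "AA"))           # rénrén
print(spell_reduplication(["yánjiū"], "ABAB"))      # yánjiū yánjiū
print(spell_reduplication(["lái", "wǎng"], "AABB")) # láiláiwǎngwǎng
```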
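The numeral rules above (numbers up to 100 written as one word, ordinals prefixed with a hyphenated dì-, larger ordinals left as Arabic numerals) can likewise be sketched in a few lines. The digit spellings are standard pinyin; the joining logic and the cutoff at 100 for falling back to Arabic numerals are simplifications for illustration, and tone sandhi is ignored.

```python
# Sketch of the numeral-spelling rules: 0-99 as a single pinyin word,
# ordinals with a hyphenated "dì-" prefix, larger ordinals kept as Arabic numerals.
DIGITS = ["líng", "yī", "èr", "sān", "sì", "wǔ", "liù", "qī", "bā", "jiǔ"]

def number_to_pinyin(n):
    """Spell 0-99 as one word (tone sandhi ignored)."""
    if not 0 <= n < 100:
        raise ValueError("only 0-99 are handled in this sketch")
    if n < 10:
        return DIGITS[n]
    tens, ones = divmod(n, 10)
    word = "shí" if tens == 1 else DIGITS[tens] + "shí"   # 10-19 use bare shí
    return word + (DIGITS[ones] if ones else "")

def ordinal(n):
    """Ordinals take a hyphenated dì- prefix: dì-yī 'first', dì-356 '356th'."""
    return "dì-" + (number_to_pinyin(n) if n < 100 else str(n))

print(number_to_pinyin(33))  # sānshísān
print(ordinal(1))            # dì-yī
print(ordinal(356))          # dì-356
```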
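Finally, the name-abbreviation convention above (surname written in full, each given-name character reduced to a capitalized initial followed by a full stop) is easy to express directly. The function below takes the given name already split into per-character syllables, since automatic syllable segmentation is a separate problem; it is a sketch of the convention, not part of any standard library.

```python
# Sketch of the name-abbreviation rule: surname kept whole,
# given-name syllables reduced to capitalized initials with full stops.
def abbreviate_name(surname, given_syllables):
    initials = " ".join(s[0].upper() + "." for s in given_syllables)
    return f"{surname} {initials}"

print(abbreviate_name("Lǐ", ["huá"]))              # Lǐ H.
print(abbreviate_name("Zhūgě", ["kǒng", "míng"]))  # Zhūgě K. M.
# The all-caps surname variant (LǏ H., ZHŪGĚ K. M.) is simply surname.upper().
```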
by voicing), but not to that of French. Letters z and c also have that distinction, pronounced as and (which is reminiscent of these letters being used to represent the phoneme in the German language and Latin-script-using Slavic languages, respectively). From s, z, c come the digraphs sh, zh, ch by analogy with English sh, ch. Although this introduces the novel combination zh, it is internally consistent in how the two series are related. In the x, j, q series, the pinyin use of x is similar to its use in Portuguese, Galician, Catalan, Basque and Maltese and the pinyin q is akin to its value in Albanian; both pinyin and Albanian pronunciations may sound similar to the ch to the untrained ear. Pinyin vowels are pronounced in a similar way to vowels in Romance languages. The pronunciation and spelling of Chinese words are generally given in terms of initials and finals, which represent the segmental phonemic portion of the language, rather than letter by letter. Initials are initial consonants, while finals are all possible combinations of medials (semivowels coming before the vowel), a nucleus vowel and coda (final vowel or consonant). History Background: romanization of Chinese before 1949 In 1605, the Jesuit missionary Matteo Ricci published Xizi Qiji () in Beijing. This was the first book to use the Roman alphabet to write the Chinese language. Twenty years later, another Jesuit in China, Nicolas Trigault, issued his () at Hangzhou. Neither book had much immediate impact on the way in which Chinese thought about their writing system, and the romanizations they described were intended more for Westerners than for the Chinese. One of the earliest Chinese thinkers to relate Western alphabets to Chinese was late Ming to early Qing dynasty scholar-official, Fang Yizhi (; 1611–1671). The first late Qing reformer to propose that China adopt a system of spelling was Song Shu (1862–1910). A student of the great scholars Yu Yue and Zhang Taiyan, Song had been to Japan and observed the stunning effect of the kana syllabaries and Western learning there. This galvanized him into activity on a number of fronts, one of the most important being reform of the script. While Song did not himself actually create a system for spelling Sinitic languages, his discussion proved fertile and led to a proliferation of schemes for phonetic scripts. Wade–Giles The Wade–Giles system was produced by Thomas Wade in 1859, and further improved by Herbert Giles in the Chinese–English Dictionary of 1892. It was popular and used in English-language publications outside China until 1979. Sin Wenz In the early 1930s, Communist Party of China leaders trained in Moscow introduced a phonetic alphabet using Roman letters which had been developed in the Soviet Oriental Institute of Leningrad and was originally intended to improve literacy in the Russian Far East. This Sin Wenz or "New Writing" was much more linguistically sophisticated than earlier alphabets, but with the major exception that it did not indicate tones of Chinese. In 1940, several thousand members attended a Border Region Sin Wenz Society convention. Mao Zedong and Zhu De, head of the army, both contributed their calligraphy (in characters) for the masthead of the Sin Wenz Society's new journal. Outside the CCP, other prominent supporters included Sun Yat-sen's son, Sun Fo; Cai Yuanpei, the country's most prestigious educator; Tao Xingzhi, a leading educational reformer; and Lu Xun. 
Over thirty journals soon appeared written in Sin Wenz, plus large numbers of translations, biographies (including Lincoln, Franklin, Edison, Ford, and Charlie Chaplin), some contemporary Chinese literature, and a spectrum of textbooks. In 1940, the movement reached an apex when Mao's Border Region Government declared that the Sin Wenz had the same legal status as traditional characters in government and public documents. Many educators and political leaders looked forward to the day when they would be universally accepted and completely replace Chinese characters. Opposition arose, however, because the system was less well adapted to writing regional languages, and therefore would require learning Mandarin. Sin Wenz fell into relative disuse during the following years. Yale romanization In 1943, the U.S. military engaged Yale University to develop a romanization of Mandarin Chinese for its pilots flying over China. The resulting system is very close to pinyin, but does not use English letters in unfamiliar ways; for example, pinyin x for is written as sy in the Yale system. Medial semivowels are written with y and w (instead of pinyin i and u), and apical vowels (syllabic consonants) with r or z. Accent marks are used to indicate tone. Emergence and history of Hanyu Pinyin Pinyin was created by a group of Chinese linguists, including Zhou Youguang who was an economist, as part of a Chinese government project in the 1950s. Zhou, often called "the father of pinyin," worked as a banker in New York when he decided to return to China to help rebuild the country after the establishment of the People's Republic of China in 1949. He became an economics professor in Shanghai, and in 1955, when China's Ministry of Education created a Committee for the Reform of the Chinese Written Language, Premier Zhou Enlai assigned Zhou Youguang the task of developing a new romanization system, despite the fact that he was not a professional linguist. Hanyu Pinyin was based on several existing systems: Gwoyeu Romatzyh of 1928, Latinxua Sin Wenz of 1931, and the diacritic markings from zhuyin (bopomofo). "I'm not the father of pinyin," Zhou said years later; "I'm the son of pinyin. It's [the result of] a long tradition from the later years of the Qing dynasty down to today. But we restudied the problem and revisited it and made it more perfect." A draft was published on February 12, 1956. The first edition of Hanyu Pinyin was approved and adopted at the Fifth Session of the 1st National People's Congress on February 11, 1958. It was then introduced to primary schools as a way to teach Standard Chinese pronunciation and used to improve the literacy rate among adults. During the height of the Cold War, the use of pinyin system over the Yale romanization outside of China was regarded as a political statement or identification with the communist Chinese regime. Beginning in the early 1980s, Western publications addressing Mainland China began using the Hanyu Pinyin romanization system instead of earlier romanization systems; this change followed the normalization of diplomatic relations between the United States and the PRC in 1979. In 2001, the PRC Government issued the National Common Language Law, providing a legal basis for applying pinyin. The current specification of the orthographic rules is laid down in the National Standard GB/T 16159–2012. 
Initials and finals

Unlike European languages, clusters of letters — initials () and finals () — and not consonant and vowel letters, form the fundamental elements in pinyin (and most other phonetic systems used to describe the Han language). Every Mandarin syllable can be spelled with exactly one initial followed by one final, except for the special syllable er or when a trailing -r is considered part of a syllable (see below, and see erhua). The latter case, though a common practice in some sub-dialects, is rarely used in official publications. Even though most initials contain a consonant, finals are not always simple vowels, especially in compound finals (), i.e. when a "medial" is placed in front of the final. For example, the medials and are pronounced with such tight openings at the beginning of a final that some native Chinese speakers (especially when singing) pronounce yī (, clothes, officially pronounced ) as and wéi (, to enclose, officially pronounced ) as or . Often these medials are treated as separate from the finals rather than as part of them; this convention is followed in the chart of finals below.

Initials In each cell below, the bold letters indicate pinyin and the brackets enclose the symbol in the International Phonetic Alphabet. Notes: 1. y is pronounced (a labial-palatal approximant) before u. 2. The letters w and y are not included in the table of initials in the official pinyin system. They are an orthographic convention for the medials i, u and ü when no initial is present. When i, u, or ü are finals and no initial is present, they are spelled yi, wu, and yu, respectively. The conventional lexicographical order (excluding w and y), derived from the zhuyin system ("bopomofo"), is: b p m f, d t n l, g k h, j q x, zh ch sh r, z c s. According to Scheme for the Chinese Phonetic Alphabet, zh, ch, and sh can be abbreviated as ẑ, ĉ, and ŝ (z, c, s with a circumflex). However, the shorthands are rarely used due to difficulty of entering them on computers and are confined mainly to Esperanto keyboard layouts.

Finals In each cell below, the first line indicates IPA, the second indicates pinyin for a standalone (no-initial) form, and the third indicates pinyin for a combination with an initial. Other than finals modified by an -r, which are omitted, the following is an exhaustive table of all possible finals. The only syllable-final consonants in Standard Chinese are -n and -ng, and -r, the last of which is attached as a grammatical suffix. A Chinese syllable ending with any other consonant either is from a non-Mandarin language (a southern Chinese language such as Cantonese, or a minority language of China; possibly reflecting final consonants in Old Chinese), or indicates the use of a non-pinyin romanization system (where final consonants may be used to indicate tones). Notes: 1. For other finals formed by the suffix -r, pinyin does not use special orthography; one simply appends r to the final that it is added to, without regard for any sound changes that may take place along the way. For information on sound changes related to final r, please see Erhua#Rules. 2. ü is written as u after y, j, q, or x. 3. uo is written as o after b, p, m, f, or w.
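The initial-plus-final structure described above can be made concrete with a short parsing routine. The following is a minimal sketch, not drawn from the original text: the function name and the treatment of zero-initial syllables are my own assumptions, and it only handles toneless spellings.

```python
# Minimal sketch: split a toneless pinyin syllable into initial and final.
# Initial inventory follows the conventional order given above:
# b p m f, d t n l, g k h, j q x, zh ch sh r, z c s.
# zh/ch/sh are listed first so the two-letter initials match before z/c/s.

INITIALS = [
    "zh", "ch", "sh",
    "b", "p", "m", "f", "d", "t", "n", "l",
    "g", "k", "h", "j", "q", "x", "r", "z", "c", "s",
]

def split_syllable(syllable: str) -> tuple[str, str]:
    """Return (initial, final); the initial is '' for zero-initial syllables."""
    s = syllable.lower()
    for ini in INITIALS:
        if s.startswith(ini):
            return ini, s[len(ini):]
    # w and y are orthographic conventions, not initials in the official scheme,
    # so syllables like wu, yi, yu and the special syllable er fall through here.
    return "", s

print(split_syllable("zhang"))  # ('zh', 'ang')
print(split_syllable("er"))     # ('', 'er')
print(split_syllable("wu"))     # ('', 'wu')
```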
Technically, i, u, ü without a following vowel are finals, not medials, and therefore take the tone marks, but they are more concisely displayed as above. In addition, ê () and syllabic nasals m (, ), n (, ), ng (, ) are used as interjections. According to Scheme for the Chinese Phonetic Alphabet, ng can be abbreviated with a shorthand of ŋ. However, this shorthand is rarely used due to difficulty of entering them on computers. The ü sound An umlaut is placed over the letter u when it occurs after the initials l and n when necessary in order to represent the sound [y]. This is necessary in order to distinguish the front high rounded vowel in lü (e.g. ) from the back high rounded vowel in lu (e.g. ). Tonal markers are added on top of the umlaut, as in lǘ. However, the ü is not used in the other contexts where it could represent a front high rounded vowel, namely after the letters j, q, x, and y. For example, the sound of the word / (fish) is transcribed in pinyin simply as yú, not as yǘ. This practice is opposed to Wade–Giles, which always uses ü, and Tongyong Pinyin, which always uses yu. Whereas Wade–Giles needs the umlaut to distinguish between chü (pinyin ju) and chu (pinyin zhu), this ambiguity does not arise with pinyin, so the more convenient form ju is used instead of jü. Genuine ambiguities only happen with nu/nü and lu/lü, which are then distinguished by an umlaut. Many fonts or output methods do not support an umlaut for ü or cannot place tone marks on top of ü. Likewise, using ü in input methods is difficult because it is not present as a simple key on many keyboard layouts. For these reasons v is sometimes used instead by convention. For example, it is common for cellphones to use v instead of ü. Additionally, some stores in China use v instead of ü in the transliteration of their names. The drawback is that there are no tone marks for the letter v. This also presents a problem in transcribing names for use on passports, affecting people with names that consist of the sound lü or nü, particularly people with the surname (Lǚ), a fairly common surname, particularly compared to the surnames (Lù), (Lǔ), (Lú) and (Lù). Previously, the practice varied among different passport issuing offices, with some transcribing as "LV" and "NV" while others used "LU" and "NU". On 10 July 2012, the Ministry of Public Security standardized the practice to use "LYU" and "NYU" in passports. Although nüe written as nue, and lüe written as lue are not ambiguous, nue or lue are not correct according to the rules; nüe and lüe should be used instead. However, some Chinese input methods (e.g. Microsoft Pinyin IME) support both nve/lve (typing v for ü) and nue/lue. Approximation from English pronunciation Most rules given here in terms of English pronunciation are approximations, as several of these sounds do not correspond directly to sounds in English. Pronunciation of initials * Note on y and w Y and w are equivalent to the semivowel medials i, u, and ü (see below). They are spelled differently when there is no initial consonant in order to mark a new syllable: fanguan is fan-guan, while fangwan is fang-wan (and equivalent to *fang-uan). With this convention, an apostrophe only needs to be used to mark an initial a, e, or o: Xi'an (two syllables: ) vs. xian (one syllable: ). In addition, y and w are added to fully vocalic i, u, and ü when these occur without an initial consonant, so that they are written yi, wu, and yu. 
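Because ü is hard to type, the v-for-ü convention and the 2012 passport rule described above are simple to mechanize. The sketch below is illustrative only: the function names are my own, and whether the LYU/NYU spelling extends beyond the bare syllables lü and nü (for example to lüe) is not confirmed by the text.

```python
# Illustrative sketch of the input convention (v for ü) and the 2012
# Ministry of Public Security passport spellings LYU/NYU described above.

def normalize_v(syllable: str) -> str:
    """Convert the typing convention 'v' to 'ü', e.g. 'lv' -> 'lü', 'nve' -> 'nüe'."""
    return syllable.replace("v", "ü")

def passport_form(syllable: str) -> str:
    """Render lü/nü names in the passport style: lü -> LYU, nü -> NYU."""
    s = normalize_v(syllable)
    if s.startswith(("lü", "nü")):
        s = s[0] + "yu" + s[2:]   # assumption: the rule is applied to the whole syllable
    return s.upper()

print(normalize_v("nv"))    # nü
print(passport_form("lv"))  # LYU
print(passport_form("nü"))  # NYU
```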
Some Mandarin speakers do pronounce a or sound at the beginning of such words—that is, yi or , wu or , yu or ,—so this is an intuitive convention. See below for a few finals which are abbreviated after a consonant plus w/u or y/i medial: wen → C+un, wei → C+ui, weng → C+ong, and you → C+iu. ** Note on the apostrophe The apostrophe (') () is used before a syllable starting with a vowel (, , or ) in a multiple-syllable word when the syllable does not start the word, unless the syllable immediately follows a hyphen or other dash. For example, is written as Xi'an or Xī'ān, and is written as Tian'e or Tiān'é, but is written "dì-èr", without an apostrophe. This apostrophe is not used in the Taipei Metro names. Apostrophes (as well as hyphens and tone marks) are omitted on Chinese passports. Pronunciation of finals The following is a list of finals in Standard Chinese, excepting most of those ending with r. To find a given final: Remove the initial consonant. Zh, ch, and sh count as initial consonants. Change initial w to u and initial y to i. For weng, wen, wei, you, look under ong, un, ui, iu. For u after j, q, x, or y, look under ü. Tones The pinyin system also uses diacritics to mark the four tones of Mandarin. The diacritic is placed over the letter that represents the syllable nucleus, unless that letter is missing (see below). Many books printed in China use a mix of fonts, with vowels and tone marks rendered in a different font from the surrounding text, tending to give such pinyin texts a typographically ungainly appearance. This style, most likely rooted in early technical limitations, has led many to believe that pinyin's rules call for this practice, e.g. the use of a Latin alpha (ɑ) rather than the standard style (a) found in most fonts, or g often written with a single-storey ɡ. The rules of Hanyu Pinyin, however, specify no such practice. The first tone (flat or high-level tone) is represented by a macron (ˉ) added to the pinyin vowel: ā ē ī ō ū ǖ Ā Ē Ī Ō Ū Ǖ The second tone (rising or high-rising tone) is denoted by an acute accent (ˊ): á é í ó ú ǘ Á É Í Ó Ú Ǘ The third tone (falling-rising or low tone) is marked by a caron/háček (ˇ). It is not the rounded breve (˘), though a breve is sometimes substituted due to ignorance or font limitations. ǎ ě ǐ ǒ ǔ ǚ Ǎ Ě Ǐ Ǒ Ǔ Ǚ The fourth tone (falling or high-falling tone) is represented by a grave accent (ˋ): à è ì ò ù ǜ À È Ì Ò Ù Ǜ The fifth tone (neutral tone) is represented by a normal vowel without any accent mark: a e i o u ü A E I O U Ü In dictionaries, neutral tone may be indicated by a dot preceding the syllable; for example, ·ma. When a neutral tone syllable has an alternative pronunciation in another tone, a combination of tone marks may be used: zhī·dào (). These tone marks normally are only used in Mandarin textbooks or in foreign learning texts, but they are essential for correct pronunciation of Mandarin syllables, as exemplified by the following classic example of five characters whose pronunciations differ only in their tones: The words are "mother", "hemp", "horse", "scold", and a question particle, respectively. Numerals in place of tone marks Before the advent of computers, many typewriter fonts did not contain vowels with macron or caron diacritics. Tones were thus represented by placing a tone number at the end of individual syllables. For example, tóng is written tong². The number used for each tone is as the order listed above, except the neutral tone, which is either not numbered, or given the number 0 or 5, e.g. 
ma⁵ for /, an interrogative marker. Rules for placing the tone mark Briefly, the tone mark should always be placed by the order—a, o, e, i, u, ü, with the only exception being iu, where the tone mark is placed on the u instead. Pinyin tone marks appear primarily above the nucleus of the syllable, for example as in kuài, where k is the initial, u the medial, a the nucleus, and i the coda. The exception is syllabic nasals like /m/, where the nucleus of the syllable is a consonant, the diacritic will be carried by a written dummy vowel. When the nucleus is /ə/ (written e or o), and there is both a medial and a coda, the nucleus may be dropped from writing. In this case, when the coda is a consonant n or ng, the only vowel left is the medial i, u, or ü, and so this takes the diacritic. However, when the coda is a vowel, it is the coda rather than the medial which takes the diacritic in the absence of a written nucleus. This occurs with syllables ending in -ui (from wei: (wèi → -uì) and in -iu (from you: yòu → -iù.) That is, in the absence of a written nucleus the finals have priority for receiving the tone marker, as long as they are vowels: if not, the medial takes the diacritic. An algorithm to find the correct vowel letter (when there is more than one) is as follows: If there is an a or an e, it will take the tone mark If there is an ou, then the o takes the tone mark Otherwise, the second vowel takes the tone mark Worded differently, If there is an a, e, or o, it will take the tone mark; in the case of ao, the mark goes on the a Otherwise, the vowels are -iu or -ui, in which case the second vowel takes the tone mark If the tone is written over an i, the tittle above the i is omitted, as in yī. Phonological intuition The placement of the tone marker, when more than one of the written letters a, e, i, o, and u appears, can also be inferred from the nature of the vowel sound in the medial and final. The rule is that the tone marker goes on the spelled vowel that is not a (near-)semi-vowel. The exception is that, for triphthongs that are spelled with only two vowel letters, both of which are the semi-vowels, the tone marker goes on the second spelled vowel. Specifically, if the spelling of a diphthong begins with i (as in ia) or u (as in ua), which serves as a near-semi-vowel, this letter does not take the tone marker. Likewise, if the spelling of a diphthong ends with o or u representing a near-semi-vowel (as in ao or ou), this letter does not receive a tone marker. In a triphthong spelled with three of a, e, i, o, and u (with i or u replaced by y or w at the start of a syllable), the first and third letters coincide with near-semi-vowels and hence do not receive the tone marker (as in iao or uai or iou). But if no letter is written to represent a triphthong's middle (non-semi-vowel) sound (as in ui or iu), then the tone marker goes on the final (second) vowel letter. Using tone colors In addition to tone number and mark, tone color has been suggested as a visual aid for learning. Although there are no formal standards, there are a number of different color schemes in use, Dummitt's being one of the first. Third tone exceptions In spoken Chinese, the third tone is often pronounced as a "half third tone", in which the pitch does not rise. Additionally, when two third tones appear consecutively, such as in (nǐhǎo, hello), the first syllable is pronounced with the second tone — this is called tone sandhi. In pinyin, words like "hello" are still written with two third tones (nǐhǎo). 
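The placement algorithm quoted above (a or e takes the mark; in ou the o takes it; otherwise the last vowel letter does) is compact enough to implement directly. The sketch below is my own illustration rather than part of the original text: it converts tone-number notation such as tong2 or lv3 into diacritic spelling, accepts v for ü, and ignores the dummy-vowel case for syllabic nasals.

```python
# Sketch of the tone-mark placement rule quoted above:
#   1. a or e takes the mark;   2. in ou, the o takes it;
#   3. otherwise the last vowel letter takes it (covers -iu and -ui).
# Tones 1-4 get diacritics; 5 or 0 (neutral tone) leaves the syllable unmarked.

TONED = {
    "a": "āáǎà", "e": "ēéěè", "i": "īíǐì",
    "o": "ōóǒò", "u": "ūúǔù", "ü": "ǖǘǚǜ",
}

def number_to_mark(syllable: str) -> str:
    tone = int(syllable[-1]) if syllable[-1].isdigit() else 5
    body = syllable.rstrip("012345").replace("v", "ü")
    if tone in (0, 5):
        return body
    if "a" in body:
        idx = body.index("a")
    elif "e" in body:
        idx = body.index("e")
    elif "ou" in body:
        idx = body.index("o")
    else:
        idx = max(body.rfind(v) for v in "iouü")   # last vowel letter
    return body[:idx] + TONED[body[idx]][tone - 1] + body[idx + 1:]

print(number_to_mark("tong2"))  # tóng
print(number_to_mark("kuai4"))  # kuài
print(number_to_mark("niu2"))   # niú  (the -iu exception: mark on u)
print(number_to_mark("lv3"))    # lǚ
```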
Orthographic rules Letters The Scheme for the Chinese Phonetic Alphabet lists the letters of pinyin along with their pronunciations. Pinyin differs from other romanizations in several aspects, such as the following: Syllables starting with u are written as w in place of u (e.g., *uan is written as wan). Standalone u is written as wu. Syllables starting with i are written as y in place of i (e.g., *ian is written as yan). Standalone i is written as yi. Syllables starting with ü are written as yu in place of ü (e.g., *üe is written as yue). Standalone ü is written as yu. ü is written as u when there is no ambiguity (such as ju, qu, and xu) but as ü when there are corresponding u syllables (such as lü and nü). If there are corresponding u syllables, it is often replaced with v on a computer to make it easier to type on a standard keyboard. After a consonant, iou, uei, and uen are simplified as iu, ui, and un, which do not represent the actual pronunciation. As in zhuyin, syllables that are actually pronounced as buo, puo, muo, and fuo are given a separate representation: bo, po, mo, and fo. The apostrophe (') is used before a syllable starting with a vowel (a, o, or e) when it is not the first syllable of a word, the syllable being most commonly realized as unless it immediately follows a hyphen or other dash. That is done to remove ambiguity that could arise, as in Xi'an, which consists of the two syllables xi () an (), compared to such words as xian (). (The ambiguity does not occur when tone marks are used since
Latter Day Saint denominations. Commentary from the Church Fathers Chrysostom: "The Gospel preaching not only offers manifold gain as a treasure, but is precious as a pearl; wherefore after the parable concerning the treasure, He gives that concerning the pearl. And in preaching, two things are required, namely, to be detached from the business of this life, and to be watchful, which are denoted by this merchantman. Truth moreover is one, and not manifold, and for this reason it is one pearl that is said to be found. And as one who is possessed of a pearl, himself indeed knows of his wealth, but is not known to others, ofttimes concealing it in his hand because of its small bulk, so it is in the preaching of the Gospel; they who possess it know that they are rich, the unbelievers, not knowing of this treasure, know not of our wealth. Jerome: By the goodly pearls may be understood the Law and the Prophets. Hear then Marcion and Manichæus; the good pearls are the Law and the Prophets. One pearl, the most precious of all, is the knowledge of the Saviour and the sacrament of His passion and resurrection, which when the merchantman has found, like Paul the Apostle, he straightway despises all the mysteries of the Law and the Prophets and the old observances in which he had lived blameless, counting them as dung that he may win Christ. (Phil. 3:8.) Not that the finding of a new pearl is the condemnation of the old pearls, but that in comparison of that, all other pearls are worthless." Gregory the Great: " Or by the pearl of price is to be understood the sweetness of the heavenly kingdom, which, he that hath found it, selleth all and buyeth. For he that, as far as is permitted, has had perfect knowledge of the sweetness of the heavenly life, readily leaves all things that he has loved on earth; all that once pleased him among earthly possessions now appears to have lost its beauty, for the splendour of that precious pearl is alone seen in his mind." Augustine: " Or, A man seeking goodly pearls has found one pearl of great price; that is, he who is seeking good men with whom he may live profitably, finds one alone, Christ Jesus, without sin; or, seeking precepts of life, by aid of which he may dwell righteously among men, finds love of his neighbour, in which one rule, the Apostle says, (Rom. 13:9.) are comprehended all things; or, seeking good thoughts, he finds that Word in which all things
the history of the human mind, while Spinoza is one of its most potent forces. Bruno was a rhapsodist and a poet, who was overwhelmed with artistic emotions; Spinoza, however, was spiritus purus and in his method the prototype of the philosopher." 18th century The first known use of the term "pantheism" was in Latin ("pantheismus" ) by the English mathematician Joseph Raphson in his work De Spatio Reali seu Ente Infinito, published in 1697. Raphson begins with a distinction between atheistic "panhylists" (from the Greek roots pan, "all", and hyle, "matter"), who believe everything is matter, and Spinozan "pantheists" who believe in "a certain universal substance, material as well as intelligence, that fashions all things that exist out of its own essence." Raphson thought that the universe was immeasurable in respect to a human's capacity of understanding, and believed that humans would never be able to comprehend it. He referred to the pantheism of the Ancient Egyptians, Persians, Syrians, Assyrians, Greek, Indians, and Jewish Kabbalists, specifically referring to Spinoza. The term was first used in English by a translation of Raphson's work in 1702. It was later used and popularized by Irish writer John Toland in his work of 1705 Socinianism Truly Stated, by a pantheist. Toland was influenced by both Spinoza and Bruno, and had read Joseph Raphson's De Spatio Reali, referring to it as "the ingenious Mr. Ralphson's (sic) Book of Real Space". Like Raphson, he used the terms "pantheist" and "Spinozist" interchangeably. In 1720 he wrote the Pantheisticon: or The Form of Celebrating the Socratic-Society in Latin, envisioning a pantheist society that believed, "All things in the world are one, and one is all in all things ... what is all in all things is God, eternal and immense, neither born nor ever to perish." He clarified his idea of pantheism in a letter to Gottfried Leibniz in 1710 when he referred to "the pantheistic opinion of those who believe in no other eternal being but the universe". In the mid-eighteenth century, the English theologian Daniel Waterland defined pantheism this way: "It supposes God and nature, or God and the whole universe, to be one and the same substance—one universal being; insomuch that men's souls are only modifications of the divine substance." In the early nineteenth century, the German theologian Julius Wegscheider defined pantheism as the belief that God and the world established by God are one and the same. Pantheism controversy Between 1785–89, a major controversy about Spinoza's philosophy arose between the German philosophers Friedrich Heinrich Jacobi (a critic) and Moses Mendelssohn (a defender). Known in German as the Pantheismusstreit (pantheism controversy), it helped spread pantheism to many German thinkers. A 1780 conversation with the German dramatist Gotthold Ephraim Lessing led Jacobi to a protracted study of Spinoza's works. Lessing stated that he knew no other philosophy than Spinozism. Jacobi's Über die Lehre des Spinozas (1st ed. 1785, 2nd ed. 1789) expressed his strenuous objection to a dogmatic system in philosophy, and drew upon him the enmity of the Berlin group, led by Mendelssohn. Jacobi claimed that Spinoza's doctrine was pure materialism, because all Nature and God are said to be nothing but extended substance. This, for Jacobi, was the result of Enlightenment rationalism and it would finally end in absolute atheism. Mendelssohn disagreed with Jacobi, saying that pantheism shares more characteristics of theism than of atheism. 
The entire issue became a major intellectual and religious concern for European civilization at the time. Willi Goetschel argues that Jacobi's publication significantly shaped Spinoza's wide reception for centuries following its publication, obscuring the nuance of Spinoza's philosophic work. 19th century Growing influence During the beginning of the 19th century, pantheism was the viewpoint of many leading writers and philosophers, attracting figures such as William Wordsworth and Samuel Coleridge in Britain; Johann Gottlieb Fichte, Schelling and Hegel in Germany; Knut Hamsun in Norway; and Walt Whitman, Ralph Waldo Emerson and Henry David Thoreau in the United States. Seen as a growing threat by the Vatican, in 1864 it was formally condemned by Pope Pius IX in the Syllabus of Errors. A letter written in 1886 by William Herndon, Abraham Lincoln's law partner, was sold at auction for US$30,000 in 2011. In it, Herndon writes of the U.S. President's evolving religious views, which included pantheism. The subject is understandably controversial, but the content of the letter is consistent with Lincoln's fairly lukewarm approach to organized religion. Comparison with non-Christian religions Some 19th-century theologians thought that various pre-Christian religions and philosophies were pantheistic. They thought Pantheism was similar to the ancient Hindu philosophy of Advaita (non-dualism) to the extent that the 19th-century German Sanskritist Theodore Goldstücker remarked that Spinoza's thought was "... a western system of philosophy which occupies a foremost rank amongst the philosophies of all nations and ages, and which is so exact a representation of the ideas of the Vedanta, that we might have suspected its founder to have borrowed the fundamental principles of his system from the Hindus." 19th-century European theologians also considered Ancient Egyptian religion to contain pantheistic elements and pointed to Egyptian philosophy as a source of Greek Pantheism. The latter included some of the Presocratics, such as Heraclitus and Anaximander. The Stoics were pantheists, beginning with Zeno of Citium and culminating in the emperor-philosopher Marcus Aurelius. During the pre-Christian Roman Empire, Stoicism was one of the three dominant schools of philosophy, along with Epicureanism and Neoplatonism. The early Taoism of Laozi and Zhuangzi is also sometimes considered pantheistic, although it could be more similar to Panentheism. Cheondoism, which arose in the Joseon Dynasty of Korea, and Won Buddhism are also considered pantheistic. The Realist Society of Canada believes that the consciousness of the self-aware universe is reality, which is an alternative view of Pantheism. 20th century In a letter written to Eduard Büsching (25 October 1929), after Büsching sent Albert Einstein a copy of his book Es gibt keinen Gott ("There is no God"), Einstein wrote, "We followers of Spinoza see our God in the wonderful order and lawfulness of all that exists and in its soul [Beseeltheit] as it reveals itself in man and animal." According to Einstein, the book only dealt with the concept of a personal god and not the impersonal God of pantheism. In a letter written in 1954 to philosopher Eric Gutkind, Einstein wrote "the word God is for me nothing more than the expression and product of human weaknesses." In another letter written in 1954 he wrote "I do not believe in a personal God and I have never denied this but have expressed it clearly." 
In Ideas and Opinions, published a year before his death, Einstein stated his precise conception of the word God: Scientific research can reduce superstition by encouraging people to think and view things in terms of cause and effect. Certain it is that a conviction, akin to religious feeling, of the rationality and intelligibility of the world lies behind all scientific work of a higher order. [...] This firm belief, a belief bound up with a deep feeling, in a superior mind that reveals itself in the world of experience, represents my conception of God. In common parlance this may be described as "pantheistic" (Spinoza). In the late 20th century, some declared that pantheism was an underlying theology of Neopaganism, and pantheists began forming organizations devoted specifically to pantheism and treating it as a separate religion. 21st century Dorion Sagan, son of scientist and science communicator Carl Sagan, published the 2007 book Dazzle Gradually: Reflections on the Nature of Nature, co-written with his mother Lynn Margulis. In the chapter "Truth of My Father", Sagan writes that his "father believed in the God of Spinoza and Einstein, God not behind nature, but as nature, equivalent to it." In 2009 pantheism was mentioned in a Papal encyclical, and again in a statement on New Year's Day 2010, which criticized pantheism for denying the superiority of humans over nature and for locating the source of man's salvation in nature. In a 2009 review of the film Avatar, Ross Douthat described pantheism as "Hollywood's religion of choice for a generation now". In 2015 The Paradise Project, an organization "dedicated to celebrating and spreading awareness about pantheism," commissioned Los Angeles muralist Levi Ponce to paint a 75-foot mural in Venice, California, near the organization's offices. The mural depicts Albert Einstein, Alan Watts, Baruch Spinoza, Terence McKenna, Carl Jung, Carl Sagan, Emily Dickinson, Nikola Tesla, Friedrich Nietzsche, Ralph Waldo Emerson, W.E.B. Du Bois, Henry David Thoreau, Elizabeth Cady Stanton, Rumi, Adi Shankara, and Laozi. Categorizations There are multiple varieties of pantheism and various systems of classifying them, relying upon one or more spectra or upon discrete categories. Degree of determinism The philosopher Charles Hartshorne used the term Classical Pantheism to describe the deterministic philosophies of Baruch Spinoza, the Stoics, and other like-minded figures. Pantheism (All-is-God) is often associated with monism (All-is-One) and some have suggested that it logically implies determinism (All-is-Now). Albert Einstein explained theological determinism by stating, "the past, present, and future are an 'illusion'". This form of pantheism has been referred to as "extreme monism", in which, in the words of one commentator, "God decides or determines everything, including our supposed decisions." Other examples of determinism-inclined pantheisms include those of Ralph Waldo Emerson and Hegel. However, some have argued against treating every meaning of "unity" as an aspect of pantheism, and there exist versions of pantheism that regard determinism as an inaccurate or incomplete view of nature. Examples include the beliefs of John Scotus Eriugena, Friedrich Wilhelm Joseph Schelling and William James. Degree of belief It may also be possible to distinguish two types of pantheism, one being more religious and the other being more philosophical. 
The Columbia Encyclopedia writes of the distinction: "If the pantheist starts with the belief that the one great reality, eternal and infinite, is God, he sees everything finite and temporal as but some part of God. There is nothing separate or distinct from God, for God is the universe. If, on the other hand, the conception taken as the foundation of the system is that the great inclusive unity is the world itself, or the universe, God is swallowed up in that unity, which may be designated nature." Form of monism Philosophers and theologians have often suggested that pantheism implies monism. Different types of monism include: Substance monism, "the view that the apparent plurality of substances is due to different states or appearances of a single substance" Attributive monism, "the view that whatever the number of substances, they are of a single ultimate kind" Partial monism, "within a given realm of being (however many there may be) there is only one substance" Existence monism, the view that there is only one concrete object token (The One, "Τὸ Ἕν" or the Monad). Priority monism, "the whole is prior to its parts" or "the world has parts, but the parts are dependent fragments of an integrated whole." Property monism: the view that all properties are of a single type (e.g. only physical properties exist) Genus monism: "the doctrine that there is a highest category; e.g., being" Views contrasting with monism are: Metaphysical dualism, which asserts that there are two ultimately irreconcilable substances or realities such as Good and Evil, for example, Manichaeism, Metaphysical pluralism, which asserts three or more fundamental substances or realities. Nihilism, negates any of the above categories (substances, properties, concrete objects, etc.). Monism in modern philosophy of mind
it attained supremacy among us as a philosophic theory." Johann Wolfgang von Goethe rejected Jacobi’s personal belief in God as the "hollow sentiment of a child’s brain" (Goethe 15/1: 446) and, in the "Studie nach Spinoza" (1785/86), proclaimed the identity of existence and wholeness. When Jacobi speaks of Spinoza’s "fundamentally stupid universe" (Jacobi [31819] 2000: 312), Goethe praises nature as his "idol" (Goethe 14: 535). In their The Holy Family (1844) Karl Marx and Friedrich Engels note, "Spinozism dominated the eighteenth century both in its later French variety, which made matter into substance, and in deism, which conferred on matter a more spiritual name.... Spinoza's French school and the supporters of deism were but two sects disputing over the true meaning of his system...." In George Henry Lewes's words (1846), "Pantheism is as old as philosophy. It was taught in the old Greek schools — by Plato, by St. Augustine, and by the Jews. Indeed, one may say that Pantheism, under one of its various shapes, is the necessary consequence of all metaphysical inquiry, when pushed to its logical limits; and from this reason do we find it in every age and nation. The dreamy contemplative Indian, the quick versatile Greek, the practical Roman, the quibbling Scholastic, the ardent Italian, the lively Frenchman, and the bold Englishman, have all pronounced it as the final truth of philosophy. Wherein consists Spinoza's originality? — what is his merit? — are natural questions, when we see him only lead to the same result as others had before proclaimed. His merit and originality consist in the systematic exposition and development of that doctrine — in his hands, for the first time, it assumes the aspect of a science. The Greek and Indian Pantheism is a vague fanciful doctrine, carrying with it no scientific conviction; it may be true — it looks true — but the proof is wanting. But with Spinoza there is no choice: if you understand his terms, admit the possibility of his science, and seize his meaning; you can no more doubt his conclusions than you can doubt Euclid; no mere opinion is possible, conviction only is possible." S. M. Melamed (1933) noted, "It may be observed, however, that Spinoza was not the first prominent monist and pantheist in modern Europe. A generation before him Bruno conveyed a similar message to humanity. Yet Bruno is merely a beautiful episode in the history of the human mind, while Spinoza is one of its most potent forces. Bruno was a rhapsodist and a poet, who was overwhelmed with artistic emotions; Spinoza, however, was spiritus purus and in his method the prototype of the philosopher." 
description of the spiritual unity of the cosmos. It presents the nature of Purusha or the cosmic being as both immanent in the manifested world and yet transcendent to it. From this being the sukta holds, the original creative will proceeds, by which this vast universe is projected in space and time. The most influential and dominant school of Indian philosophy, Advaita Vedanta, rejects theism and dualism by insisting that "Brahman [ultimate reality] is without parts or attributes...one without a second." Since Brahman has no properties, contains no internal diversity and is identical with the whole reality it cannot be understood as an anthropomorphic personal God. The relationship between Brahman and the creation is often thought to be panentheistic. Panentheism is also expressed in the Bhagavad Gita. In verse IX.4, Krishna states: Many schools of Hindu thought espouse monistic theism, which is thought to be similar to a panentheistic viewpoint. Nimbarka's school of differential monism (Dvaitadvaita), Ramanuja's school of qualified monism (Vishistadvaita) and Saiva Siddhanta and Kashmir Shaivism are all considered to be panentheistic. Chaitanya Mahaprabhu's Gaudiya Vaishnavism, which elucidates the doctrine of Achintya Bheda Abheda (inconceivable oneness and difference), is also thought to be panentheistic. In Kashmir Shaivism, all things are believed to be a manifestation of Universal Consciousness (Cit or Brahman). So from the point of view of this school, the phenomenal world (Śakti) is real, and it exists and has its being in Consciousness (Cit). Thus, Kashmir Shaivism is also propounding of theistic monism or panentheism. Shaktism, or Tantra, is regarded as an Indian prototype of Panentheism. Shakti is considered to be the cosmos itself – she is the embodiment of energy and dynamism, and the motivating force behind all action and existence in the material universe. Shiva is her transcendent masculine aspect, providing the divine ground of all being. "There is no Shiva without Shakti, or Shakti without Shiva. The two ... in themselves are One." Thus, it is She who becomes the time and space, the cosmos, it is She who becomes the five elements, and thus all animate life and inanimate forms. She is the primordial energy that holds all creation and destruction, all cycles of birth and death, all laws of cause and effect within Herself, and yet is greater than the sum total of all these. She is transcendent, but becomes immanent as the cosmos (Mula Prakriti). She, the Primordial Energy, directly becomes Matter. Buddhism The Reverend Zen Master Soyen Shaku was the first Zen Buddhist Abbot to tour the United States in 1905–6. He wrote a series of essays collected into the book Zen For Americans. In the essay titled "The God Conception of Buddhism" he attempts to explain how a Buddhist looks at the ultimate without an anthropomorphic God figure while still being able to relate to the term God in a Buddhist sense: At the outset, let me state that Buddhism is not atheistic as the term is ordinarily understood. It has certainly a God, the highest reality and truth, through which and in which this universe exists. However, the followers of Buddhism usually avoid the term God, for it savors so much of Christianity, whose spirit is not always exactly in accord with the Buddhist interpretation of religious experience. Again, Buddhism is not pantheistic in the sense that it identifies the universe with God. 
On the other hand, the Buddhist God is absolute and transcendent; this world, being merely its manifestation, is necessarily fragmental and imperfect. To define more exactly the Buddhist notion of the highest being, it may be convenient to borrow the term very happily coined by a modern German scholar, "panentheism," according to which God is πᾶν καὶ ἕν (all and one) and more than the totality of existence. The essay then goes on to explain first utilizing the term "God" for the American audience to get an initial understanding of what he means by "panentheism," and then discusses the terms that Buddhism uses in place of "God" such as Dharmakaya, Buddha or Adi-Buddha, and Tathagata. Christianity Panentheism is also a feature of some Christian philosophical theologies and resonates strongly within the theological tradition of the Eastern Orthodox Church. It also appears in process theology. Process theological thinkers are generally regarded in the Christian West as unorthodox. Furthermore, process philosophical thought is widely believed to have paved the way for open theism, a movement that tends to associate itself primarily with the Evangelical branch of Protestantism, but is also generally considered unorthodox by most Evangelicals. Panentheism in other Christian confessions Panentheistic conceptions of God occur amongst some modern theologians. Process theology and Creation Spirituality, two recent developments in Christian theology, contain panentheistic ideas. Charles Hartshorne (1897–2000), who conjoined process theology with panentheism, maintained a lifelong membership in the Methodist church but was also a Unitarian. In later years he joined the Austin, Texas, Unitarian Universalist congregation and was an active participant in that church. Referring to the ideas such as Thomas Oord's ‘theocosmocentrism’ (2010), the soft panentheism of open theism, Keith Ward's comparative theology and John Polkinghorne's critical realism (2009), Raymond Potgieter observes distinctions such as dipolar and bipolar: The former suggests two poles separated such as God influencing creation and it in turn its creator (Bangert 2006:168), whereas bipolarity completes God’s being implying interdependence between temporal and eternal poles. (Marbaniang 2011:133), in dealing with Whitehead’s approach, does not make this distinction. I use the term bipolar as a generic term to include suggestions of the structural definition of God’s transcendence and immanence; to for instance accommodate a present and future reality into which deity must reasonably fit and function, and yet maintain separation from this world and evil whilst remaining within it. Some argue that panentheism should also include the notion that God has always been related to some world or another, which denies the idea of creation out of nothing (creatio ex nihilo). Nazarene Methodist theologian Thomas Jay Oord (* 1965) advocates panentheism, but he uses the word "theocosmocentrism" to highlight the notion that God and some world or another are the primary conceptual starting blocks for eminently fruitful theology. This form of panentheism helps in overcoming the problem of evil and in proposing that God's love for the world is essential to who God is. The Latter Day Saint movement teaches that the Light of Christ "proceeds from God through Christ and gives life and light to all things." 
Gnosticism Manichaeism, being another gnostic sect, preached a very different doctrine in positioning the true Manichaean God against matter as well as other deities, that it described as enmeshed with the world, namely the gods of Jews, Christians and pagans. Nevertheless, this dualistic teaching included an elaborate cosmological myth that narrates the defeat of primal man by the powers of darkness that devoured and imprisoned the particles of light. Valentinian Gnosticism taught that matter came about through emanations
of the supreme being, even if to some this event is held to be more accidental than intentional. To other gnostics, these emanations were akin to the Sephirot of the Kabbalists and deliberate manifestations of a transcendent God through a complex system of intermediaries. Judaism While mainstream Rabbinic Judaism is classically monotheistic, and follows in the footsteps of Maimonides (c. 1135–1204), the panentheistic conception of God can be found among certain mystical Jewish traditions. A leading scholar of Kabbalah, Moshe Idel ascribes this doctrine to the kabbalistic system of Moses ben Jacob Cordovero (1522–1570) and in the eighteenth century to the Baal Shem Tov (c. 1700–1760), founder of the Hasidic movement, as well as his contemporaries, Rabbi Dov Ber, the Maggid of Mezeritch (died 1772), and Menahem Mendel, the Maggid of Bar. This may be said of many, if not most, subsequent Hasidic masters. There is some debate as to whether Isaac Luria (1534–1572) and Lurianic Kabbalah, with its doctrine of tzimtzum, can be regarded as panentheistic. According to Hasidism, the infinite Ein Sof is incorporeal and exists in a state that is both transcendent and immanent. This appears to be the view of non-Hasidic Rabbi Chaim of Volozhin, as well. Hasidic Judaism merges the elite ideal of nullification to a transcendent God, via the intellectual articulation of inner dimensions through Kabbalah and with emphasis on the panentheistic divine immanence in everything. Many scholars would argue that "panentheism" is the best single-word description of the philosophical theology of Baruch Spinoza. It is therefore no surprise, that aspects of panentheism are also evident in the theology of Reconstructionist Judaism as presented in the writings of Mordecai Kaplan (1881–1983), who was strongly influenced by Spinoza. Islam Several Sufi saints and thinkers, primarily Ibn Arabi, held beliefs that have been considered panentheistic. These notions later took shape in the theory of wahdat ul-wujud (the Unity of All Things). Some Sufi Orders, notably the Bektashis and the Universal Sufi movement, continue to espouse panentheistic beliefs. Nizari Ismaili follow panentheism according to Ismaili doctrine. Nevertheless, some Shia Muslims also do believe in different degrees of Panentheism. Al-Qayyuum is a Name of God in the Qur'an which translates to "The Self-Existing by Whom all subsist". In Islam the universe can not exist if Allah doesn't exist, and it is only by His power which encompasses everything and which is everywhere that the universe can exist. In Ayaẗ al-Kursii God's throne is described as "extending over the heavens and the earth" and "He feels no fatigue in guarding and preserving them". This does not mean though that the universe is God, or that a creature (like a tree or an animal) is God, because those would be respectively pantheism, which is a heresy in traditional Islam, and the worst heresy in Islam, shirk (polytheism). God is separated by His creation but His creation can not survive without Him. In Pre-Columbian America The Mesoamerican empires of the Mayas, Aztecs as well as the South American Incas (Tahuatinsuyu) have typically been characterized as polytheistic, with strong male and female deities. According to Charles C. Mann's history book 1491: New Revelations of the Americas Before Columbus, only the lower classes of Aztec society were polytheistic. 
Philosopher James Maffie has argued that Aztec metaphysics was pantheistic rather than panentheistic, since Teotl was considered by Aztec philosophers to be the ultimate all-encompassing yet all-transcending force defined by its inherent duality. Native American beliefs in North America have been characterized as panentheistic in that there is an emphasis on a single, unified divine spirit that is manifest in each individual entity. (North American Native writers have also translated the word for God as the Great Mystery or as the Sacred Other.) This concept is referred to by many as the Great Spirit. Philosopher J. Baird Callicott has described Lakota theology as panentheistic, in that the divine both transcends and is immanent in everything. One exception is the modern Cherokee, who are predominantly monotheistic but apparently not panentheistic; yet older Cherokee traditions observe aspects of both pantheism and panentheism, and are often not beholden to exclusivity, encompassing other spiritual traditions without contradiction, a trait common among some tribes of the Americas. In the stories of Keetoowah storytellers Sequoyah Guess and Dennis Sixkiller, God is known as ᎤᏁᎳᏅᎯ, commonly pronounced "unehlanv," and visited earth in prehistoric times, but then left earth and her people to rely on themselves. This shows a parallel to Vaishnava cosmology. Baháʼí Faith In the Baháʼí Faith, God is described as a single, imperishable God, the creator of all things, including all the creatures and forces in the universe. The connection between God and the world is that of the creator to his creation. God is understood to be independent of his creation, while creation is dependent and contingent on God. Accordingly, the Baháʼí Faith is much more closely aligned with traditions of monotheism than panentheism. God is not seen to be part of creation, as he cannot be divided and does not descend to the condition of his creatures. Instead, in the Baháʼí teachings, the world of creation emanates from God, in that all things have been realized by him and have attained to existence. Creation is seen as the expression of God's will in the contingent world, and every created thing is seen as a sign of God's sovereignty leading to knowledge of him; the signs of God are most particularly revealed in human beings. Konkōkyō In Konkōkyō, God is named “Tenchi Kane no Kami-Sama,” which can mean
masochism, and "other sexual deviation". No definition or examples were provided for "other sexual deviation", but the general category of sexual deviation was meant to describe the sexual preference of individuals that was "directed primarily toward objects other than people of opposite sex, toward sexual acts not usually associated with coitus, or toward coitus performed under bizarre circumstances, as in necrophilia, pedophilia, sexual sadism, and fetishism." Except for the removal of homosexuality from the DSM-III onwards, this definition provided a general standard that has guided specific definitions of paraphilias in subsequent DSM editions, up to DSM-IV-TR. DSM-III through DSM-IV The term paraphilia was introduced in the DSM-III (1980) as a subset of the new category of "psychosexual disorders." The DSM-III-R (1987) renamed the broad category to sexual disorders, renamed atypical paraphilia to paraphilia NOS (not otherwise specified), renamed transvestism as transvestic fetishism, added frotteurism, and moved zoophilia to the NOS category. It also provided seven nonexhaustive examples of NOS paraphilias, which besides zoophilia included exhibitionism, necrophilia, partialism, coprophilia, klismaphilia, and urophilia. The DSM-IV (1994) retained the sexual disorders classification for paraphilias, but added an even broader category, "sexual and gender identity disorders," which includes them. The DSM-IV retained the same types of paraphilias listed in DSM-III-R, including the NOS examples, but introduced some changes to the definitions of some specific types. DSM-IV-TR The DSM-IV-TR describes paraphilias as "recurrent, intense sexually arousing fantasies, sexual urges or behaviors generally involving nonhuman objects, the suffering or humiliation of oneself or one's partner, or children or other nonconsenting persons that occur over a period of six months" (criterion A), which "cause clinically significant distress or impairment in social, occupational, or other important areas of functioning" (criterion B). DSM-IV-TR names eight specific paraphilic disorders (exhibitionism, fetishism, frotteurism, pedophilia, sexual masochism, sexual sadism, voyeurism, and transvestic fetishism, plus a residual category, paraphilia—not otherwise specified). Criterion B differs for exhibitionism, frotteurism, and pedophilia to include acting on these urges, and for sadism, acting on these urges with a nonconsenting person. Sexual arousal in association with objects that were designed for sexual purposes is not diagnosable. Some paraphilias may interfere with the capacity for sexual activity with consenting adult partners. In the current version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), a paraphilia is not diagnosable as a psychiatric disorder unless it causes distress to the individual or harm to others. DSM-5 The DSM-5 adds a distinction between paraphilias and paraphilic disorders, stating that paraphilias do not require or justify psychiatric treatment in themselves, and defining paraphilic disorder as "a paraphilia that is currently causing distress or impairment to the individual or a paraphilia whose satisfaction has entailed personal harm, or risk of harm, to others". The DSM-5 Paraphilias Subworkgroup reached a "consensus that paraphilias are not ipso facto psychiatric disorders", and proposed "that the DSM-V make a distinction between paraphilias and paraphilic disorders. [...] 
One would ascertain a paraphilia (according to the nature of the urges, fantasies, or behaviors) but diagnose a paraphilic disorder (on the basis of distress and impairment). In this conception, having a paraphilia would be a necessary but not a sufficient condition for having a paraphilic disorder." The 'Rationale' page of any paraphilia in the electronic DSM-5 draft continues: "This approach leaves intact the distinction between normative and non-normative sexual behavior, which could be important to researchers, but without automatically labeling non-normative sexual behavior as psychopathological. It also eliminates certain logical absurdities in the DSM-IV-TR. In that version, for example, a man cannot be classified as a transvestite—however much he cross-dresses and however sexually exciting that is to him—unless he is unhappy about this activity or impaired by it. This change in viewpoint would be reflected in the diagnostic criteria sets by the addition of the word 'Disorder' to all the paraphilias. Thus, Sexual Sadism would become Sexual Sadism Disorder; Sexual Masochism would become Sexual Masochism Disorder, and so on." Bioethics professor Alice Dreger interpreted these changes as "a subtle way of saying sexual kinks are basically okay – so okay, the sub-work group doesn't actually bother to define paraphilia. But a paraphilic disorder is defined: that's when an atypical sexual interest causes distress or impairment to the individual or harm to others." Interviewed by Dreger, Ray Blanchard, the Chair of the Paraphilias Sub-Work Group, stated, "We tried to go as far as we could in depathologizing mild and harmless paraphilias, while recognizing that severe paraphilias that distress or impair people or cause them to do harm to others are validly regarded as disorders." Charles Allen Moser stated that this change is not really substantive, as the DSM-IV already acknowledged a difference between paraphilias and non-pathological but unusual sexual interests, a distinction that is virtually identical to what was being proposed for DSM-5, and it is a distinction that, in practice, has often been ignored. Linguist Andrew Clinton Hinderliter argued that "including some sexual interests—but not others—in the DSM creates a fundamental asymmetry and communicates a negative value judgment against the sexual interests included," and leaves the paraphilias in a situation similar to ego-dystonic homosexuality, which was removed from the DSM because it was no longer recognized as a mental disorder. The DSM-5 acknowledges that many dozens of paraphilias exist, but only has specific listings for eight that are forensically important and relatively common. These are voyeuristic disorder, exhibitionistic disorder, frotteuristic disorder, sexual masochism disorder, sexual sadism disorder, pedophilic disorder, fetishistic disorder, and transvestic disorder. Other paraphilias can be diagnosed under the Other Specified Paraphilic Disorder or Unspecified Paraphilic Disorder listings, if accompanied by distress or impairment. Management Most clinicians and researchers believe that paraphilic sexual interests cannot be altered, although evidence is needed to support this. Instead, the goal of therapy is normally to reduce the person's discomfort with their paraphilia and limit any criminal behavior. Both psychotherapeutic and pharmacological methods are available to these ends. Cognitive behavioral therapy, at times, can help people with paraphilias develop strategies to avoid acting on their interests. 
Patients are taught to identify and cope with factors that make acting on their interests more likely, such as stress. It is currently the only form of psychotherapy for paraphilias supported by randomized double-blind trials, as opposed to case studies and consensus of expert opinion. Medications Pharmacological treatments can help people control their sexual behaviors, but do not change the content of the paraphilia. They are typically combined with cognitive behavioral therapy for best effect. SSRIs Selective serotonin reuptake inhibitors (SSRIs) are used, especially with exhibitionists, non-offending pedophiles, and compulsive masturbators. They are proposed to work by reducing sexual arousal, compulsivity, and depressive symptoms. They have been well received and are considered an important pharmacological treatment of paraphilia. Antiandrogens Antiandrogens are used in more severe cases. Similar to physical castration, they work by reducing androgen levels, and have thus been described as chemical castration. The antiandrogen cyproterone acetate has been shown to substantially reduce sexual fantasies and offending behaviors. Medroxyprogesterone acetate and gonadotropin-releasing hormone agonists (such as leuprorelin) have also been used to lower sex drive. Due to the side effects, the World Federation of Societies of Biological Psychiatry recommends that hormonal treatments only be used when there is a serious risk of sexual violence, or when other methods have failed. Surgical castration has largely been abandoned because these pharmacological alternatives are similarly effective and less invasive. Epidemiology Research has shown that paraphilias are rarely observed in women. However, there have been some studies on females with paraphilias. Sexual masochism has been found to be the most commonly observed paraphilia in women, with approximately 1 in 20 cases of sexual masochism being female. Many acknowledge the scarcity of research on female paraphilias. The majority of paraphilia studies are conducted on people who have been convicted of sex crimes. Since the number of male convicted sex offenders far exceeds the number of female convicted sex offenders, research on paraphilic behavior in women is consequently lacking. Some researchers argue that an underrepresentation exists concerning pedophilia in females. Due to the low number of women in studies on pedophilia, most studies are based from "exclusively male samples". This likely underrepresentation may also be attributable to a "societal tendency to dismiss the negative impact of sexual relationships between young boys and adult women". Michele Elliott has done extensive research on child sexual abuse committed by females, publishing the book Female Sexual Abuse of Children: The Last Taboo in an attempt to challenge the gender-biased discourse surrounding sex crimes. John Hunsley states that physiological limitations in the study of female sexuality must also be acknowledged when considering research on paraphilias. He states that while a man's sexual arousal can be directly measured from his erection (see penile plethysmograph), a woman's sexual arousal cannot be measured as clearly (see vaginal photoplethysmograph), and therefore research concerning female sexuality is rarely as conclusive as research on men. Legal issues In the United States, since 1990 a significant number of states have passed sexually violent predator laws. 
Following a series of landmark cases in the Supreme Court of the United States, persons diagnosed with paraphilias, particularly pedophilia (Kansas v. Hendricks, 1997) and exhibitionism (Kansas v. Crane, 2002), with a history of anti-social behavior and related criminal history (that includes
Pediatric autonomy in healthcare A major difference between the practice of pediatric and adult medicine is that children, in most jurisdictions and with certain exceptions, cannot make decisions for themselves. The issues of guardianship, privacy, legal responsibility, and informed consent must always be considered in every pediatric procedure. Pediatricians often have to treat the parents and sometimes, the family, rather than just the child. Adolescents are in their own legal class, having rights to their own health care decisions in certain circumstances.
The concept of legal consent combined with the non-legal consent (assent) of the child when considering treatment options, especially in the face of conditions with poor prognosis or complicated and painful procedures/surgeries, means the pediatrician must take into account the desires of many people, in addition to those of the patient. History of Pediatric Autonomy The term autonomy is traceable to ethical theory and law, where it denotes that autonomous individuals can make decisions based on their own reasoning. Hippocrates was the first to use the term in a medical setting. He created a code of ethics for doctors called the Hippocratic Oath that highlighted the importance of putting patients' interests first, making autonomy for patients a top priority in health care. In ancient times, society did not view pediatric medicine as essential or scientific. Experts considered professional medicine unsuitable for treating children. Children also had no rights. Fathers regarded their children as property, so their children's health decisions were entrusted to them. As a result, mothers, midwives, "wise women," and general practitioners treated children instead of doctors. Since mothers could not rely on professional medicine to take care of their children, they developed their own methods, such as using alkaline soda ash to remove the vernix at birth and treating teething pain with opium or wine. The absence of proper pediatric care, rights, and laws in health care to prioritize children's health led to many of their deaths. Ancient Greeks and Romans sometimes even killed healthy female babies and infants with deformities, since they had no adequate medical treatment and no laws prohibiting infanticide. In the twentieth century, medical experts began to put more emphasis on children's rights. In 1989, in the United Nations Convention on the Rights of the Child, medical experts developed the Best Interest Standard of the Child to prioritize children's rights and best interests. This event marked the onset of pediatric autonomy. In 1995, the American Academy of Pediatrics (AAP) finally acknowledged the Best Interest Standard of the Child as an ethical principle for pediatric decision-making, and it is still being used today. Parental Authority and Current Medical Issues The majority of the time, parents have the authority to decide what happens to their child. Philosopher John Locke argued that it is the responsibility of parents to raise their children and that God gave them this authority. In modern society, Jeffrey Blustein, modern philosopher and author of the book Parents and Children: The Ethics of the Family, argues that parental authority is granted because the child requires parents to satisfy their needs. He believes that parental autonomy is more about parents providing good care for their children and treating them with respect than about parents having rights. The researcher Kyriakos Martakis, MD, MSc, explains that research shows parental influence negatively affects children's ability to form autonomy. However, involving children in the decision-making process allows children to develop their cognitive skills and create their own opinions and, thus, decisions about their health. Parental authority affects
It was in 1472, in Padua, that Paolo Bagellardo, an Italian physician, authored the first medical book entirely about childhood illnesses, "De infantium aegritudinibus ac remediis." Some of the oldest traces of pediatrics can be discovered in Ancient India, where children's doctors were called kumara bhrtya. The Sushruta Samhita, an Ayurvedic text composed during the sixth century BC, contains a text on pediatrics. Another Ayurvedic text from this period is the Kashyapa Samhita. A second-century AD manuscript by the Greek physician and gynecologist Soranus of Ephesus dealt with neonatal pediatrics. Byzantine physicians Oribasius, Aëtius of Amida, Alexander Trallianus, and Paulus Aegineta contributed to the field. The Byzantines also built brephotrophia (crèches). Islamic Golden Age writers served as a bridge between Greco-Roman and Byzantine medicine and added ideas of their own, especially Haly Abbas, Yahya Serapion, Abulcasis, Avicenna, and Averroes. The Persian philosopher and physician al-Razi (865–925) published a monograph on pediatrics titled Diseases in Children, as well as the first definite description of smallpox as a clinical entity. Also among the first books about pediatrics was Bagellardo's Libellus [Opusculum] de aegritudinibus et remediis infantium of 1472 ("Little Book on Children's Diseases and Treatment"). In sequence came Bartholomäus Metlinger's Ein Regiment der Jungerkinder (1473), Cornelius Roelans's (1450–1525) untitled Büchlein, or Latin compendium, of 1483, and Heinrich von Louffenburg's (1391–1460) Versehung des Leibs, written in 1429 (published 1491); together these form the Pediatric Incunabula, four great medical treatises on children's physiology and pathology. While more information about childhood diseases became available, there was little evidence that children received the same kind of medical care that adults did. It was during the seventeenth and eighteenth centuries that medical experts started offering specialized care for children. The Swedish physician Nils Rosén von Rosenstein (1706–1773) is considered to be the founder of modern pediatrics as a medical specialty, while his work The diseases of children, and their remedies (1764) is considered to be "the first modern textbook on the subject". However, it was not until the nineteenth century that medical professionals acknowledged pediatrics as a separate field of medicine. The first pediatric-specific publications appeared between the 1790s and the 1920s. The term pediatrics was first introduced in English in 1859 by Dr. Abraham Jacobi. In 1860, he became "the first dedicated professor of pediatrics in the world." Pediatrics as a specialized field of medicine continued to develop in the mid-19th century; German physician Abraham Jacobi (1830–1919) is known as the father of American pediatrics because of his many contributions to the field. He received his medical training in Germany and later practiced in New York City. The first generally accepted pediatric hospital is the Hôpital des Enfants Malades, which opened in Paris in June 1802 on the site of a previous orphanage. From its beginning, this famous hospital accepted patients up to the age of fifteen years, and it continues to this day as the pediatric division of the Necker-Enfants Malades Hospital, created in 1920 by merging with the physically contiguous Necker Hospital, founded in 1778.
In other European countries, the Charité (a hospital founded in 1710) in Berlin established a separate Pediatric Pavilion in 1830, followed by similar institutions at Saint Petersburg in 1834, and at Vienna and Breslau (now Wrocław), both in 1837. In 1852 Britain's first pediatric hospital, the Hospital for Sick Children, Great Ormond Street was founded by Charles West. The first Children's hospital in Scotland opened in 1860 in Edinburgh. In the US, the first similar institutions were the Children's Hospital of Philadelphia, which opened in 1855, and then Boston Children's Hospital (1869). Subspecialties in pediatrics were created at the Harriet Lane Home at Johns Hopkins by Edwards A. Park. Differences between adult and pediatric medicine The body size differences are paralleled by maturation changes. The smaller body of an infant or neonate is substantially different physiologically from that of an adult. Congenital defects, genetic variance, and developmental issues are of greater concern to pediatricians than they often are to adult physicians. A common adage is that children are not simply "little adults". The clinician must take into account the immature physiology of the infant or child when considering symptoms, prescribing medications, and diagnosing illnesses. Pediatric physiology directly impacts the pharmacokinetic properties of drugs that enter the body. The absorption, distribution, metabolism, and elimination of medications differ between developing children and grown adults. Despite completed studies and reviews, continual research is needed to better understand how these factors should affect the decisions of healthcare providers when prescribing and administering medications to the pediatric population. Absorption Many drug absorption differences between pediatric and adult populations revolve around the stomach. Neonates and young infants have increased stomach pH due to decreased acid secretion, thereby creating a more basic environment for drugs that are taken by mouth. Acid is essential to degrading certain oral drugs before systemic absorption. Therefore, the absorption of these drugs in children is greater than in adults due to decreased breakdown and increased preservation in a less acidic gastric space. Children also have an extended rate of gastric emptying, which slows the rate of drug absorption. Drug absorption also depends on specific enzymes that come in contact with the oral drug as it travels through the body. Supply of these enzymes increase as children continue to develop their gastrointestinal tract. Pediatric patients have underdeveloped proteins, which leads to decreased metabolism and increased serum concentrations of specific drugs. However, prodrugs experience the opposite effect because enzymes are necessary for allowing their active form to enter systemic circulation. Distribution Percentage of total body water and extracellular fluid volume both decrease as children grow and develop with time. Pediatric patients thus have a larger volume of distribution than adults, which directly affects the dosing of hydrophilic drugs such as beta-lactam antibiotics like ampicillin. Thus, these drugs are administered at greater weight-based doses or with adjusted dosing intervals in children to account for this key difference in body composition. Infants and neonates also have fewer plasma proteins. Thus, highly protein-bound drugs have fewer opportunities for protein binding, leading to increased distribution. 
Metabolism Drug metabolism primarily occurs via enzymes in the liver and can vary according to which specific enzymes are affected in a specific stage of development. Phase I and Phase II enzymes have different rates of maturation and development, depending on their specific mechanism of action (e.g., oxidation, hydrolysis, acetylation, methylation). Enzyme capacity, clearance, and half-life are all factors that contribute to metabolism differences between children and adults. Drug metabolism can even differ within the pediatric population, separating neonates and infants from young children. Elimination Drug elimination is primarily facilitated via the liver and kidneys. In infants and young children, the larger relative size of their kidneys leads to increased renal clearance of medications that are eliminated through urine. In preterm neonates and infants, their kidneys are slower to mature and thus are unable to clear as much drug as fully developed kidneys. This can cause unwanted drug build-up, which is why it is important to consider lower doses and greater dosing intervals for this population. Diseases that negatively affect kidney function can also have the same effect and thus warrant similar considerations.
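To make the dosing consequences of these developmental differences concrete, the following Python sketch works through the standard one-compartment relationships C0 = dose/Vd and t1/2 = ln(2)·Vd/CL. It is purely illustrative: the parameter values are hypothetical placeholders, not clinical data or dosing guidance.

```python
import math

# Illustrative one-compartment pharmacokinetic arithmetic.
# All parameter values below are hypothetical examples, not clinical guidance.

def peak_concentration(dose_mg_per_kg: float, vd_l_per_kg: float) -> float:
    """Approximate peak plasma concentration (mg/L) after an IV bolus: C0 = dose / Vd."""
    return dose_mg_per_kg / vd_l_per_kg

def half_life_hours(vd_l_per_kg: float, cl_l_per_hr_per_kg: float) -> float:
    """Elimination half-life: t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_l_per_kg / cl_l_per_hr_per_kg

dose = 25.0  # mg/kg, hypothetical weight-based dose of a hydrophilic drug

profiles = {
    "adult-like":   {"Vd": 0.3, "CL": 0.10},  # L/kg and L/hr/kg, hypothetical values
    "neonate-like": {"Vd": 0.5, "CL": 0.05},  # more body water, immature kidneys
}

for label, p in profiles.items():
    c0 = peak_concentration(dose, p["Vd"])
    t_half = half_life_hours(p["Vd"], p["CL"])
    print(f"{label:12s}  C0 = {c0:5.1f} mg/L   t1/2 = {t_half:4.1f} h")
```

On these made-up numbers, the same mg/kg dose produces a lower peak in the neonate-like profile (larger water-based volume of distribution) and a much longer half-life (lower clearance), which is the arithmetic behind giving larger weight-based doses of hydrophilic drugs and spacing doses further apart in neonates.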
Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu intérieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated" — in other words, the body's ability to regulate its internal environment. William Beaumont was the first American to utilize the practical application of physiology. Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, building on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on cell actions, which was renamed cell biology in the 20th century. Late modern period In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline. In 1920, August Krogh won the Nobel Prize for discovering how blood flow is regulated in capillaries. In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle, the basis of what is known today as the sliding filament theory. Recently, there have been intense debates about the vitality of physiology as a discipline (is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains. Notable physiologists Women in physiology Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine. Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth C. Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society." Prominent women physiologists include: Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society, in 1975. Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen, via the phosphate-containing form of glucose (glucose 1-phosphate), and its function within eukaryotic metabolic mechanisms for energy production.
Moreover, they discovered the Cori cycle, also known as the lactic acid cycle, which describes how muscle tissue converts glycogen into lactic acid via lactic acid fermentation. Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition; she is the only woman to have received an unshared Nobel Prize in Physiology or Medicine. Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and viral herpes. Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system. Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS). Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase. Subdisciplines There are many ways to categorize the subdisciplines of physiology: based on the taxa studied (human physiology, animal physiology, plant physiology, microbial physiology, viral physiology); based on the level of organization (cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology); based on the process that causes physiological variation (developmental physiology, environmental physiology, evolutionary physiology); or based on the ultimate goals of the research (applied physiology, e.g. medical physiology, versus non-applied, e.g. comparative physiology). Physiological societies Transnational physiological societies include the American Physiological Society, the International Union of Physiological Sciences, and The Physiological Society; national physiological societies include the Brazilian Society of Physiology. See also Outline of physiology, Biochemistry, Biophysics, Cytoarchitecture, Defense physiology, Ecophysiology, Exercise physiology, Fish physiology, Insect physiology, Human body, Molecular biology, Metabolome, Neurophysiology, Pathophysiology, Pharmacology, Physiome
considers the diversity of functional characteristics across organisms. History The classical era The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (c. 130–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs, including the entire body. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also played off of Hippocrates' idea that emotions were also tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; yellow bile is connected to choleric; and black bile corresponds with melancholy. Galen also saw the human body consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology. And for the next 1,400 years, Galenic physiology was a powerful and influential tool in medicine. Early modern period Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey, are credited as making important discoveries in the circulation of the blood. Santorio Santorio in 1610s was the first to use a device to measure the pulse rate (the pulsilogium), and a thermoscope to measure temperature. In 1791 Luigi Galvani described the role of electricity in nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell-Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration to complete the Bell-Magendie law. In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of physiological division of labor, which allowed to "compare and study living things as if they were machines created by the industry of man." Inspired in the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." 
In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils). In 1858, Joseph Lister studied the cause of blood coagulation and inflammation that resulted from previous injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result decreased the death rate from surgery by
to circles, and is a consequence of the central limit theorem, discussed below. These Monte Carlo methods for approximating π are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate π when speed or accuracy is desired. Spigot algorithms Two algorithms were discovered in 1995 that opened up new avenues of research into π. They are called spigot algorithms because, like water dripping from a spigot, they produce single digits of π that are not reused after they are calculated. This is in contrast to infinite series or iterative algorithms, which retain and use all intermediate digits until the final result is produced. Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms. Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe: π = Σ_{k=0}^{∞} (1/16^k) (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)). This formula, unlike others before it, can produce any individual hexadecimal digit of π without calculating all the preceding digits. Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. Variations of the algorithm have been discovered, but no digit extraction algorithm has yet been found that rapidly produces decimal digits. An important application of digit extraction algorithms is to validate new claims of record computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several random hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct. Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (10^15th) bit of π, which turned out to be 0. In September 2010, a Yahoo! employee used the company's Hadoop application on one thousand computers over a 23-day period to compute 256 bits of π at the two-quadrillionth (2×10^15th) bit, which also happens to be zero. Role and characterizations in mathematics Because π is closely related to the circle, it is found in many formulae from the fields of geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. Other branches of science, such as statistics, physics, Fourier analysis, and number theory, also include π in some of their important formulae. Geometry and trigonometry π appears in formulae for areas and volumes of geometrical shapes based on circles, such as ellipses, spheres, cones, and tori. Below are some of the more common formulae that involve π. The circumference of a circle with radius r is 2πr. The area of a circle with radius r is πr². The area of an ellipse with semi-major axis a and semi-minor axis b is πab. The volume of a sphere with radius r is (4/3)πr³. The surface area of a sphere with radius r is 4πr². Some of the formulae above are special cases of the volume of the n-dimensional ball and the surface area of its boundary, the (n−1)-dimensional sphere, given below. Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter π times its width. The Reuleaux triangle (formed by the intersection of three circles, each centered where the other two circles cross) has the smallest possible area for its width and the circle the largest.
There also exist non-circular smooth curves of constant width. Definite integrals that describe circumference, area, or volume of shapes generated by circles typically have values that involve π. For example, an integral that specifies half the area of a circle of radius one is given by: ∫_{−1}^{1} √(1 − x²) dx = π/2. In that integral the function √(1 − x²) represents the top half of a circle (the square root is a consequence of the Pythagorean theorem), and the integral computes the area between that half of a circle and the x axis. Units of angle The trigonometric functions rely on angles, and mathematicians generally use radians as units of measurement. π plays an important role in angles measured in radians, which are defined so that a complete circle spans an angle of 2π radians. The angle measure of 180° is equal to π radians, and 1° = π/180 radians. Common trigonometric functions have periods that are multiples of π; for example, sine and cosine have period 2π, so for any angle θ and any integer k, sin θ = sin(θ + 2πk) and cos θ = cos(θ + 2πk). Eigenvalues Many of the appearances of π in the formulas of mathematics and the sciences have to do with its close relationship with geometry. However, π also appears in many natural situations having apparently nothing to do with geometry. In many applications, it plays a distinguished role as an eigenvalue. For example, an idealized vibrating string can be modelled as the graph of a function f on the unit interval [0, 1], with fixed ends f(0) = f(1) = 0. The modes of vibration of the string are solutions of the differential equation f″(x) + λ f(x) = 0, or f″(x) = −λ f(x). Thus λ is an eigenvalue of the second derivative operator f ↦ f″, and is constrained by Sturm–Liouville theory to take on only certain specific values. It must be positive, since the operator is negative definite, so it is convenient to write λ = ν², where ν > 0 is called the wavenumber. Then f(x) = sin(νx) satisfies the boundary conditions and the differential equation with ν = π. The value ν = π is, in fact, the least such value of the wavenumber, and is associated with the fundamental mode of vibration of the string. One way to show this is by estimating the energy, which satisfies Wirtinger's inequality: for a function f with f(0) = f(1) = 0 and f, f′ both square integrable, we have: π² ∫₀¹ |f(x)|² dx ≤ ∫₀¹ |f′(x)|² dx, with equality precisely when f is a multiple of sin(πx). Here π appears as an optimal constant in Wirtinger's inequality, and it follows that it is the smallest wavenumber, using the variational characterization of the eigenvalue. As a consequence, π is the smallest singular value of the derivative operator on the space of functions on [0, 1] vanishing at both endpoints (the Sobolev space H¹₀([0, 1])). Inequalities The number π appears in similar eigenvalue problems in higher-dimensional analysis. As mentioned above, it can be characterized via its role as the best constant in the isoperimetric inequality: the area A enclosed by a plane Jordan curve of perimeter P satisfies the inequality 4πA ≤ P², and equality is clearly achieved for the circle, since in that case A = πr² and P = 2πr. Ultimately, as a consequence of the isoperimetric inequality, π appears in the optimal constant for the critical Sobolev inequality in n dimensions, which thus characterizes the role of π in many physical phenomena as well, for example those of classical potential theory. In two dimensions, the critical Sobolev inequality is 2√π ‖f‖₂ ≤ ‖∇f‖₁ for f a smooth function with compact support in ℝ², where ∇f is the gradient of f, and ‖f‖₂ and ‖∇f‖₁ refer respectively to the L² and L¹ norm. The Sobolev inequality is equivalent to the isoperimetric inequality (in any dimension), with the same best constants.
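The claim that π² is the smallest Dirichlet eigenvalue of −d²/dx² on [0, 1] is easy to check numerically. The following sketch (not part of the original exposition; it assumes NumPy is available) discretizes the second-derivative operator with zero boundary conditions and compares the smallest eigenvalue with π².

```python
import numpy as np

# Finite-difference check that the smallest eigenvalue of -d^2/dx^2 on [0, 1]
# with zero (Dirichlet) boundary conditions is pi^2, i.e. the fundamental
# wavenumber of the idealized vibrating string is nu = pi.
n = 800                        # number of interior grid points
h = 1.0 / (n + 1)              # grid spacing
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2

lam_min = np.linalg.eigvalsh(A)[0]   # eigenvalues in ascending order; take the smallest
print(lam_min, np.pi**2)             # both approximately 9.8696...
print(np.sqrt(lam_min))              # approximately 3.14159..., the wavenumber nu
```

The discrete eigenvalue converges to π² as the grid is refined, matching the variational characterization via Wirtinger's inequality described above.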
Wirtinger's inequality also generalizes to higher-dimensional Poincaré inequalities that provide best constants for the Dirichlet energy of an n-dimensional membrane. Specifically, is the greatest constant such that for all convex subsets of of diameter 1, and square-integrable functions u on of mean zero. Just as Wirtinger's inequality is the variational form of the Dirichlet eigenvalue problem in one dimension, the Poincaré inequality is the variational form of the Neumann eigenvalue problem, in any dimension. Fourier transform and Heisenberg uncertainty principle The constant also appears as a critical spectral parameter in the Fourier transform. This is the integral transform, that takes a complex-valued integrable function on the real line to the function defined as: Although there are several different conventions for the Fourier transform and its inverse, any such convention must involve somewhere. The above is the most canonical definition, however, giving the unique unitary operator on that is also an algebra homomorphism of to . The Heisenberg uncertainty principle also contains the number . The uncertainty principle gives a sharp lower bound on the extent to which it is possible to localize a function both in space and in frequency: with our conventions for the Fourier transform, The physical consequence, about the uncertainty in simultaneous position and momentum observations of a quantum mechanical system, is discussed below. The appearance of in the formulae of Fourier analysis is ultimately a consequence of the Stone–von Neumann theorem, asserting the uniqueness of the Schrödinger representation of the Heisenberg group. Gaussian integrals The fields of probability and statistics frequently use the normal distribution as a simple model for complex phenomena; for example, scientists generally assume that the observational error in most experiments follows a normal distribution. The Gaussian function, which is the probability density function of the normal distribution with mean and standard deviation , naturally contains : The factor of makes the area under the graph of equal to one, as is required for a probability distribution. This follows from a change of variables in the Gaussian integral: which says that the area under the basic bell curve in the figure is equal to the square root of . The central limit theorem explains the central role of normal distributions, and thus of , in probability and statistics. This theorem is ultimately connected with the spectral characterization of as the eigenvalue associated with the Heisenberg uncertainty principle, and the fact that equality holds in the uncertainty principle only for the Gaussian function. Equivalently, is the unique constant making the Gaussian normal distribution equal to its own Fourier transform. Indeed, according to , the "whole business" of establishing the fundamental theorems of Fourier analysis reduces to the Gaussian integral. Projective geometry Let be the set of all twice differentiable real functions that satisfy the ordinary differential equation . Then is a two-dimensional real vector space, with two parameters corresponding to a pair of initial conditions for the differential equation. For any , let be the evaluation functional, which associates to each the value of the function at the real point . Then, for each t, the kernel of is a one-dimensional linear subspace of . Hence defines a function from from the real line to the real projective line. 
This function is periodic, and the quantity can be characterized as the period of this map. Topology The constant appears in the Gauss–Bonnet formula which relates the differential geometry of surfaces to their topology. Specifically, if a compact surface has Gauss curvature K, then where is the Euler characteristic, which is an integer. An example is the surface area of a sphere S of curvature 1 (so that its radius of curvature, which coincides with its radius, is also 1.) The Euler characteristic of a sphere can be computed from its homology groups and is found to be equal to two. Thus we have reproducing the formula for the surface area of a sphere of radius 1. The constant appears in many other integral formulae in topology, in particular, those involving characteristic classes via the Chern–Weil homomorphism. Vector calculus Vector calculus is a branch of calculus that is concerned with the properties of vector fields, and has many physical applications such as to electricity and magnetism. The Newtonian potential for a point source situated at the origin of a three-dimensional Cartesian coordinate system is which represents the potential energy of a unit mass (or charge) placed a distance from the source, and is a dimensional constant. The field, denoted here by , which may be the (Newtonian) gravitational field or the (Coulomb) electric field, is the negative gradient of the potential: Special cases include Coulomb's law and Newton's law of universal gravitation. Gauss' law states that the outward flux of the field through any smooth, simple, closed, orientable surface containing the origin is equal to : It is standard to absorb this factor of into the constant , but this argument shows why it must appear somewhere. Furthermore, is the surface area of the unit sphere, but we have not assumed that is the sphere. However, as a consequence of the divergence theorem, because the region away from the origin is vacuum (source-free) it is only the homology class of the surface in that matters in computing the integral, so it can be replaced by any convenient surface in the same homology class, in particular, a sphere, where spherical coordinates can be used to calculate the integral. A consequence of the Gauss law is that the negative Laplacian of the potential is equal to times the Dirac delta function: More general distributions of matter (or charge) are obtained from this by convolution, giving the Poisson equation where is the distribution function. The constant also plays an analogous role in four-dimensional potentials associated with Einstein's equations, a fundamental formula which forms the basis of the general theory of relativity and describes the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy: where is the Ricci curvature tensor, is the scalar curvature, is the metric tensor, is the cosmological constant, is Newton's gravitational constant, is the speed of light in vacuum, and is the stress–energy tensor. The left-hand side of Einstein's equation is a non-linear analogue of the Laplacian of the metric tensor, and reduces to that in the weak field limit, with the term playing the role of a Lagrange multiplier, and the right-hand side is the analogue of the distribution function, times . Cauchy's integral formula One of the key tools in complex analysis is contour integration of a function over a positively oriented (rectifiable) Jordan curve . 
A form of Cauchy's integral formula states that if a point is interior to , then Although the curve is not a circle, and hence does not have any obvious connection to the constant , a standard proof of this result uses Morera's theorem, which implies that the integral is invariant under homotopy of the curve, so that it can be deformed to a circle and then integrated explicitly in polar coordinates. More generally, it is true that if a rectifiable closed curve does not contain , then the above integral is times the winding number of the curve. The general form of Cauchy's integral formula establishes the relationship between the values of a complex analytic function on the Jordan curve and the value of at any interior point of : provided is analytic in the region enclosed by and extends continuously to . Cauchy's integral formula is a special case of the residue theorem, that if is a meromorphic function the region enclosed by and is continuous in a neighbourhood of , then where the sum is of the residues at the poles of . The gamma function and Stirling's approximation The factorial function is the product of all of the positive integers through . The gamma function extends the concept of factorial (normally defined only for non-negative integers) to all complex numbers, except the negative real integers. When the gamma function is evaluated at half-integers, the result contains ; for example and . The gamma function is defined by its Weierstrass product development: where is the Euler–Mascheroni constant. Evaluated at and squared, the equation reduces to the Wallis product formula. The gamma function is also connected to the Riemann zeta function and identities for the functional determinant, in which the constant plays an important role. The gamma function is used to calculate the volume of the n-dimensional ball of radius r in Euclidean n-dimensional space, and the surface area of its boundary, the (n−1)-dimensional sphere: Further, it follows from the functional equation that The gamma function can be used to create a simple approximation to the factorial function for large : which is known as Stirling's approximation. Equivalently, As a geometrical application of Stirling's approximation, let denote the standard simplex in n-dimensional Euclidean space, and denote the simplex having all of its sides scaled up by a factor of . Then Ehrhart's volume conjecture is that this is the (optimal) upper bound on the volume of a convex body containing only one lattice point. Number theory and Riemann zeta function The Riemann zeta function is used in many areas of mathematics. When evaluated at it can be written as Finding a simple solution for this infinite series was a famous problem in mathematics called the Basel problem. Leonhard Euler solved it in 1735 when he showed it was equal to . Euler's result leads to the number theory result that the probability of two random numbers being relatively prime (that is, having no shared factors) is equal to . This probability is based on the observation that the probability that any number is divisible by a prime is (for example, every 7th integer is divisible by 7.) Hence the probability that two numbers are both divisible by this prime is , and the probability that at least one of them is not is . 
For distinct primes, these divisibility events are mutually independent; so the probability that two numbers are relatively prime is given by a product over all primes: This probability can be used in conjunction with a random number generator to approximate using a Monte Carlo approach. The solution to the Basel problem implies that the geometrically derived quantity is connected in a deep way to the distribution of prime numbers. This is a special case of Weil's conjecture on Tamagawa numbers, which asserts the equality of similar such infinite products of arithmetic quantities, localized at each prime p, and a geometrical quantity: the reciprocal of the volume of a certain locally symmetric space. In the case of the Basel problem, it is the hyperbolic 3-manifold . The zeta function also satisfies Riemann's functional equation, which involves as well as the gamma function: Furthermore, the derivative of the zeta function satisfies A consequence is that can be obtained from the functional determinant of the harmonic oscillator. This functional determinant can be computed via a product expansion, and is equivalent to the Wallis product formula. The calculation can be recast in quantum mechanics, specifically the variational approach to the spectrum of the hydrogen atom. Fourier series The constant also appears naturally in Fourier series of periodic functions. Periodic functions are functions on the group of fractional parts of real numbers. The Fourier decomposition shows that a complex-valued function on can be written as an infinite linear superposition of unitary characters of . That is, continuous group homomorphisms from to the circle group of unit modulus complex numbers. It is a theorem that every character of is one of the complex exponentials . There is a unique character on , up to complex conjugation, that is a group isomorphism. Using the Haar measure on the circle group, the constant is half the magnitude of the Radon–Nikodym derivative of this character. The other characters have derivatives whose magnitudes are positive integral multiples of 2. As a result, the constant is the unique number such that the group T, equipped with its Haar measure, is Pontrjagin dual to the lattice of integral multiples of 2. This is a version of the one-dimensional Poisson summation formula. Modular forms and theta functions The constant is connected in a deep way with the theory of modular forms and theta functions. For example, the Chudnovsky algorithm involves in an essential way the j-invariant of an elliptic curve. Modular forms are holomorphic functions in the upper half plane characterized by their transformation properties under the modular group (or its various subgroups), a lattice in the group . An example is the Jacobi theta function which is a kind of modular form called a Jacobi form. This is sometimes written in terms of the nome . The constant is the unique constant making the Jacobi theta function an automorphic form, which means that it transforms in a specific way. Certain identities hold for all automorphic forms. An example is which implies that transforms as a representation under the discrete Heisenberg group. General modular forms and other theta functions also involve , once again because of the Stone–von Neumann theorem. Cauchy distribution and potential theory The Cauchy distribution is a probability density function. The total probability is equal to one, owing to the integral: The Shannon entropy of the Cauchy distribution is equal to , which also involves . 
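The normalization just mentioned, with the factor 1/π in the Cauchy density 1/(π(1 + x²)), can be checked by simulation. The sketch below is a minimal illustration, assuming Python with only the standard library; the sample count is an arbitrary choice. It draws Cauchy samples by inverting the distribution function F(x) = 1/2 + arctan(x)/π and verifies that half of the probability mass lies between −1 and 1, since arctan(1) = π/4.

```python
import math
import random

def cauchy_sample() -> float:
    """Draw from the standard Cauchy distribution by inverting its CDF,
    F(x) = 1/2 + arctan(x)/pi."""
    return math.tan(math.pi * (random.random() - 0.5))

if __name__ == "__main__":
    n = 1_000_000
    inside = sum(1 for _ in range(n) if abs(cauchy_sample()) <= 1.0)
    # For the density 1/(pi*(1 + x**2)), P(|X| <= 1) = 2*arctan(1)/pi = 1/2.
    print(inside / n)   # close to 0.5
```

The inversion step is exactly where π enters: without the 1/π factor the area under the curve would be π rather than one, and the quantile function would no longer be an arctangent scaled by π.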
The Cauchy distribution plays an important role in potential theory because it is the simplest Furstenberg measure, the classical Poisson kernel associated with a Brownian motion in a half-plane. Conjugate harmonic functions and so also the Hilbert transform are associated with the asymptotics of the Poisson kernel. The Hilbert transform H is the integral transform given by the Cauchy principal value of the singular integral The constant is the unique (positive) normalizing factor such that H defines a linear complex structure on the Hilbert space of square-integrable real-valued functions on the real line. The Hilbert transform, like the Fourier transform, can be characterized purely in terms of its transformation properties on the Hilbert space : up to a normalization factor, it is the unique bounded linear operator that commutes with positive dilations and anti-commutes with all reflections of the real line. The constant is the unique normalizing factor that makes this transformation unitary. In the Mandelbrot set An occurrence of in the fractal called the Mandelbrot set was discovered by David Boll in 1991. He examined the behaviour of the Mandelbrot set near the "neck" at . When the number of iterations until divergence for the point is multiplied by , the result approaches as approaches zero. The point at the cusp of the large "valley" on the right side of the Mandelbrot set behaves similarly: the number of iterations until divergence multiplied by the square root of tends to . Outside mathematics Describing physical phenomena Although not a physical constant, appears routinely in equations describing fundamental principles of the universe, often because of 's relationship to the circle and to spherical coordinate systems. A simple formula from the field of classical mechanics gives the approximate period of a simple pendulum of length , swinging with a small amplitude ( is the earth's gravitational acceleration): One of the key formulae of quantum mechanics is Heisenberg's uncertainty principle, which shows that the uncertainty in the measurement of a particle's position (Δ) and momentum (Δ) cannot both be arbitrarily small at the same time (where is Planck's constant): The fact that is approximately equal to 3 plays a role in the relatively long lifetime of orthopositronium. The inverse lifetime to lowest order in the fine-structure constant is where is the mass of the electron. is present in some structural engineering formulae, such as the buckling formula derived by Euler, which gives the maximum axial load that a long, slender column of length , modulus of elasticity , and area moment of inertia can carry without buckling: The field of fluid dynamics contains in Stokes' law, which approximates the frictional force exerted on small, spherical objects of radius , moving with velocity in a fluid with dynamic viscosity : In electromagnetics, the vacuum permeability constant μ0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation. Before 20 May 2019, it was defined as exactly A relation for the speed of light in vacuum, can be derived from Maxwell's equations in the medium of classical vacuum using a relationship between μ0 and the electric constant (vacuum permittivity), in SI units: Under ideal conditions (uniform gentle slope on a homogeneously erodible substrate), the sinuosity of a meandering river approaches . 
The sinuosity is the ratio between the actual length and the straight-line distance from source to mouth. Faster currents along the outside edges of a river's bends cause more erosion than along the inside edges, thus pushing the bends even farther out, and increasing the overall loopiness of the river. However, that loopiness eventually causes the river to double back on itself in places and "short-circuit", creating an ox-bow lake in the process. The balance between these two opposing factors leads to an average ratio of between the actual length and the direct distance between source and mouth. Memorizing digits Piphilology is the practice of memorizing large numbers of digits of , and world-records are kept by the Guinness World Records. The record for memorizing digits of , certified by Guinness World Records, is 70,000 digits, recited in India by Rajveer Meena in 9 hours and 27 minutes on 21 March 2015. In 2006, Akira Haraguchi, a retired Japanese engineer, claimed to have recited 100,000 decimal places, but the claim was not verified by Guinness World Records. One common technique is to memorize a story or poem in which the word lengths represent the digits of : The first word has three letters, the second word has one, the third has four, the fourth has one, the fifth has five, and so on. Such memorization aids are called mnemonics. An early example of a mnemonic for pi, originally devised by English scientist James Jeans, is "How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics." When a poem is used, it is sometimes referred to as a piem. Poems for memorizing have been composed in several languages in addition to English. Record-setting memorizers typically do not rely on poems, but instead use methods such as remembering number patterns and the method of loci. A few authors have used the digits of to establish a new form of constrained writing, where the word lengths are required to represent the digits of . The Cadaeic Cadenza contains
with a 3,072-sided polygon to obtain a value of π of 3.1416. Liu later invented a faster method of calculating π and obtained a value of 3.14 with a 96-sided polygon, by taking advantage of the fact that the differences in area of successive polygons form a geometric series with a factor of 4. The Chinese mathematician Zu Chongzhi, around 480 AD, calculated that 3.1415926 < π < 3.1415927 and suggested the approximations 355/113 = 3.14159292035... and 22/7 = 3.142857142857..., which he termed the Milü ("close ratio") and Yuelü ("approximate ratio"), respectively, using Liu Hui's algorithm applied to a 12,288-sided polygon. With a correct value for its first seven decimal digits, this value remained the most accurate approximation of π available for the next 800 years. The Indian astronomer Aryabhata used a value of 3.1416 in his Āryabhaṭīya (499 AD). Fibonacci in c. 1220 computed 3.1418 using a polygonal method, independent of Archimedes. Italian author Dante apparently employed the value . The Persian astronomer Jamshīd al-Kāshī produced 9 sexagesimal digits, roughly the equivalent of 16 decimal digits, in 1424 using a polygon with 3×2²⁸ sides, which stood as the world record for about 180 years. French mathematician François Viète in 1579 achieved 9 digits with a polygon of 3×2¹⁷ sides. Flemish mathematician Adriaan van Roomen arrived at 15 decimal places in 1593. In 1596, Dutch mathematician Ludolph van Ceulen reached 20 digits, a record he later increased to 35 digits (as a result, π was called the "Ludolphian number" in Germany until the early 20th century). Dutch scientist Willebrord Snellius reached 34 digits in 1621, and Austrian astronomer Christoph Grienberger arrived at 38 digits in 1630 using 10⁴⁰ sides. Christiaan Huygens was able to arrive at 10 decimal places in 1654 using a slightly different method equivalent to Richardson extrapolation.

Infinite series

The calculation of π was revolutionized by the development of infinite series techniques in the 16th and 17th centuries. An infinite series is the sum of the terms of an infinite sequence. Infinite series allowed mathematicians to compute π with much greater precision than Archimedes and others who used geometrical techniques. Although infinite series were exploited for π most notably by European mathematicians such as James Gregory and Gottfried Wilhelm Leibniz, the approach was first discovered in India sometime between 1400 and 1500 AD. The first written description of an infinite series that could be used to compute π was laid out in Sanskrit verse by Indian astronomer Nilakantha Somayaji in his Tantrasamgraha, around 1500 AD. The series are presented without proof, but proofs are presented in a later Indian work, Yuktibhāṣā, from around 1530 AD. Nilakantha attributes the series to an earlier Indian mathematician, Madhava of Sangamagrama, who lived c. 1350 – c. 1425. Several infinite series are described, including series for sine, tangent, and cosine, which are now referred to as the Madhava series or Gregory–Leibniz series. Madhava used infinite series to estimate π to 11 digits around 1400, but that value was improved on around 1430 by the Persian mathematician Jamshīd al-Kāshī, using a polygonal algorithm.
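Polygonal algorithms like al-Kāshī's, and those of Liu Hui and van Ceulen before and after him, all amount to repeatedly doubling the number of sides of a polygon inscribed in a circle. The sketch below is a modern replay of that idea, assuming Python; the number of doublings is an arbitrary choice, and the only geometric input is the side-doubling identity s' = √(2 − √(4 − s²)) for a unit circle.

```python
import math

def polygon_pi(doublings: int) -> float:
    """Approximate pi by the half-perimeter of a regular polygon inscribed in a unit circle.

    Starts from an inscribed hexagon (side length 1) and doubles the number of
    sides the requested number of times.
    """
    sides = 6
    s = 1.0
    for _ in range(doublings):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        sides *= 2
    return sides * s / 2.0  # half the perimeter approximates pi

if __name__ == "__main__":
    for k in (0, 1, 4, 12):
        print(f"{6 * 2**k:>6} sides: {polygon_pi(k):.10f}")
```

Four doublings reproduce the 96-gon value near 3.14 used by Archimedes and Liu Hui. In double precision the subtraction inside the recurrence starts losing accuracy after roughly 25 doublings, which hints at why further progress historically demanded either enormous patience or new ideas such as infinite series.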
The first infinite sequence discovered in Europe was an infinite product (rather than an infinite sum, which is more typically used in calculations) found by French mathematician François Viète in 1593: 2/π = (√2/2) · (√(2+√2)/2) · (√(2+√(2+√2))/2) ⋯ The second infinite sequence found in Europe, by John Wallis in 1655, was also an infinite product: π/2 = (2/1)·(2/3)·(4/3)·(4/5)·(6/5)·(6/7) ⋯ The discovery of calculus, by English scientist Isaac Newton and German mathematician Gottfried Wilhelm Leibniz in the 1660s, led to the development of many infinite series for approximating π. Newton himself used an arcsin series to compute a 15-digit approximation of π in 1665 or 1666, later writing "I am ashamed to tell you to how many figures I carried these computations, having no other business at the time." In Europe, Madhava's formula was rediscovered by Scottish mathematician James Gregory in 1671, and by Leibniz in 1674: arctan x = x − x³/3 + x⁵/5 − x⁷/7 + ⋯ This formula, the Gregory–Leibniz series, equals π/4 when evaluated with x = 1. In 1699, English mathematician Abraham Sharp used the Gregory–Leibniz series (evaluated at 1/√3) to compute π to 71 digits, breaking the previous record of 39 digits, which was set with a polygonal algorithm. The Gregory–Leibniz series for π is simple, but converges very slowly (that is, approaches the answer gradually), so it is not used in modern π calculations. In 1706 John Machin used the Gregory–Leibniz series to produce an algorithm that converged much faster: π/4 = 4 arctan(1/5) − arctan(1/239). Machin reached 100 digits of π with this formula. Other mathematicians created variants, now known as Machin-like formulae, that were used to set several successive records for calculating digits of π. Machin-like formulae remained the best-known method for calculating π well into the age of computers, and were used to set records for 250 years, culminating in a 620-digit approximation in 1946 by Daniel Ferguson – the best approximation achieved without the aid of a calculating device. A record was set by the calculating prodigy Zacharias Dase, who in 1844 employed a Machin-like formula to calculate 200 decimals of π in his head at the behest of German mathematician Carl Friedrich Gauss. British mathematician William Shanks calculated π to 607 digits in 1853, but made a mistake in the 528th digit, rendering all subsequent digits incorrect. Though he calculated an additional 100 digits in 1873, bringing the total up to 707, his previous mistake rendered all the new digits incorrect as well.

Rate of convergence

Some infinite series for π converge faster than others. Given the choice of two infinite series for π, mathematicians will generally use the one that converges more rapidly because faster convergence reduces the amount of computation needed to calculate π to any given accuracy. A simple infinite series for π is the Gregory–Leibniz series: π = 4/1 − 4/3 + 4/5 − 4/7 + 4/9 − ⋯ As individual terms of this infinite series are added to the sum, the total gradually gets closer to π, and – with a sufficient number of terms – can get as close to π as desired. It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π. An infinite series for π (published by Nilakantha in the 15th century) that converges more rapidly than the Gregory–Leibniz series is: π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ⋯ Note that (n − 1)n(n + 1) = n³ − n. The following table compares the convergence rates of these two series: After five terms, the sum of the Gregory–Leibniz series is within 0.2 of the correct value of π, whereas the sum of Nilakantha's series is within 0.002 of the correct value of π. Nilakantha's series converges faster and is more useful for computing digits of π.
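The contrast in convergence just described is easy to reproduce numerically. The sketch below, assuming Python, sums the first few terms of each series; the term counts are arbitrary.

```python
import math

def gregory_leibniz(n_terms: int) -> float:
    """Partial sum of pi = 4/1 - 4/3 + 4/5 - 4/7 + ..."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def nilakantha(n_terms: int) -> float:
    """Partial sum of pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ..."""
    total = 3.0
    for k in range(1, n_terms):
        n = 2 * k
        total += (-1) ** (k + 1) * 4.0 / (n * (n + 1) * (n + 2))
    return total

if __name__ == "__main__":
    for terms in (5, 50, 500):
        print(terms,
              f"Gregory-Leibniz error {abs(gregory_leibniz(terms) - math.pi):.6f}",
              f"Nilakantha error {abs(nilakantha(terms) - math.pi):.9f}")
```

With five terms each, the errors come out near 0.2 and 0.002, matching the figures quoted above; after 500 terms the Gregory–Leibniz partial sum is still off in the third decimal place, while Nilakantha's series is already correct to about eight.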
Series that converge even faster include Machin's series and Chudnovsky's series, the latter producing 14 correct decimal digits per term.

Irrationality and transcendence

Not all mathematical advances relating to π were aimed at increasing the accuracy of approximations. When Euler solved the Basel problem in 1735, finding the exact value of the sum of the reciprocal squares, he established a connection between π and the prime numbers that later contributed to the development and study of the Riemann zeta function: π²/6 = 1/1² + 1/2² + 1/3² + 1/4² + ⋯ Swiss scientist Johann Heinrich Lambert in 1761 proved that π is irrational, meaning it is not equal to the quotient of any two whole numbers. Lambert's proof exploited a continued-fraction representation of the tangent function. French mathematician Adrien-Marie Legendre proved in 1794 that π² is also irrational. In 1882, German mathematician Ferdinand von Lindemann proved that π is transcendental, confirming a conjecture made by both Legendre and Euler. Hardy and Wright state that "the proofs were afterwards modified and simplified by Hilbert, Hurwitz, and other writers".

Adoption of the symbol π

In the earliest usages, the Greek letter π was an abbreviation of the Greek word for periphery (), and was combined in ratios with δ (for diameter) or ρ (for radius) to form circle constants. (Before then, mathematicians sometimes used letters such as c or p instead.) The first recorded use is Oughtred's "", to express the ratio of periphery and diameter in the 1647 and later editions of . Barrow likewise used "" to represent the constant 3.14..., while Gregory instead used "" to represent 6.28... . The earliest known use of the Greek letter π alone to represent the ratio of a circle's circumference to its diameter was by Welsh mathematician William Jones in his 1706 work ; or, a New Introduction to the Mathematics. The Greek letter first appears there in the phrase "1/2 Periphery ()" in the discussion of a circle with radius one. However, he writes that his equations for π are from the "ready pen of the truly ingenious Mr. John Machin", leading to speculation that Machin may have employed the Greek letter before Jones. Jones' notation was not immediately adopted by other mathematicians, with the fraction notation still being used as late as 1767. Euler started using the single-letter form beginning with his 1727 Essay Explaining the Properties of Air, though he used it for the ratio of periphery to radius (6.28...) in this and some later writing. Euler first used π for 3.14... in his 1736 work Mechanica, and continued in his widely-read 1748 work (he wrote: "for the sake of brevity we will write this number as π; thus π is equal to half the circumference of a circle of radius 1"). Because Euler corresponded heavily with other mathematicians in Europe, the use of the Greek letter spread rapidly, and the practice was universally adopted thereafter in the Western world, though the definition still varied between 3.14... and 6.28... as late as 1761.

Modern quest for more digits

Computer era and iterative algorithms

The development of computers in the mid-20th century again revolutionized the hunt for digits of π. Mathematicians John Wrench and Levi Smith reached 1,120 digits in 1949 using a desk calculator. Using an inverse tangent (arctan) infinite series, a team led by George Reitwiesner and John von Neumann that same year achieved 2,037 digits with a calculation that took 70 hours of computer time on the ENIAC computer.
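The ENIAC computation just mentioned, like the desk-calculator and hand computations before it, relied on arctangent series of the Machin type. The following sketch, assuming Python and fixed-point integer arithmetic with a few guard digits (the digit count and the guard margin are arbitrary choices), evaluates Machin's 1706 identity π/4 = 4·arctan(1/5) − arctan(1/239).

```python
def arctan_inv(x: int, unity: int) -> int:
    """Fixed-point arctan(1/x) in units of 1/unity, via the alternating Taylor series."""
    power = unity // x          # (1/x) in fixed point
    total = power
    n, sign = 1, -1
    while power != 0:
        power //= x * x         # next odd power of 1/x
        n += 2
        total += sign * (power // n)
        sign = -sign
    return total

def machin_pi(digits: int = 50) -> str:
    """Decimal digits of pi from pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    unity = 10 ** (digits + 10)                    # ten guard digits against truncation
    pi_fixed = 16 * arctan_inv(5, unity) - 4 * arctan_inv(239, unity)
    s = str(pi_fixed // 10 ** 10)                  # drop the guard digits
    return s[0] + "." + s[1:]

if __name__ == "__main__":
    print(machin_pi(50))   # 3.14159265358979323846...
```

Fifty digits need only a few dozen terms of each series, which helps explain why Machin-like formulas held the record books for two and a half centuries.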
The record, always relying on an arctan series, was broken repeatedly (7,480 digits in 1957; 10,000 digits in 1958; 100,000 digits in 1961) until 1 million digits were reached in 1973. Two additional developments around 1980 once again accelerated the ability to compute . First, the discovery of new iterative algorithms for computing , which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern computations because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods. The iterative algorithms were independently published in 1975–1976 by physicist Eugene Salamin and scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm. The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent-Salamin algorithm doubles the number of digits in each iteration. In 1984, brothers John and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step. Iterative methods were used by Japanese mathematician Yasumasa Kanada to set several records for computing between 1995 and 2002. This rapid convergence comes at a price: the iterative algorithms require significantly more memory than infinite series. Motives for computing For most numerical calculations involving , a handful of digits provide sufficient precision. According to Jörg Arndt and Christoph Haenel, thirty-nine digits are sufficient to perform most cosmological calculations, because that is the accuracy necessary to calculate the circumference of the observable universe with a precision of one atom. Accounting for additional digits needed to compensate for computational round-off errors, Arndt concludes that a few hundred digits would suffice for any scientific application. Despite this, people have worked strenuously to compute to thousands and millions of digits. This effort may be partly ascribed to the human compulsion to break records, and such achievements with often make headlines around the world. They also have practical benefits, such as testing supercomputers, testing numerical analysis algorithms (including high-precision multiplication algorithms); and within pure mathematics itself, providing data for evaluating the randomness of the digits of . Rapidly convergent series Modern calculators do not use iterative algorithms exclusively. New infinite series were discovered in the 1980s and 1990s that are as fast as iterative algorithms, yet are simpler and less memory intensive. 
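The quadratic convergence of the Gauss–Legendre (Brent–Salamin) iteration described above can be seen even in ordinary double precision. The sketch below, assuming Python, uses one common statement of the iteration; real record attempts run the same recurrence in arbitrary-precision arithmetic.

```python
import math

def gauss_legendre_pi(iterations: int) -> float:
    """Approximate pi with the Gauss–Legendre (Brent–Salamin) AGM iteration."""
    a, b = 1.0, 1.0 / math.sqrt(2.0)
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        p *= 2.0
        a = a_next
    return (a + b) ** 2 / (4.0 * t)

if __name__ == "__main__":
    for k in range(1, 5):
        # The number of correct digits roughly doubles with each iteration
        # until the limits of 64-bit floating point are reached.
        print(k, gauss_legendre_pi(k))
```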
The fast iterative algorithms were anticipated in 1914, when Indian mathematician Srinivasa Ramanujan published dozens of innovative new formulae for , remarkable for their elegance, mathematical depth and rapid convergence. One of his formulae, based on modular equations, is This series converges much more rapidly than most arctan series, including Machin's formula. Bill Gosper was the first to use it for advances in the calculation of , setting a record of 17 million digits in 1985. Ramanujan's formulae anticipated the modern algorithms developed by the Borwein brothers (Jonathan and Peter) and the Chudnovsky brothers. The Chudnovsky formula developed in 1987 is It produces about 14 digits of per term, and has been used for several record-setting calculations, including the first to surpass 1 billion (109) digits in 1989 by the Chudnovsky brothers, 10 trillion (1013) digits in 2011 by Alexander Yee and Shigeru Kondo, over 22 trillion digits in 2016 by Peter Trueb and 50 trillion digits by Timothy Mullican in 2020. For similar formulas, see also the Ramanujan–Sato series. In 2006, mathematician Simon Plouffe used the PSLQ integer relation algorithm to generate several new formulas for , conforming to the following template: where is (Gelfond's constant), is an odd number, and are certain rational numbers that Plouffe computed. Monte Carlo methods Monte Carlo methods, which evaluate the results of multiple random trials, can be used to create approximations of . Buffon's needle is one such technique: If a needle of length is dropped times on a surface on which parallel lines are drawn units apart, and if of those times it comes to rest crossing a line ( > 0), then one may approximate based on the counts: Another Monte Carlo method for computing is to draw a circle inscribed in a square, and randomly place dots in the square. The ratio of dots inside the circle to the total number of dots will approximately equal . Another way to calculate using probability is to start with a random walk, generated by a sequence of (fair) coin tosses: independent random variables such that with equal probabilities. The associated random walk is so that, for each , is drawn from a shifted and scaled binomial distribution. As varies, defines a (discrete) stochastic process. Then can be calculated by This Monte Carlo method is independent of any relation to circles, and is a consequence of the central limit theorem, discussed below. These Monte Carlo methods for approximating are very slow compared to other methods, and do not provide any information on the exact number of digits that are obtained. Thus they are never used to approximate when speed or accuracy is desired. Spigot algorithms Two algorithms were discovered in 1995 that opened up new avenues of research into . They are called spigot algorithms because, like water dripping from a spigot, they produce single digits of that are not reused after they are calculated. This is in contrast to infinite series or iterative algorithms, which retain and use all intermediate digits until the final result is produced. Mathematicians Stan Wagon and Stanley Rabinowitz produced a simple spigot algorithm in 1995. Its speed is comparable to arctan algorithms, but not as fast as iterative algorithms. Another spigot algorithm, the BBP digit extraction algorithm, was discovered in 1995 by Simon Plouffe: This formula, unlike others before it, can produce any individual hexadecimal digit of without calculating all the preceding digits. 
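The digit-extraction property of the BBP formula π = Σ_{k≥0} 16⁻ᵏ (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)) can be illustrated with a short program. The sketch below, assuming Python, follows the standard trick of computing the fractional part of 16ⁿπ with modular exponentiation; the helper names are ours, and floating-point round-off limits it to positions up to roughly the millions.

```python
def _tail(j: int, n: int) -> float:
    """Fractional part of the sum over k of 16**(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                       # large powers reduced mod (8k + j)
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:                                  # a few rapidly vanishing extra terms
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            return s % 1.0
        s += term
        k += 1

def pi_hex_digit(n: int) -> str:
    """Hexadecimal digit of pi in position n after the point (n = 0 gives '2')."""
    x = (4 * _tail(1, n) - 2 * _tail(4, n) - _tail(5, n) - _tail(6, n)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

if __name__ == "__main__":
    print("".join(pi_hex_digit(i) for i in range(8)))   # 243f6a88, the start of pi's hex expansion
```

The essential point is that no digit before position n is ever computed, which is exactly the property exploited when spot-checking record computations.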
Individual binary digits may be extracted from individual hexadecimal digits, and octal digits can be extracted from one or two hexadecimal digits. Variations of the algorithm have been discovered, but no digit extraction algorithm has yet been found that rapidly produces decimal digits. An important application of digit extraction algorithms is to validate new claims of record computations: After a new record is claimed, the decimal result is converted to hexadecimal, and then a digit extraction algorithm is used to calculate several random hexadecimal digits near the end; if they match, this provides a measure of confidence that the entire computation is correct. Between 1998 and 2000, the distributed computing project PiHex used Bellard's formula (a modification of the BBP algorithm) to compute the quadrillionth (1015th) bit of , which turned out to be 0. In September 2010, a Yahoo! employee used the company's Hadoop application on one thousand computers over a 23-day period to compute 256 bits of at the two-quadrillionth (2×1015th) bit, which also happens to be zero. Role and characterizations in mathematics Because is closely related to the circle, it is found in many formulae from the fields of geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. Other branches of science, such as statistics, physics, Fourier analysis, and number theory, also include in some of their important formulae. Geometry and trigonometry appears in formulae for areas and volumes of geometrical shapes based on circles, such as ellipses, spheres, cones, and tori. Below are some of the more common formulae that involve . The circumference of a circle with radius is . The area of a circle with radius is . The area of an ellipse with semi-major axis and semi-minor axis is . The volume of a sphere with radius is . The surface area of a sphere with radius is . Some of the formulae above are special cases of the volume of the n-dimensional ball and the surface area of its boundary, the (n−1)-dimensional sphere, given below. Apart from circles, there are other curves of constant width. By Barbier's theorem, every curve of constant width has perimeter times
such as discursive regime, or re-invoked those of older philosophers like episteme and genealogy in order to explain the relationship between meaning, power, and social behavior within social orders (see The Order of Things, The Archaeology of Knowledge, Discipline and Punish, and The History of Sexuality). Jean-François Lyotard Jean-François Lyotard is credited with being the first to use the term in a philosophical context, in his 1979 work . In it, he follows Wittgenstein's language games model and speech act theory, contrasting two different language games, that of the expert, and that of the philosopher. He talks about the transformation of knowledge into information in the computer age and likens the transmission or reception of coded messages (information) to a position within a language game. Lyotard defined philosophical postmodernism in The Postmodern Condition, writing: "Simplifying to the extreme, I define postmodern as incredulity towards metanarratives...." where what he means by metanarrative is something like a unified, complete, universal, and epistemically certain story about everything that is. Postmodernists reject metanarratives because they reject the concept of truth that metanarratives presuppose. Postmodernist philosophers, in general, argue that truth is always contingent on historical and social context rather than being absolute and universal—and that truth is always partial and "at issue" rather than being complete and certain. Richard Rorty Richard Rorty argues in Philosophy and the Mirror of Nature that contemporary analytic philosophy mistakenly imitates scientific methods. In addition, he denounces the traditional epistemological perspectives of representationalism and correspondence theory that rely upon the independence of knowers and observers from phenomena and the passivity of natural phenomena in relation to consciousness. Jean Baudrillard Jean Baudrillard, in Simulacra and Simulation, introduced the concept that reality or the principle of the Real is short-circuited by the interchangeability of signs in an era whose communicative and semantic acts are dominated by electronic media and digital technologies. For Baudrillard, "simulation is no longer that of a territory, a referential being or a substance. It is the generation by models of a real without origin or reality: a hyperreal." Fredric Jameson Fredric Jameson set forth one of the first expansive theoretical treatments of postmodernism as a historical period, intellectual trend, and social phenomenon in a series of lectures at the Whitney Museum, later expanded as Postmodernism, or, the Cultural Logic of Late Capitalism (1991). Douglas Kellner In Analysis of the Journey, a journal birthed from postmodernism, Douglas Kellner insists that the "assumptions and procedures of modern theory" must be forgotten. Extensively, Kellner analyzes the terms of this theory in real-life experiences and examples. Kellner used science and technology studies as a major part of his analysis; he urged that the theory is incomplete without it. The scale was larger than just postmodernism alone; it must be interpreted through cultural studies where science and technology studies play a huge role. The reality of the September 11 attacks on the United States of America is the catalyst for his explanation. In response, Kellner continues to examine the repercussions of understanding the effects of the 11 September attacks. 
He questions if the attacks are only able to be understood in a limited form of postmodern theory due to the level of irony. The conclusion he depicts is simple: postmodernism, as most use it today, will decide what experiences and signs in one's reality will be one's reality as they know it. Manifestations Architecture The idea of Postmodernism in architecture began as a response to the perceived blandness and failure of the Utopianism of the Modern movement. Modern Architecture, as established and developed by Walter Gropius and Le Corbusier, was focused on: the attempted harmony of form and function; and, the dismissal of "frivolous ornament." the pursuit of a perceived ideal perfection; They argued for architecture that represented the spirit of the age as depicted in cutting-edge technology, be it airplanes, cars, ocean liners, or even supposedly artless grain silos. Modernist Ludwig Mies van der Rohe is associated with the phrase "less is more". Critics of Modernism have: argued that the attributes of perfection and minimalism are themselves subjective; pointed out anachronisms in modern thought; and, questioned the benefits of its philosophy. The intellectual scholarship regarding postmodernism and architecture is closely linked with the writings of critic-turned-architect Charles Jencks, beginning with lectures in the early 1970s and his essay "The Rise of Post Modern Architecture" from 1975. His magnum opus, however, is the book The Language of Post-Modern Architecture, first published in 1977, and since running to seven editions. Jencks makes the point that Post-Modernism (like Modernism) varies for each field of art, and that for architecture it is not just a reaction to Modernism but what he terms double coding: "Double Coding: the combination of Modern techniques with something else (usually traditional building) in order for architecture to communicate with the public and a concerned minority, usually other architects." In their book, "Revisiting Postmodernism", Terry Farrell and Adam Furman argue that postmodernism brought a more joyous and sensual experience to the culture, particularly in architecture. Art Postmodern art is a body of art movements that sought to contradict some aspects of modernism or some aspects that emerged or developed in its aftermath. Cultural production manifesting as intermedia, installation art, conceptual art, deconstructionist display, and multimedia, particularly involving video, are described as postmodern. Graphic design Early mention of postmodernism as an element of graphic design appeared in the British magazine, "Design". A characteristic of postmodern graphic design is that "retro, techno, punk, grunge, beach, parody, and pastiche were all conspicuous trends. Each had its own sites and venues, detractors and advocates." Literature Jorge Luis Borges' (1939) short story "Pierre Menard, Author of the Quixote", is often considered as predicting postmodernism and is a paragon of the ultimate parody. Samuel Beckett is also considered an important precursor and influence. Novelists who are commonly connected with postmodern literature include Vladimir Nabokov, William Gaddis, Umberto Eco, Pier Vittorio Tondelli, John Hawkes, William S. Burroughs, Kurt Vonnegut, John Barth, Jean Rhys, Donald Barthelme, E. L. Doctorow, Richard Kalich, Jerzy Kosiński, Don DeLillo, Thomas Pynchon (Pynchon's work has also been described as high modern), Ishmael Reed, Kathy Acker, Ana Lydia Vega, Jáchym Topol and Paul Auster. 
In 1971, the Arab-American scholar Ihab Hassan published The Dismemberment of Orpheus: Toward a Postmodern Literature, an early work of literary criticism from a postmodern perspective that traces the development of what he calls "literature of silence" through Marquis de Sade, Franz Kafka, Ernest Hemingway, Samuel Beckett, and many others, including developments such as the Theatre of the Absurd and the nouveau roman. In Postmodernist Fiction (1987), Brian McHale details the shift from modernism to postmodernism, arguing that the former is characterized by an epistemological dominant and that postmodern works have developed out of modernism and are primarily concerned with questions of ontology. McHale's second book, Constructing Postmodernism (1992), provides readings of postmodern fiction and some contemporary writers who go under the label of cyberpunk. McHale's "What Was Postmodernism?" (2007) follows Raymond Federman's lead in now using the past tense when discussing postmodernism. Music Jonathan Kramer has written that avant-garde musical compositions (which some would consider modernist rather than postmodernist) "defy more than seduce the listener, and they extend by potentially unsettling means the very idea of what music is." The postmodern impulse in classical music arose in the 1960s with the advent of musical minimalism. Composers such as Terry Riley, Henryk Górecki, Bradley Joseph, John Adams, Steve Reich, Philip Glass, Michael Nyman, and Lou Harrison reacted to the perceived elitism and dissonant sound of atonal academic modernism by producing music with simple textures and relatively consonant harmonies, whilst others, most notably John Cage challenged the prevailing narratives of beauty and objectivity common to Modernism. Author on postmodernism, Dominic Strinati, has noted, it is also important "to include in this category the so-called 'art rock' musical innovations and mixing of styles associated with groups like Talking Heads, and performers like Laurie Anderson, together with the self-conscious 'reinvention of disco' by the Pet Shop Boys". Urban planning Modernism sought to design and plan cities that followed the logic of the new model of industrial mass production; reverting to large-scale solutions, aesthetic standardisation, and prefabricated design solutions. Modernism eroded urban living by its failure to recognise differences and aim towards homogeneous landscapes (Simonsen 1990, 57). Jane Jacobs' 1961 book The Death and Life of Great American Cities was a sustained critique of urban planning as it had developed within Modernism and marked a transition from modernity to postmodernity in thinking about urban planning (Irving 1993, 479). The transition from Modernism to Postmodernism is often said to have happened at 3:32 pm on 15 July in 1972, when Pruitt–Igoe, a housing development for low-income people in St. Louis designed by architect Minoru Yamasaki, which had been a prize-winning version of Le Corbusier's 'machine for modern living,' was deemed uninhabitable and was torn down (Irving 1993, 480). Since then, Postmodernism has involved theories that embrace and aim to create diversity. It exalts uncertainty, flexibility and change (Hatuka & D'Hooghe 2007) and rejects utopianism while embracing a utopian way of thinking and acting. Postmodernity of 'resistance' seeks to deconstruct Modernism and is a critique of the origins without necessarily returning to them (Irving 1993, 60). 
As a result of Postmodernism, planners are much less inclined to lay a firm or steady claim to there being one single 'right way' of engaging in urban planning and are more open to different styles and ideas of 'how to plan' (Irving 474). The postmodern approach to understanding the city was pioneered in the 1980s by what could be called the "Los Angeles School of Urbanism", centered on UCLA's Urban Planning Department, where contemporary Los Angeles was taken to be the postmodern city par excellence, counterposed to what had been the dominant ideas of the Chicago School formed in the 1920s at the University of Chicago, with its framework of urban ecology, its emphasis on functional areas of use within a city, and its use of concentric circles to understand the sorting of different population groups. Edward Soja of the Los Angeles School combined Marxist and postmodern perspectives and focused on the economic and social changes (globalization, specialization, industrialization/deindustrialization, Neo-Liberalism, mass migration) that led to the creation of large city-regions with their patchwork of population groups and economic uses.

Criticisms

Criticisms of postmodernism are intellectually diverse, including the argument that postmodernism is meaningless and promotes obscurantism. In part in reference to post-modernism, conservative English philosopher Roger Scruton wrote, "A writer who says that there are no truths, or that all truth is 'merely relative,' is asking you not to believe him. So don't." Similarly, Dick Hebdige criticized the vagueness of the term, enumerating a long list of otherwise unrelated concepts that people have designated as postmodernism, from "the décor of a room" or "a 'scratch' video", to fear of nuclear armageddon and the "implosion of meaning", and stated that anything that could signify all of those things was "a buzzword". The linguist and philosopher Noam Chomsky has said that postmodernism is meaningless because it adds nothing to analytical or empirical knowledge. He asks why postmodernist intellectuals do not respond like people in other fields when asked, "what are the principles of their theories, on what evidence are they based, what do they explain that wasn't already obvious, etc.?...If [these requests] can't be met, then I'd suggest recourse to Hume's advice in similar circumstances: 'to the flames'." Christian philosopher William Lane Craig has said "The idea that we live in a postmodern culture is a myth. In fact, a postmodern culture is an impossibility; it would be utterly unliveable. People are not relativistic when it comes to matters of science, engineering, and technology; rather, they are relativistic and pluralistic in matters of religion and ethics. But, of course, that's not postmodernism; that's modernism!" American author Thomas Pynchon targeted postmodernism as an object of derision in his novels, openly mocking postmodernist discourse. American academic and aesthete Camille Paglia has said: German philosopher Albrecht Wellmer has said that "postmodernism at its best might be seen as a self-critical – a sceptical, ironic, but nevertheless unrelenting – form of modernism; a modernism beyond utopianism, scientism and foundationalism; in short a post-metaphysical modernism." A formal, academic critique of postmodernism can be found in Beyond the Hoax by physics professor Alan Sokal and in Fashionable Nonsense by Sokal and Belgian physicist Jean Bricmont, both books discussing the so-called Sokal affair.
In 1996, Sokal wrote a deliberately nonsensical article in a style similar to postmodernist articles, which was accepted for publication by the postmodern cultural studies journal, Social Text. On the same day of the release he published another article in a different journal explaining the Social Text article hoax. The philosopher Thomas Nagel has supported Sokal and Bricmont, describing their book Fashionable Nonsense as consisting largely of "extensive quotations of scientific gibberish from name-brand French intellectuals, together with eerily patient explanations of why it is gibberish," and agreeing that "there does seem to be something about the Parisian scene that is particularly hospitable to reckless verbosity." The French psychotherapist and philosopher, Félix Guattari, rejected its theoretical assumptions by arguing that the structuralist and postmodernist visions of the world were not flexible enough to seek explanations in psychological, social, and environmental domains at the same time. Zimbabwean-born British Marxist Alex Callinicos says that postmodernism "reflects the disappointed revolutionary generation of '68, and the incorporation of many of its members into the professional and managerial 'new middle class'. It is best read as a symptom of political frustration and social mobility rather than as a significant intellectual or cultural phenomenon in its own right." Analytic philosopher Daniel Dennett said, "Postmodernism, the school of 'thought' that proclaimed 'There are no truths, only interpretations' has largely played itself out in absurdity, but it has left behind a generation of academics in the humanities disabled by their distrust of the very idea of truth and their disrespect for evidence, settling for 'conversations' in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster." American historian Richard Wolin traces the origins of postmodernism to intellectual roots in fascism, writing "postmodernism has been nourished by the doctrines of Friedrich Nietzsche, Martin Heidegger, Maurice Blanchot, and Paul de Man—all of whom either prefigured or succumbed to the proverbial intellectual fascination with fascism." Daniel A. Farber and Suzanna Sherry criticised postmodernism for reducing the complexity of the modern world to an expression of power and for undermining truth and reason: Richard Caputo, William Epstein, David Stoesz & Bruce Thyer consider postmodernism to be a "dead-end in social work epistemology." They write: H. Sidky pointed out what he sees as several inherent flaws of a postmodern antiscience perspective, including the confusion of the authority of science (evidence) with the scientist conveying the knowledge; its self-contradictory claim that all truths are relative; and its strategic ambiguity. He sees 21st-century anti-scientific and pseudo-scientific approaches to knowledge, particularly in the United States, as rooted in a postmodernist "decades-long academic assault on science".
it to religion as well as theology, to Catholic feeling as well as to Catholic tradition." In 1942 H. R. Hays described postmodernism as a new literary form. In 1926, Bernard Iddings Bell, president of St. Stephen's College (now Bard College), published Postmodernism and Other Essays, marking the first use of the term to describe the historical period following Modernity. The essay criticizes the lingering socio-cultural norms, attitudes, and practices of the Age of Enlightenment. It also forecasts the major cultural shifts toward Postmodernity and (Bell being an Anglican Episcopal priest) suggests orthodox religion as a solution. However, the term postmodernity was first used as a general theory for a historical movement in 1939 by Arnold J. Toynbee: "Our own Post-Modern Age has been inaugurated by the general war of 1914–1918". In 1949 the term was used to describe a dissatisfaction with modern architecture and led to the postmodern architecture movement in response to the modernist architectural movement known as the International Style. Postmodernism in architecture was initially marked by a re-emergence of surface ornament, reference to surrounding buildings in urban settings, historical reference in decorative forms (eclecticism), and non-orthogonal angles. Author Peter Drucker suggested the transformation into a post-modern world that happened between 1937 and 1957 and described it as a "nameless era" characterized as a shift to a conceptual world based on pattern, purpose, and process rather than a mechanical cause. This shift was outlined by four new realities: the emergence of an Educated Society, the importance of international development, the decline of the nation-state, and the collapse of the viability of non-Western cultures. In 1971, in a lecture delivered at the Institute of Contemporary Art, London, Mel Bochner described "post-modernism" in art as having started with Jasper Johns, "who first rejected sense-data and the singular point-of-view as the basis for his art, and treated art as a critical investigation". In 1996, Walter Truett Anderson described postmodernism as belonging to one of four typological world views which he identified as: Neo-romantic, in which truth is found through attaining harmony with nature or spiritual exploration of the inner self. Postmodern-ironist, which sees truth as socially constructed. Scientific-rational, in which truth is defined through methodical, disciplined inquiry. Social-traditional, in which truth is found in the heritage of American and Western civilization. History The basic features of what is now called postmodernism can be found as early as the 1940s, most notably in the work of artists such as Jorge Luis Borges. However, most scholars today agree postmodernism began to compete with modernism in the late 1950s and gained ascendancy over it in the 1960s. The primary features of postmodernism typically include the ironic play with styles, citations, and narrative levels, a metaphysical skepticism or nihilism towards a "grand narrative" of Western culture, and a preference for the virtual at the expense of the Real (or more accurately, a fundamental questioning of what 'the real' constitutes). Since the late 1990s, there has been a growing sentiment in popular culture and in academia that postmodernism "has gone out of fashion". Others argue that postmodernism is dead in the context of current cultural production. 
Theories and derivatives Structuralism and post-structuralism Structuralism was a philosophical movement developed by French academics in the 1950s, partly in response to French existentialism, and often interpreted in relation to modernism and high modernism. Thinkers who have been called "structuralists" include the anthropologist Claude Lévi-Strauss, the linguist Ferdinand de Saussure, the Marxist philosopher Louis Althusser, and the semiotician Algirdas Greimas. The early writings of the psychoanalyst Jacques Lacan and the literary theorist Roland Barthes have also been called "structuralist". Those who began as structuralists but became post-structuralists include Michel Foucault, Roland Barthes, Jean Baudrillard, and Gilles Deleuze. Other post-structuralists include Jacques Derrida, Pierre Bourdieu, Jean-François Lyotard, Julia Kristeva, Hélène Cixous, and Luce Irigaray. The American cultural theorists, critics, and intellectuals whom they influenced include Judith Butler, John Fiske, Rosalind Krauss, Avital Ronell, and Hayden White. Like structuralists, post-structuralists start from the assumption that people's identities, values, and economic conditions determine each other rather than having intrinsic properties that can be understood in isolation. Thus the French structuralists considered themselves to be espousing relativism and constructionism. But they nevertheless tended to explore how the subjects of their study might be described, reductively, as a set of essential relationships, schematics, or mathematical symbols. (An example is Claude Lévi-Strauss's algebraic formulation of mythological transformation in "The Structural Study of Myth"). Postmodernism entails reconsideration of the entire Western value system (love, marriage, popular culture, shift from an industrial to a service economy) that took place since the 1950s and 1960s, with a peak in the Social Revolution of 1968—are described with the term postmodernity, as opposed to postmodernism, a term referring to an opinion or movement. Post-structuralism is characterized by new ways of thinking through structuralism, contrary to the original form. Deconstruction One of the most well-known postmodernist concerns is deconstruction, a theory for philosophy, literary criticism, and textual analysis developed by Jacques Derrida. Critics have insisted that Derrida's work is rooted in a statement found in Of Grammatology: "" ('there is no outside text). Such critics misinterpret the statement as denying any reality outside of books. The statement is actually part of a critique of "inside" and "outside" metaphors when referring to the text, and is a corollary to the observation that there is no "inside" of a text as well. This attention to a text's unacknowledged reliance on metaphors and figures embedded within its discourse is characteristic of Derrida's approach. Derrida's method sometimes involves demonstrating that a given philosophical discourse depends on binary oppositions or excluding terms that the discourse itself has declared to be irrelevant or inapplicable. Derrida's philosophy inspired a postmodern movement called deconstructivism among architects, characterized by a design that rejects structural "centers" and encourages decentralized play among its elements. Derrida discontinued his involvement with the movement after the publication of his collaborative project with architect Peter Eisenman in Chora L Works: Jacques Derrida and Peter Eisenman. 
Post-postmodernism The connection between postmodernism, posthumanism, and cyborgism has led to a challenge to postmodernism, for which the terms Post-postmodernism and postpoststructuralism were first coined in 2003: More recently metamodernism, post-postmodernism and the "death of postmodernism" have been widely debated: in 2007 Andrew Hoberek noted in his introduction to a special issue of the journal Twentieth-Century Literature titled "After Postmodernism" that "declarations of postmodernism's demise have become a critical commonplace". A small group of critics has put forth a range of theories that aim to describe culture or society in the alleged aftermath of postmodernism, most notably Raoul Eshelman (performatism), Gilles Lipovetsky (hypermodernity), Nicolas Bourriaud (altermodern), and Alan Kirby (digimodernism, formerly called pseudo-modernism). None of these new theories or labels have so far gained very widespread acceptance. Sociocultural anthropologist Nina Müller-Schwarze offers neostructuralism as a possible direction. The exhibition Postmodernism – Style and Subversion 1970–1990 at the Victoria and Albert Museum (London, 24 September 2011 – 15 January 2012) was billed as the first show to document postmodernism as a historical movement. Philosophy In the 1970s a group of poststructuralists in France developed a radical critique of modern philosophy with roots discernible in Nietzsche, Kierkegaard, and Heidegger, and became known as postmodern theorists, notably including Jacques Derrida, Michel Foucault, Jean-François Lyotard, Jean Baudrillard, and others. New and challenging modes of thought and writing pushed the development of new areas and topics in philosophy. By the 1980s, this spread to America (Richard Rorty) and the world. Jacques Derrida Jacques Derrida was a French-Algerian philosopher best known for developing a form of semiotic analysis known as deconstruction, which he discussed in numerous texts, and developed in the context of phenomenology. He is one of the major figures associated with post-structuralism and postmodern philosophy. Derrida re-examined the fundamentals of writing and its consequences on philosophy in general; sought to undermine the language of "presence" or metaphysics in an analytical technique which, beginning as a point of departure from Heidegger's notion of Destruktion, came to be known as deconstruction. Michel Foucault Michel Foucault was a French philosopher, historian of ideas, social theorist, and literary critic. First associated with structuralism, Foucault created an oeuvre that today is seen as belonging to post-structuralism and to postmodern philosophy. Considered a leading figure of , his work remains fruitful in the English-speaking academic world in a large number of sub-disciplines. The Times Higher Education Guide described him in 2009 as the most cited author in the humanities. Michel Foucault introduced concepts such as discursive regime, or re-invoked those of older philosophers like episteme and genealogy in order to explain the relationship between meaning, power, and social behavior within social orders (see The Order of Things, The Archaeology of Knowledge, Discipline and Punish, and The History of Sexuality). Jean-François Lyotard Jean-François Lyotard is credited with being the first to use the term in a philosophical context, in his 1979 work . 
In it, he follows Wittgenstein's language games model and speech act theory, contrasting two different language games, that of the expert, and that of the philosopher. He talks about the transformation of knowledge into information in the computer age and likens the transmission or reception of coded messages (information) to a
glass negative in late 1839. In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets to the collodion process; the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper. Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize in Physics in 1908. Glass plates were the medium for most original camera photography from the late 1850s until the general introduction of flexible plastic films during the 1890s. Although the convenience of the film greatly popularized amateur photography, early films were somewhat more expensive and of markedly lower optical quality than their glass plate equivalents, and until the late 1910s they were not available in the large formats preferred by most professional photographers, so the new medium did not immediately or completely replace the old. Because of the superior dimensional stability of glass, the use of plates for some scientific applications, such as astrophotography, continued into the 1990s, and in the niche field of laser holography, it has persisted into the 21st century. Film Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised. The first flexible photographic roll film was marketed by George Eastman, founder of Kodak in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and transferred to a hardened gelatin support. The first transparent plastic roll film followed in 1889. It was made from highly flammable nitrocellulose known as nitrate film. Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was not completed for X-ray films until 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm motion pictures until it was finally discontinued in 1951. Films remained the dominant form of photography until the early 21st century when advances in digital photography drew consumers to digital formats. Although modern photography is dominated by digital users, film continues to be used by enthusiasts and professional photographers. The distinctive "look" of film based photographs compared to digital images is likely due to a combination of factors, including: (1) differences in spectral and tonal sensitivity (S-shaped density-to-exposure (H&D curve) with film vs. linear response curve for digital CCD sensors) (2) resolution and (3) continuity of tone. Black-and-white Originally, all photography was monochrome, or black-and-white. 
Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost, chemical stability, and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography. Monochromatic pictures are not necessarily composed of pure blacks, whites, and intermediate shades of gray but can involve shades of one particular hue depending on the process. The cyanotype process, for example, produces an image composed of blue tones. The albumen print process, publicly revealed in 1847, produces brownish tones. Many photographers continue to produce some monochrome images, sometimes because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome. Monochrome printing or electronic display can be used to salvage certain photographs taken in color which are unsatisfactory in their original form; sometimes when presented as black-and-white or single-color-toned images they are found to be more effective. Although color photography has long predominated, monochrome images are still produced, mostly for artistic reasons. Almost all digital cameras have an option to shoot in monochrome, and almost all image editing software can combine or selectively discard RGB color channels to produce a monochrome image from one shot in color. Color Color photography was explored beginning in the 1840s. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light. The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by Scottish physicist James Clerk Maxwell in 1855. The foundation of virtually all practical color processes, Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters. This provides the photographer with the three basic channels required to recreate a color image. Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s. Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images. Implementation of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. 
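Maxwell's three-color-separation principle described above can be illustrated with a short sketch: three black-and-white exposures, made through red, green, and blue filters, are recombined additively as the channels of one color image. This is only an illustrative sketch using NumPy with synthetic stand-in data, not a reconstruction of any historical workflow.

```python
# Illustrative sketch (not a historical workflow): Maxwell's additive
# three-color principle recombines three black-and-white exposures, made
# through red, green, and blue filters, as the channels of one color image.
import numpy as np

def combine_separations(red_exposure, green_exposure, blue_exposure):
    """Stack three grayscale separations (2-D arrays scaled to [0, 1]) into an RGB image."""
    return np.stack([red_exposure, green_exposure, blue_exposure], axis=-1)

# Synthetic stand-ins for the three filtered exposures (hypothetical data).
h, w = 4, 4
r = np.full((h, w), 0.8)  # the scene records strongly through the red filter
g = np.full((h, w), 0.5)
b = np.full((h, w), 0.2)

rgb = combine_separations(r, g, b)
print(rgb.shape)  # (4, 4, 3): one additively synthesized color image
```

Projecting the three separations through matching filters, as described above, performs the same additive combination optically rather than numerically.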
Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability. Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s. Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multi-layer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure. Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multi-layer emulsion and the same principles, most closely resembling Agfa's product. Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963. Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment. After a transition period centered around 1995–2005, color film was relegated to a niche market by inexpensive multi-megapixel digital cameras. Film continues to be the preference of some photographers because of its distinctive "look". Digital In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. While the Mavica saved images to disk, the images were displayed on television, and the camera was not fully digital. The first digital camera to both record and save images in a digital format was the Fujix DS-1P, created by Fujifilm in 1988. In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born. Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.
An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is a highly manipulable medium. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications. Digital photography dominates the 21st century. More than 99% of photographs taken around the world are taken with digital cameras, increasingly with smartphones. Techniques A large variety of photographic techniques and media are used in the process of capturing images for photography. These include the camera; dualphotography; full-spectrum, ultraviolet and infrared media; light field photography; and other imaging techniques. Cameras The camera is the image-forming device, and a photographic plate, photographic film or a silicon electronic image sensor is the capture medium. The respective recording medium can be the plate or film itself, or a digital magnetic or electronic memory. Photographers control the camera and lens to "expose" the light recording material to the required amount of light to form a "latent image" (on plate or film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper. The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. It was discovered and used in the 16th century by painters. The subject being photographed, however, must be illuminated. Cameras can range from small to very large: a whole room may be kept dark while the object to be photographed is in another room where it is properly illuminated. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera). As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens. The movie camera is a type of photographic camera which takes a rapid sequence of photographs on a recording medium. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per
second). While viewing, a person's eyes and brain merge the separate pictures to create the illusion of motion. Stereoscopic Photographs, both monochrome and color, can be captured and displayed through two side-by-side images that emulate human stereoscopic vision. Stereoscopic photography was the first that captured figures in motion.
While known colloquially as "3-D" photography, the more accurate term is stereoscopy. Such cameras have long been realized by using film and more recently in digital electronic methods (including cell phone cameras). Dualphotography Dualphotography consists of photographing a scene from both sides of a photographic device at once (e.g. camera for back-to-back dualphotography, or two networked cameras for portal-plane dualphotography). The dualphoto apparatus can be used to simultaneously capture both the subject and the photographer, or both sides of a geographical place at once, thus adding a supplementary narrative layer to that of a single image. Full-spectrum, ultraviolet and infrared Ultraviolet and infrared films have been available for many decades and employed in a variety of photographic avenues since the 1960s. New technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions. Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range from about 400 nm to 700 nm. Replacing a hot mirror or infrared blocking filter with an infrared pass or a wide spectrally transmitting filter allows the camera to detect the wider spectrum light at greater sensitivity. Without the hot-mirror, the red, green and blue (or cyan, yellow and magenta) colored micro-filters placed over the sensor elements pass varying amounts of ultraviolet (blue window) and infrared (primarily red and somewhat lesser the green and blue micro-filters). Uses of full spectrum photography are for fine art photography, geology, forensics and law enforcement. Layering Layering is a photographic composition technique that manipulates the foreground, subject or middle-ground, and background layers in a way that they all work together to tell a story through the image. Layers may be incorporated by altering the focal length, distorting the perspective by positioning the camera in a certain spot. People, movement, light and a variety of objects can be used in layering. Light field Digital methods of image capture and display processing have enabled the new technology of "light field photography" (also known as synthetic aperture photography). This process allows focusing at various depths of field to be selected after the photograph has been captured. As explained by Michael Faraday in 1846, the "light field" is understood as 5-dimensional, with each point in 3-D space having attributes of two more angles that define the direction of each ray passing through that point. These additional vector attributes can be captured optically through the use of microlenses at each pixel point within the 2-dimensional image sensor. Every pixel of the final image is actually a selection from each sub-array located under each microlens, as identified by a post-image capture focus algorithm. Other Besides the camera, other methods of forming images with light are available. For instance, a photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than photographic medium, hence the term electrophotography. 
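The light field selection described above, in which each output pixel is drawn from the sub-array of angular samples recorded under a microlens, can be sketched roughly as follows. The 4-D array layout, the integer shift-and-average rule, and the function names are illustrative assumptions, not any particular camera's post-capture focus algorithm.

```python
# Rough sketch of synthetic refocusing from a light field: each output pixel
# is assembled from the sub-array of angular samples recorded under each
# microlens. The 4-D layout and integer shift rule are simplifying assumptions.
import numpy as np

def refocus(lightfield, shift):
    """lightfield: array of shape (U, V, X, Y); shift: integer pixels per angular step."""
    U, V, X, Y = lightfield.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = shift * (u - U // 2)
            dv = shift * (v - V // 2)
            # shift each angular view before averaging; np.roll keeps the sketch simple
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

rng = np.random.default_rng(0)
lf = rng.random((5, 5, 32, 32))   # synthetic 5x5 angular by 32x32 spatial samples
near = refocus(lf, shift=1)       # one synthetic focal plane, chosen after capture
far = refocus(lf, shift=-1)       # another focal plane from the same exposure
print(near.shape, far.shape)
```

Varying the shift after the exposure is what allows different depths of field to be selected once the photograph has already been captured.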
Photograms are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of an image scanner to produce digital pictures. Types Amateur Amateur photographers take photos for personal use, as a hobby or out of casual interest, rather than as a business or job. The quality of amateur work can be comparable to that of many professionals. Amateurs can fill a gap in subjects or topics that might not otherwise be photographed if they are not commercially useful or salable. Amateur photography grew during the late 19th century due to the popularization of the hand-held camera. Twenty-first century social media and near-ubiquitous camera phones have made photographic and video recording pervasive in everyday life. In the mid-2010s smartphone cameras added numerous automatic assistance features like color management, autofocus face detection and image stabilization that significantly decreased the skill and effort needed to take high-quality images. Commercial Commercial photography is probably best defined as any photography for which the photographer is paid for images rather than works of art. In this light, money could be paid for the subject of the photograph or the photograph itself. Wholesale, retail, and professional uses of photography would fall under this definition. The commercial photographic world could include: Advertising photography: photographs made to illustrate and usually sell a service or product. These images, such as packshots, are generally done with an advertising agency, design firm or with an in-house corporate design team. Architectural photography focuses on capturing photographs of buildings and architectural structures that are aesthetically pleasing and accurate in terms of representations of their subjects. Event photography focuses on photographing guests and occurrences at mostly social events. Fashion and glamour photography usually incorporates models and is a form of advertising photography. Fashion photography, like the work featured in Harper's Bazaar, emphasizes clothes and other products; glamour emphasizes the model and body form. Glamour photography is popular in advertising and men's magazines. Models in glamour photography sometimes work nude. 360 product photography displays a series of photos to give the impression of a rotating object. This technique is commonly used by ecommerce websites to help shoppers visualise products. Concert photography focuses on capturing candid images of both the artist or band as well as the atmosphere (including the crowd). Many of these photographers work freelance and are contracted through an artist or their management to cover a specific show. Concert photographs are often used to promote the artist or band in addition to the venue. Crime scene photography consists of photographing scenes of crime such as robberies and murders. A black-and-white camera or an infrared camera may be used to capture specific details. Still life photography usually depicts inanimate subject matter, typically commonplace objects which may be either natural or man-made. Still life is a broader category for food and some natural photography and can be used for advertising purposes. Real estate photography focuses on the production of photographs showcasing a property that is for sale; such photographs require the use of wide-angle lenses and extensive knowledge of high-dynamic-range imaging.
Food photography can be used for editorial, packaging or advertising purposes. Food photography is similar to still life photography but requires some special skills. Photojournalism can be considered a subset of editorial photography. Photographs made in this context are accepted as a documentation of a news story. Paparazzi is a form of photojournalism in which the photographer captures candid images of athletes, celebrities, politicians, and other prominent people. Portrait and wedding photography: photographs made and sold directly to the end user of the images. Landscape photography depicts locations. Wildlife photography documents the life of wild animals. Art During the 20th century, both fine art photography and documentary photography became accepted by the English-speaking art world and the gallery system. In the United States, a handful of photographers, including Alfred Stieglitz, Edward Steichen, John Szarkowski, F. Holland Day, and Edward Weston, spent their lives advocating for photography as a fine art. At first, fine art photographers tried to imitate painting styles. This movement is called Pictorialism, often using soft focus for a dreamy, 'romantic' look. In reaction to that, Weston, Ansel Adams, and others formed the Group f/64 to advocate 'straight photography', the photograph as a (sharply focused) thing in itself and not an imitation of something else. The aesthetics of photography is a matter that continues to be discussed regularly, especially in artistic circles. Many artists argued that photography was the mechanical reproduction of an image. If photography is authentically art, then photography in the context of art would need redefinition, such as determining what component of a photograph makes it beautiful to the viewer. The controversy began with the earliest images "written with light"; Nicéphore Niépce, Louis Daguerre, and others among the very earliest photographers were met with acclaim, but some questioned if their work met the definitions and purposes of art. Clive Bell in his classic essay Art states that only "significant form" can distinguish art from what is not art. On 7 February 2007, Sotheby's London sold the 2001 photograph 99 Cent II Diptychon for an unprecedented $3,346,456 to an anonymous bidder, making it the most expensive photograph sold up to that time. Conceptual photography turns a concept or idea into a photograph. Even though what is depicted in the photographs are real objects, the subject is strictly abstract. In parallel to this development, the then largely separate interface between painting and photography was closed in the early 1970s with the work of the photo artists Pierre Cordier (chemigram) and Josef H. Neumann (chemogram). In 1974, Neumann's chemograms ended the separation of the painterly background and the photographic layer by showing the picture elements, as unmistakable unique specimens, in a simultaneously painterly and genuinely photographic perspective, using lenses, within a single photographic layer united in colors and shapes. This Neumann chemogram from the seventies of the 20th century thus differs from the beginning of the previously created cameraless chemigrams of a Pierre Cordier
to: Ashburnham Pentateuch, late 6th- or early 7th-century Latin illuminated manuscript of the Pentateuch Chumash, printed Torah, as opposed to a Torah scroll Samaritan Pentateuch, a version of the Hebrew Pentateuch, written in the Samaritan alphabet and used by the Samaritans, for whom it is the entire biblical canon Targum Yerushalmi, a western targum (translation) of the
postmodern era is positioned to synthesize at a higher level—the level of experience, where the being of things and the activity of the finite knower compenetrate one another and provide the materials whence can be derived knowledge of nature and knowledge of culture in their full symbiosis—the achievements of the ancients and the moderns in a way that gives full credit to the preoccupations of the two. The postmodern era has for its distinctive task in philosophy the exploration of a new path, no longer the ancient way of things nor the modern way of ideas, but the way of signs, whereby the peaks and valleys of ancient and modern thought alike can be surveyed and cultivated by a generation which has yet further peaks to climb and valleys to find. History Precursors Postmodern philosophy originated primarily in France during the mid-20th century. However, several philosophical antecedents inform many of postmodern philosophy's concerns. It was greatly influenced by the writings of Søren Kierkegaard and Friedrich Nietzsche in the 19th century and other early-to-mid 20th-century philosophers, including phenomenologists Edmund Husserl and Martin Heidegger, psychoanalyst Jacques Lacan, structuralist Roland Barthes, Georges Bataille, and the later work of Ludwig Wittgenstein. Postmodern philosophy also drew from the world of the arts and architecture, particularly Marcel Duchamp, John Cage and artists who practiced collage, and the architecture of Las Vegas and the Pompidou Centre. Early postmodern philosophers The most influential early postmodern philosophers were Jean Baudrillard, Jean-François Lyotard, and Jacques Derrida. Michel Foucault is also often cited as an early postmodernist although he personally rejected that label. Following Nietzsche, Foucault argued that knowledge is produced through the operations of power, and changes fundamentally in different historical periods. The writings of Lyotard were largely concerned with the role of narrative in human culture, and particularly how that role has changed as we have left modernity and entered a "postindustrial" or postmodern condition. He argued that modern philosophies legitimized their truth-claims not (as they themselves claimed) on logical or empirical grounds, but rather on the grounds of accepted stories (or "metanarratives") about knowledge and the world—comparing these with Wittgenstein's concept of language-games. He further argued that in our postmodern condition, these metanarratives no longer work to legitimize truth-claims. He suggested that in the wake of the collapse of modern metanarratives, people are developing a new "language-game"—one that does not make claims to absolute truth but rather celebrates a world of ever-changing relationships (among people and between people and the world). Derrida, the father of deconstruction, practiced philosophy as a form of textual criticism. He criticized Western philosophy as privileging the concept
of presence and logos, as opposed to absence and markings or writings. In the United States, the most famous pragmatist and self-proclaimed postmodernist was Richard Rorty. An analytic philosopher, Rorty believed that combining Willard Van Orman Quine's criticism of the analytic-synthetic distinction with Wilfrid Sellars's critique of the "Myth of the Given" allowed for an abandonment of the view of the thought or language as a mirror of a reality or external world. Further, drawing upon Donald Davidson's criticism of the dualism between conceptual scheme and empirical content, he challenges the sense of questioning whether our particular concepts are related to the world in an appropriate way, whether we can justify our ways of describing the world as compared with other ways.
He argued that truth was not about getting it right or representing reality, but was part of a social practice, and that language was what served our purposes in a particular time; ancient languages are sometimes untranslatable into modern ones because they possess a different vocabulary that is no longer useful today. Donald Davidson is not usually considered a postmodernist, although he and Rorty have both acknowledged that there are few differences between their philosophies. Criticism Criticisms of postmodernism,
away from the perceptibly damaging hegemony of binaries such as aestheticism/formalism, subject/object, unity/disunity, part/whole, that were seen to dominate former aesthetic discourse, and that when left unchallenged (as postmodernists claim of modernist discourse) are thought to de-humanise music analysis". Fredric Jameson, a major figure in the thinking on postmodernism and culture, calls postmodernism "the cultural dominant of the logic of late capitalism", meaning that, through globalization, postmodern culture is tied inextricably with capitalism (Mark Fisher, writing 20 years later, goes further, essentially calling it the sole cultural possibility). Drawing from Jameson and other theorists, David Beard and Kenneth Gloag argue that, in music, postmodernism is not just an attitude but also an inevitability in the current cultural climate of fragmentation. As early as 1938, Theodor Adorno had already identified a trend toward the dissolution of "a culturally dominant set of values", citing the commodification of all genres as beginning of the end of genre or value distinctions in music. In some respects, Postmodern music could be categorized as simply the music of the postmodern era, or music that follows aesthetic and philosophical trends of postmodernism, but with Jameson in mind, it is clear these definitions are inadequate. As the name suggests, the postmodernist movement formed partly in reaction to the ideals of modernism, but in fact postmodern music is more to do with functionality and the effect of globalization than it is with a specific reaction, movement, or attitude. In the face of capitalism, Jameson says, "It is safest to grasp the concept of the postmodern as an attempt to think the present historically in an age that has forgotten how to think historically in the first place". Characteristics Jonathan Kramer posits the idea (following Umberto Eco and Jean-François Lyotard) that postmodernism (including musical postmodernism) is less a surface style or historical period (i.e., condition) than an attitude. Kramer enumerates 16 (arguably subjective) "characteristics of postmodern music, by which I mean music that is understood in a postmodern manner, or that calls forth postmodern listening strategies, or that provides postmodern listening experiences, or that exhibits postmodern compositional practices." According to Kramer, postmodern music: is not simply a repudiation of modernism or its continuation, but has aspects of both a break and an extension is, on some level and in some way, ironic does not respect boundaries between sonorities and procedures of the past and of the present challenges barriers between 'high' and 'low' styles shows disdain for the often unquestioned value of structural unity questions the mutual exclusivity of elitist and populist values avoids totalizing forms (e.g., does not want entire pieces to be tonal or serial or cast in a prescribed formal mold) considers music not as autonomous but as relevant to cultural, social, and political contexts includes quotations of or references to music of many traditions and cultures
considers technology not only as a way to preserve and transmit music but also as deeply implicated in the production and essence of music embraces contradictions distrusts binary oppositions includes fragmentations and discontinuities encompasses pluralism and eclecticism presents multiple meanings and multiple temporalities locates meaning and even structure in listeners, more than in scores, performances, or composers Daniel Albright summarizes the main tendencies of musical postmodernism as: Bricolage Polystylism Randomness Timescale One author has suggested that the emergence of postmodern music in popular music occurred in the late 1960s, influenced in part by psychedelic rock and one or more of the later Beatles albums. Beard and Gloag support this position, citing Jameson's theory that "the radical changes of musical styles and languages throughout the 1960s [are] now seen as a reflection of postmodernism". Others have placed the beginnings of postmodernism in the arts, with particular reference to music, at around 1930. See also List of postmodernist composers 20th-century classical music 21st-century classical music References Bibliography Further reading Bertens, Hans. 1995. The Idea of the Postmodern: A History. London and New York: Routledge. Beverley, John. 1989. "The Ideology of Postmodern Music and Left Politics". Critical Quarterly 31, no. 1 (Spring): 40–56. Burkholder, J. Peter. 1995. All Made of Tunes: Charles Ives and the Uses of Musical Borrowings. New Haven: Yale University Press. Danuser, Hermann. 1991. "Postmodernes Musikdenken—Lösung oder Flucht?". In Neue Musik im politischen Wandel: fünf Kongressbeiträge und drei Seminarberichte, edited by Hermann Danuser, 56–66. Mainz & New York: Schott. Edwards, George. 1991. "Music and Postmodernism". Partisan Review 58, no. 4 (Fall): 693–705. Reprinted in his Collected Essays on Modern and Classical Music, with a foreword by Fred Lerdahl and an afterword by Joseph Dubiel, 49–60. Lanham, Maryland: Scarecrow Press, 2008. Gloag, Kenneth. 2012. Postmodernism in Music. Cambridge Introductions to Music, Cambridge and New York: Cambridge University Press. Harrison, Max, Charles Fox, Eric Thacker, and Stuart Nicholson. 1999. The Essential Jazz Records: Vol. 2: Modernism to Postmodernism. London: Mansell Publishing. (cloth); (pbk). Heilbroner, Robert L.
conducting experiments Medical protocol (disambiguation) Computing Protocol (object-oriented programming), a common means for unrelated objects to communicate with each other (sometimes also called interfaces) Communication protocol, a defined set of rules and regulations that determine how data is transmitted in telecommunications and computer networking Cryptographic protocol, a protocol for encrypting messages Decentralized network protocol, a protocol for operation of an open source peer-to-peer network where no single entity nor colluding group controls a majority of the network nodes Music Protocol (album), by Simon Phillips Protocol (band), a British band "Protocol", a song by Gordon Lightfoot from the album Summertime
low volumes. Roots-type pumps Named after the Roots brothers who invented it, this lobe pump displaces the liquid trapped between two long helical rotors, each fitted into the other when perpendicular at 90°, rotating inside a triangular shaped sealing line configuration, both at the point of suction and at the point of discharge. This design produces a continuous flow with equal volume and no vortex. It can work at low pulsation rates, and offers gentle performance that some applications require. Applications include: High capacity industrial air compressors. Roots superchargers on internal combustion engines. A brand of civil defense siren, the Federal Signal Corporation's Thunderbolt. Peristaltic pump A peristaltic pump is a type of positive-displacement pump. It contains fluid within a flexible tube fitted inside a circular pump casing (though linear peristaltic pumps have been made). A number of rollers, shoes, or wipers attached to a rotor compresses the flexible tube. As the rotor turns, the part of the tube under compression closes (or occludes), forcing the fluid through the tube. Additionally, when the tube opens to its natural state after the passing of the cam it draws (restitution) fluid into the pump. This process is called peristalsis and is used in many biological systems such as the gastrointestinal tract. Plunger pumps Plunger pumps are reciprocating positive-displacement pumps. These consist of a cylinder with a reciprocating plunger. The suction and discharge valves are mounted in the head of the cylinder. In the suction stroke, the plunger retracts and the suction valves open causing suction of fluid into the cylinder. In the forward stroke, the plunger pushes the liquid out of the discharge valve. Efficiency and common problems: With only one cylinder in plunger pumps, the fluid flow varies between maximum flow when the plunger moves through the middle positions, and zero flow when the plunger is at the end positions. A lot of energy is wasted when the fluid is accelerated in the piping system. Vibration and water hammer may be a serious problem. In general, the problems are compensated for by using two or more cylinders not working in phase with each other. Triplex-style plunger pumps Triplex plunger pumps use three plungers, which reduces the pulsation of single reciprocating plunger pumps. Adding a pulsation dampener on the pump outlet can further smooth the pump ripple, or ripple graph of a pump transducer. The dynamic relationship of the high-pressure fluid and plunger generally requires high-quality plunger seals. Plunger pumps with a larger number of plungers have the benefit of increased flow, or smoother flow without a pulsation damper. The increase in moving parts and crankshaft load is one drawback. Car washes often use these triplex-style plunger pumps (perhaps without pulsation dampers). In 1968, William Bruggeman reduced the size of the triplex pump and increased the lifespan so that car washes could use equipment with smaller footprints. Durable high-pressure seals, low-pressure seals and oil seals, hardened crankshafts, hardened connecting rods, thick ceramic plungers and heavier duty ball and roller bearings improve reliability in triplex pumps. Triplex pumps now are in a myriad of markets across the world. Triplex pumps with shorter lifetimes are commonplace to the home user. A person who uses a home pressure washer for 10 hours a year may be satisfied with a pump that lasts 100 hours between rebuilds.
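The pulsation behaviour described above can be illustrated with a small numerical sketch: assuming idealized sinusoidal plunger motion and perfect valves (both simplifying assumptions), a single-acting plunger delivers roughly a half-sinusoid of flow per revolution, while three plungers phased 120° apart overlap so the combined delivery never falls to zero.

```python
# Idealized sketch of why a triplex pump smooths flow: each single-acting
# plunger delivers a half-sinusoid of flow per revolution, and three plungers
# phased 120 degrees apart overlap so the combined flow never drops to zero.
# Sinusoidal motion and perfect valves are simplifying assumptions.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)

def plunger_flow(phase):
    """Delivery flow of one plunger (arbitrary units); zero on the suction stroke."""
    return np.maximum(np.sin(theta + phase), 0.0)

single = plunger_flow(0.0)
triplex = sum(plunger_flow(k * 2.0 * np.pi / 3.0) for k in range(3))

def ripple(flow):
    return (flow.max() - flow.min()) / flow.mean()

print(f"single-plunger ripple: {ripple(single):.2f}")   # large: flow falls to zero each cycle
print(f"triplex ripple:        {ripple(triplex):.2f}")  # much smaller residual pulsation
```

The residual ripple of the triplex arrangement is what a pulsation dampener on the outlet further smooths in practice.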
Industrial-grade or continuous duty triplex pumps on the other end of the quality spectrum may run for as much as 2,080 hours a year. The oil and gas drilling industry uses massive semi trailer-transported triplex pumps called mud pumps to pump drilling mud, which cools the drill bit and carries the cuttings back to the surface. Drillers use triplex or even quintuplex pumps to inject water and solvents deep into shale in the extraction process called fracking. Compressed-air-powered double-diaphragm pumps One modern application of positive-displacement pumps is compressed-air-powered double-diaphragm pumps. Run on compressed air, these pumps are intrinsically safe by design, although all manufacturers offer ATEX certified models to comply with industry regulation. These pumps are relatively inexpensive and can perform a wide variety of duties, from pumping water out of bunds to pumping hydrochloric acid from secure storage (dependent on how the pump is manufactured – elastomers / body construction). These double-diaphragm pumps can handle viscous fluids and abrasive materials with a gentle pumping process ideal for transporting shear-sensitive media. Rope pumps Devised in China as chain pumps over 1000 years ago, these pumps can be made from very simple materials: A rope, a wheel and a PVC pipe are sufficient to make a simple rope pump. Rope pump efficiency has been studied by grassroots organizations and the techniques for making and running them have been continuously improved. Impulse pumps Impulse pumps use pressure created by gas (usually air). In some impulse pumps the gas trapped in the liquid (usually water), is released and accumulated somewhere in the pump, creating a pressure that can push part of the liquid upwards. Conventional impulse pumps include: Hydraulic ram pumps – kinetic energy of a low-head water supply is stored temporarily in an air-bubble hydraulic accumulator, then used to drive water to a higher head. Pulser pumps – run with natural resources, by kinetic energy only. Airlift pumps – run on air inserted into pipe, which pushes the water up when bubbles move upward Instead of a gas accumulation and releasing cycle, the pressure can be created by burning of hydrocarbons. Such combustion driven pumps directly transmit the impulse from a combustion event through the actuation membrane to the pump fluid. In order to allow this direct transmission, the pump needs to be almost entirely made of an elastomer (e.g. silicone rubber). Hence, the combustion causes the membrane to expand and thereby pumps the fluid out of the adjacent pumping chamber. The first combustion-driven soft pump was developed by ETH Zurich. Hydraulic ram pumps A hydraulic ram is a water pump powered by hydropower. It takes in water at relatively low pressure and high flow-rate and outputs water at a higher hydraulic-head and lower flow-rate. The device uses the water hammer effect to develop pressure that lifts a portion of the input water that powers the pump to a point higher than where the water started. The hydraulic ram is sometimes used in remote areas, where there is both a source of low-head hydropower, and a need for pumping water to a destination higher in elevation than the source. In this situation, the ram is often useful, since it requires no outside source of power other than the kinetic energy of flowing water. Velocity pumps Rotodynamic pumps (or dynamic pumps) are a type of velocity pump in which kinetic energy is added to the fluid by increasing the flow velocity. 
This increase in energy is converted to a gain in potential energy (pressure) when the velocity is reduced prior to or as the flow exits the pump into the discharge pipe. This conversion of kinetic energy to pressure is explained by the First law of thermodynamics, or more specifically by Bernoulli's principle. Dynamic pumps can be further subdivided according to the means in which the velocity gain is achieved. These types of pumps have a number of characteristics: Continuous energy Conversion of added energy to increase in kinetic energy (increase in velocity) Conversion of increased velocity (kinetic energy) to an increase in pressure head A practical difference between dynamic and positive-displacement pumps is how they operate under closed valve conditions. Positive-displacement pumps physically displace fluid, so closing a valve downstream of a positive-displacement pump produces a continual pressure build up that can cause mechanical failure of pipeline or pump. Dynamic pumps differ in that they can be safely operated under closed valve conditions (for short periods of time). Radial-flow pumps Such a pump is also referred to as a centrifugal pump. The fluid enters along the axis or center, is accelerated by the impeller and exits at right angles to the shaft (radially); an example is the centrifugal fan, which is commonly used to implement a vacuum cleaner. Another type of radial-flow pump is a vortex pump. The liquid in them moves in tangential direction around the working wheel. The conversion from the mechanical energy of motor into the potential energy of flow comes by means of multiple whirls, which are excited by the impeller in the working channel of the pump. Generally, a radial-flow pump operates at higher pressures and lower flow rates than an axial- or a mixed-flow pump. Axial-flow pumps These are also referred to as All fluid pumps. The fluid is pushed outward or inward to move fluid axially. They operate at much lower pressures and higher flow rates than radial-flow (centrifugal) pumps. Axial-flow pumps cannot be run up to speed without special precaution. If at a low flow rate, the total head rise and high torque associated with this pipe would mean that the starting torque would have to become a function of acceleration for the whole mass of liquid in the pipe system. If there is a large amount of fluid in the system, accelerate the pump slowly. Mixed-flow pumps function as a compromise between radial and axial-flow pumps. The fluid experiences both radial acceleration and lift and exits the impeller somewhere between 0 and 90 degrees from the axial direction. As a consequence mixed-flow pumps operate at higher pressures than axial-flow pumps while delivering higher discharges than radial-flow pumps. The exit angle of the flow dictates the pressure head-discharge characteristic in relation to radial and mixed-flow. Regenerative turbine pumps Also known as drag, friction, liquid-ring pump, peripheral, side-channel, traction, turbulence, or vortex pumps, regenerative turbine pumps are class of rotodynamic pump that operates at high head pressures, typically . The pump has an impeller with a number of vanes or paddles which spins in a cavity. The suction port and pressure ports are located at the perimeter of the cavity and are isolated by a barrier called a stripper, which allows only the tip channel (fluid between the blades) to recirculate, and forces any fluid in the side channel (fluid in the cavity outside of the blades) through the pressure port. 
In a regenerative turbine pump, as fluid spirals repeatedly from a vane into the side channel and back to the next vane, kinetic energy is imparted to the periphery, thus pressure builds with each spiral, in a manner similar to a regenerative blower. As regenerative turbine pumps cannot become vapor locked, they are commonly applied to volatile, hot, or cryogenic fluid transport. However, as tolerances are typically tight, they are vulnerable to solids or particles causing jamming or rapid wear. Efficiency is typically low, and pressure and power consumption typically decrease with flow. Additionally, pumping direction can be reversed by reversing direction of spin. Eductor-jet pump This uses a jet, often of steam, to create a low pressure. This low pressure sucks in fluid and
pump was used extensively in the 19th century—in the early days of steam propulsion—as boiler feed water pumps. Now reciprocating pumps typically pump highly viscous fluids like concrete and heavy oils, and serve in special applications that demand low flow rates against high resistance. Reciprocating hand pumps were widely used to pump water from wells. Common bicycle pumps and foot pumps for inflation use reciprocating action. These positive-displacement pumps have an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pumps as the cavity on the suction side expands and the liquid flows out of the discharge as the cavity collapses. The volume is constant given each cycle of operation and the pump's volumetric efficiency can be achieved through routine maintenance and inspection of its valves. Typical reciprocating pumps are: Plunger pumps – a reciprocating plunger pushes the fluid through one or two open valves, closed by suction on the way back. Diaphragm pumps – similar to plunger pumps, where the plunger pressurizes hydraulic oil which is used to flex a diaphragm in the pumping cylinder. Diaphragm valves are used to pump hazardous and toxic fluids. Piston pumps displacement pumps – usually simple devices for pumping small amounts of liquid or gel manually. The common hand soap dispenser is such a pump. Radial piston pumps - a form of hydraulic pump where pistons extend in a radial direction. Various positive-displacement pumps The positive-displacement principle applies in these pumps: Rotary lobe pump Progressive cavity pump Rotary gear pump Piston pump Diaphragm pump Screw pump Gear pump Hydraulic pump Rotary vane pump Peristaltic pump Rope pump Flexible impeller pump Gear pump This is the simplest form of rotary positive-displacement pumps. It consists of two meshed gears that rotate in a closely fitted casing. The tooth spaces trap fluid and force it around the outer periphery. The fluid does not travel back on the meshed part, because the teeth mesh closely in the center. Gear pumps see wide use in car engine oil pumps and in various hydraulic power packs. Screw pump A screw pump is a more complicated type of rotary pump that uses two or three screws with opposing thread — e.g., one screw turns clockwise and the other counterclockwise. The screws are mounted on parallel shafts that have gears that mesh so the shafts turn together and everything stays in place. The screws turn on the shafts and drive fluid through the pump. As with other forms of rotary pumps, the clearance between moving parts and the pump's casing is minimal. Progressing cavity pump Widely used for pumping difficult materials, such as sewage sludge contaminated with large particles, this pump consists of a helical rotor, about ten times as long as its width. This can be visualized as a central core of diameter x with, typically, a curved spiral wound around of thickness half x, though in reality it is manufactured in a single casting. This shaft fits inside a heavy-duty rubber sleeve, of wall thickness also typically x. As the shaft rotates, the rotor gradually forces fluid up the rubber sleeve. Such pumps can develop very high pressure at low volumes. 
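Because a positive-displacement pump traps a fixed volume on every cycle, its delivery is essentially displacement per revolution multiplied by speed, reduced by internal leakage past the clearances. The following is a minimal sketch in Python; the displacement, speed, and efficiency figures are illustrative assumptions, not values taken from the text.

```python
def delivered_flow_lpm(displacement_cc_per_rev, speed_rpm, volumetric_efficiency=0.95):
    """Positive-displacement pump: theoretical flow is displacement x speed;
    real flow is reduced by internal leakage (slip), modelled here as an
    assumed volumetric efficiency."""
    theoretical_lpm = displacement_cc_per_rev * speed_rpm / 1000.0  # cc/min -> L/min
    return theoretical_lpm * volumetric_efficiency

# Illustrative example: a small gear pump moving 12 cc per revolution at 1,450 rpm.
print(f"{delivered_flow_lpm(12, 1450):.1f} L/min")
```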
Roots-type pumps Named after the Roots brothers who invented it, this lobe pump displaces the liquid trapped between two long helical rotors, each fitted into the other when perpendicular at 90°, rotating inside a triangular shaped sealing line configuration, both at the point of suction and at the point of discharge. This design produces a continuous flow with equal volume and no vortex. It can work at low pulsation rates, and offers gentle performance that some applications require. Applications include: High capacity industrial air compressors. Roots superchargers on internal combustion engines. A brand of civil defense siren, the Federal Signal Corporation's Thunderbolt. Peristaltic pump A peristaltic pump is a type of positive-displacement pump. It contains fluid within a flexible tube fitted inside a circular pump casing (though linear peristaltic pumps have been made). A number of rollers, shoes, or wipers attached to a rotor compresses the flexible tube. As the rotor turns, the part of the tube under compression closes (or occludes), forcing the fluid through the tube. Additionally, when the tube opens to its natural state after the passing of the cam it draws (restitution) fluid into the pump. This process is called peristalsis and is used in many biological systems such as the gastrointestinal tract. Plunger pumpsPlunger pumps are reciprocating positive-displacement pumps. These consist of a cylinder with a reciprocating plunger. The suction and discharge valves are mounted in the head of the cylinder. In the suction stroke, the plunger retracts and the suction valves open causing suction of fluid into the cylinder. In the forward stroke, the plunger pushes the liquid out of the discharge valve. Efficiency and common problems: With only one cylinder in plunger pumps, the fluid flow varies between maximum flow when the plunger moves through the middle positions, and zero flow when the plunger is at the end positions. A lot of energy is wasted when the fluid is accelerated in the piping system. Vibration and water hammer may be a serious problem. In general, the problems are compensated for by using two or more cylinders not working in phase with each other. Triplex-style plunger pumps Triplex plunger pumps use three plungers, which reduces the pulsation of single reciprocating plunger pumps. Adding a pulsation dampener on the pump outlet can further smooth the pump ripple, or ripple graph of a pump transducer. The dynamic relationship of the high-pressure fluid and plunger generally requires high-quality plunger seals. Plunger pumps with a larger number of plungers have the benefit of increased flow, or smoother flow without a pulsation damper. The increase in moving parts and crankshaft load is one drawback. Car washes often use these triplex-style plunger pumps (perhaps without pulsation dampers). In 1968, William Bruggeman reduced the size of the triplex pump and increased the lifespan so that car washes could use equipment with smaller footprints. Durable high-pressure seals, low-pressure seals and oil seals, hardened crankshafts, hardened connecting rods, thick ceramic plungers and heavier duty ball and roller bearings improve reliability in triplex pumps. Triplex pumps now are in a myriad of markets across the world. Triplex pumps with shorter lifetimes are commonplace to the home user. A person who uses a home pressure washer for 10 hours a year may be satisfied with a pump that lasts 100 hours between rebuilds. 
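To illustrate why adding plungers smooths delivery, the sketch below uses an idealized sinusoidal crank model with plungers phased evenly over one revolution; this model is an assumption for illustration, not a description of any particular pump. It compares the flow ripple of simplex, triplex, and quintuplex configurations.

```python
import math

def plunger_flow(theta_deg, phase_deg=0.0):
    """Instantaneous flow from one plunger under a simple crank model:
    flow follows sin(theta) on the discharge stroke and is zero on suction."""
    v = math.sin(math.radians(theta_deg - phase_deg))
    return max(v, 0.0)

def pump_flow(theta_deg, n_plungers=3):
    """Total instantaneous flow for n plungers equally phased over 360 degrees."""
    return sum(plunger_flow(theta_deg, i * 360.0 / n_plungers)
               for i in range(n_plungers))

def ripple(n_plungers, steps=3600):
    """Peak-to-trough flow variation relative to mean flow over one revolution."""
    samples = [pump_flow(i * 360.0 / steps, n_plungers) for i in range(steps)]
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / mean

if __name__ == "__main__":
    for n in (1, 3, 5):  # simplex, triplex, quintuplex
        print(f"{n} plunger(s): ripple = {ripple(n):.0%}")
```

Under this idealized model the ripple drops from several hundred percent for a single plunger to roughly 14% for three and a few percent for five, which is the behaviour the pulsation discussion above describes qualitatively.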
a chess variant Progressive talk radio, a talk radio format devoted to expressing liberal or progressive viewpoints of issues The Progressive, an American left-wing magazine Brands and enterprises Progressive Corporation, a U.S. insurance company Progressive Enterprises, a New Zealand retail cooperative Healthcare Progressive disease Progressive lens, a type of corrective eyeglass lenses Religion Progressive Adventism, a sect of the Seventh-day Adventist Church Progressive Christianity, a movement within contemporary Protestantism Progressive creationism, a form of Old Earth creationism Progressive Islam, a modern liberal interpretation of Islam Progressive Judaism, a major denomination within Judaism Progressive religion, a religious tradition which embraces theological diversity Progressive revelation (Bahá'í), a core teaching of Bahá'í that suggests that religious truth is revealed by God progressively and cyclically over time Progressive revelation (Christianity), the concept that the sections of the Bible written later contain a fuller revelation of God Technology Progressive disclosure, a technique used in human computer interaction Progressive scan, a form of video transmission Progressive shifting, a technique for changing gears in trucks Progressive stamping, a metalworking technique Verb forms Progressive aspect (also called continuous), a
towards civilization Progressivism in the United States, the political philosophy in the American context Other uses in politics Progressive Era, a period of reform in the United States (ca. 1890–1930) Progressive tax, a type of tax rate structure Arts, entertainment, and media Music Progressive music, a type of music that expands stylistic boundaries outwards "Progressive" (song), a 2009 single by Kalafina Progressive, a demo album by the band Haggard Other uses in arts, entertainment, and media Progressive chess, a chess variant
Surface pressure and surface tension There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force. Surface pressure is denoted by π: π = F/l, and it shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle's law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because "tension" is the opposite to "pressure". Pressure of an ideal gas In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume: p = nRT/V, where: p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, R is the ideal gas constant. Real gases exhibit a more complex dependence on the variables of state. Vapour pressure Vapour pressure is the pressure of a vapour in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form. The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapour pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapour pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapour bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases. The vapour pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapour pressure. Liquid pressure When a person swims under the water, water pressure is felt acting on the person's eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth. Liquid pressure also depends on the density of the liquid. If someone were submerged in a liquid more dense than water, the pressure would be correspondingly greater. Thus, we can say that the depth, density and liquid pressure are directly proportional. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the following formula: p = ρgh, where: p is liquid pressure, g is gravity at the surface of overlaying material, ρ is density of liquid, h is height of liquid column or depth within a substance. Another way of saying the same formula is the following: The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water
not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about . In technical work, this is written "a gauge pressure of ". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of is sometimes written as "32 psig", and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred. Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is , a gas (such as helium) at (gauge) ( [absolute]) is 50% denser than the same gas at (gauge) ( [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one. Scalar nature In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because we are dealing with an extremely large number of molecules and because the motion of the individual molecules is random in every direction, we do not detect any motion. If we enclose the gas within a container, we detect a pressure in the gas from the molecules colliding with the walls of our container. We can put the walls of our container anywhere inside the gas, and the force per unit area (the pressure) is the same. We can shrink the size of our "container" down to a very small point (becoming less true as we approach the atomic scale), and the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure force acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angle) to the surface. A closely related quantity is the stress tensor σ, which relates the vector force to the vector area via the linear relation . This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure. According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested. Types Fluid pressure Fluid pressure is most often the compressive stress at some point within a fluid. (The term fluid refers to both liquids and gases – for more information specifically about liquid pressure, see section below.) 
Fluid pressure occurs in one of two situations: An open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere. A closed condition, called "closed conduit", e.g. a water line or gas line. Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure. Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics. The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal and incompressible. An ideal fluid is a fluid in which there is no friction; it is inviscid (zero viscosity). The equation for all points of a system filled with a constant-density fluid is p/γ + v²/(2g) + z = constant, where: p is the pressure of the fluid, γ = ρg (density × acceleration of gravity) is the (volume-)specific weight of the fluid, v is the velocity of the fluid, g is the acceleration of gravity, z is the elevation, p/γ is the pressure head, and v²/(2g) is the velocity head. Applications Hydraulic brakes Artesian well Blood pressure Hydraulic head Plant cell turgidity Pythagorean cup Pressure washing Explosion or deflagration pressures Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces. Negative pressures While pressures are, in general, positive, there are several situations in which negative pressures may be encountered: When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa). For example, abdominal decompression is an obstetric procedure during which negative gauge pressure is applied intermittently to a pregnant woman's abdomen. Negative absolute pressures are possible. They are effectively tension, and both bulk solids and bulk liquids can be put under negative absolute pressure by pulling on them. Microscopically, the molecules in solids and liquids have attractive interactions that overpower the thermal kinetic energy, so some tension can be sustained. Thermodynamically, however, a bulk material under negative pressure is in a metastable state, and it is especially fragile in the case of liquids where the negative pressure state is similar to superheating and is easily susceptible to cavitation. In certain situations, the cavitation can be avoided and negative pressures sustained indefinitely, for example, liquid mercury has been observed to sustain up to in clean glass containers. Negative liquid pressures are thought to be involved in the ascent of sap in plants taller than 10 m (the atmospheric pressure head of water). The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed "vacuum pressure" (not to be confused with the negative gauge pressure of a vacuum).
For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive pressure along one surface normal, with a component of negative pressure acting along another surface normal. The stresses in an electromagnetic field are generally non-isotropic, with the pressure normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this. In cosmology, dark energy creates a very small yet cosmically significant amount of negative pressure, which accelerates the expansion of the universe. Stagnation pressure Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by p0 = ps + ρv²/2, where p0 is the stagnation pressure, ρ is the density, v is the flow velocity, and ps is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.
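To make the pressure relationships in this passage concrete, here is a minimal sketch in Python covering the gauge/absolute distinction, the ideal-gas density comparison, and the static/stagnation relation. The specific figures quoted in the original text did not survive extraction, so the atmospheric value and the sample pressures below are assumptions used only for illustration.

```python
ATMOSPHERIC_KPA = 101.325  # assumed standard atmosphere

def absolute_from_gauge(gauge_kpa, atmospheric_kpa=ATMOSPHERIC_KPA):
    """Absolute pressure = gauge pressure + local atmospheric pressure."""
    return gauge_kpa + atmospheric_kpa

def density_ratio(gauge1_kpa, gauge2_kpa, atmospheric_kpa=ATMOSPHERIC_KPA):
    """For an ideal gas at fixed temperature, density scales with ABSOLUTE
    pressure, so comparisons must never be made on gauge values alone."""
    return (absolute_from_gauge(gauge1_kpa, atmospheric_kpa) /
            absolute_from_gauge(gauge2_kpa, atmospheric_kpa))

def stagnation_pressure(static_pa, density_kg_m3, velocity_m_s):
    """p0 = ps + 0.5 * rho * v**2 (incompressible flow brought to rest)."""
    return static_pa + 0.5 * density_kg_m3 * velocity_m_s ** 2

# Tire example: 220 kPa (32 psi) gauge corresponds to roughly 321 kPa absolute.
print(f"{absolute_from_gauge(220):.0f} kPa absolute")
# Gas example (assumed figures): 200 kPa gauge vs 100 kPa gauge differ in
# density by about 1.5x, not the 2x a naive gauge-only comparison suggests.
print(f"density ratio ≈ {density_ratio(200, 100):.2f}")
# Water at 3 m/s: stagnation pressure exceeds static pressure by about 4.5 kPa.
print(f"{stagnation_pressure(101_325, 1000, 3.0):,.0f} Pa")
```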
Naming The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions. Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon. Exceptions exist for side counts that are easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram. {|class="wikitable" |- |+ Polygon names and miscellaneous properties |- !style="width:20em;" | Name !style="width:2em;" | Sides !style="width:auto;" | Properties |- |monogon || 1 || Not generally recognised as a polygon, although some disciplines such as graph theory sometimes use the term.
|- |digon || 2 || Not generally recognised as a polygon in the Euclidean plane, although it can exist as a spherical polygon. |- |triangle (or trigon) || 3 || The simplest polygon which can exist in the Euclidean plane. Can tile the plane. |- |quadrilateral (or tetragon) || 4 || The simplest polygon which can cross itself; the simplest polygon which can be concave; the simplest polygon which can be non-cyclic. Can tile the plane. |- |pentagon || 5 || The simplest polygon which can exist as a regular star. A star pentagon is known as a pentagram or pentacle. |- |hexagon || 6 || Can tile the plane. |- |heptagon (or septagon) || 7 || The simplest polygon such that the regular form is not constructible with compass and straightedge. However, it can be constructed using a neusis construction. |- |octagon || 8 || |- |nonagon (or enneagon) || 9 || "Nonagon" mixes Latin [novem = 9] with Greek; "enneagon" is pure Greek. |- |decagon || 10 || |- |hendecagon (or undecagon) || 11 || The simplest polygon such that the regular form cannot be constructed with compass, straightedge, and angle trisector. However, it can be constructed with neusis. |- |dodecagon (or duodecagon) || 12 || |- |tridecagon (or triskaidecagon)|| 13 || |- |tetradecagon (or tetrakaidecagon)|| 14 || |- |pentadecagon (or pentakaidecagon) || 15 || |- |hexadecagon (or hexakaidecagon) || 16 || |- |heptadecagon (or heptakaidecagon)|| 17 || Constructible polygon |- |octadecagon (or octakaidecagon)|| 18 || |- |enneadecagon (or enneakaidecagon)|| 19 || |- |icosagon || 20 || |- |icositrigon (or icosikaitrigon) || 23 || The simplest polygon such that the regular form cannot be constructed with neusis. |- |icositetragon (or icosikaitetragon) || 24 || |- |icosipentagon (or icosikaipentagon) || 25 || The simplest polygon such that it is not known if the regular form can be constructed with neusis or not. |- |triacontagon || 30 || |- |tetracontagon (or tessaracontagon) || 40 || |- |pentacontagon (or pentecontagon) || 50 || <ref name=Peirce>The New Elements of Mathematics: Algebra and Geometry] by Charles Sanders Peirce (1976), p.298</ref> |- |hexacontagon (or hexecontagon) || 60 || |- |heptacontagon (or hebdomecontagon) || 70 || |- |octacontagon (or ogdoëcontagon) || 80 || |- |enneacontagon (or enenecontagon) || 90 || |- |hectogon (or hecatontagon) || 100 || |- | 257-gon || 257 || Constructible polygon |- |chiliagon || 1000 || Philosophers including René Descartes, Immanuel Kant, David Hume, have used the chiliagon as an example in discussions. |- |myriagon || 10,000 || Used as an example in some philosophical discussions, for example in Descartes's Meditations on First Philosophy|- | 65537-gon || 65,537 || Constructible polygon |- |megagonDarling, David J., The universal book of mathematics: from Abracadabra to Zeno's paradoxes, John Wiley & Sons, 2004. p. 249. . || 1,000,000 || As with René Descartes's example of the chiliagon, the million-sided polygon has been used as an illustration of a well-defined concept that cannot be visualised.Merrill, John Calhoun and Odell, S. Jack, Philosophy and Journalism, Longman, 1983, p. 47, .Mandik, Pete, Key Terms in Philosophy of Mind, Continuum International Publishing Group, 2010, p. 26, .Balmes, James, Fundamental Philosophy, Vol II, Sadlier and Co., Boston, 1856, p. 27. The megagon is also used as an illustration of the convergence of regular polygons to a circle. |- |apeirogon || ∞|| A degenerate polygon of infinitely many sides. 
|} To construct the name of a polygon with more than 20 and less than 100 edges, combine the prefixes as follows. The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity of concatenated prefix numbers in the naming of quasiregular polyhedra, though not all sources use it. History Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristophanes, found at Caere and now in the Capitoline Museum.Cratere with the blinding of Polyphemus and a naval battle , Castellani Halls, Capitoline Museum, accessed 2013-11-11. Two pentagrams are visible near the center of the image, The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century. In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons. In nature Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made. Regular hexagons can occur when the cooling of lava forms areas of tightly packed
it has sides. Each corner has several angles. The two most important ones are: Interior angle – The sum of the interior angles of a simple n-gon is radians or degrees. This is because any simple n-gon ( having n sides ) can be considered to be made up of triangles, each of which has an angle sum of π radians or 180 degrees. The measure of any interior angle of a convex regular n-gon is radians or degrees. The interior angles of regular star polygons were first studied by Poinsot, in the same paper in which he describes the four regular star polyhedra: for a regular -gon (a p-gon with central density q), each interior angle is radians or degrees. Exterior angle – The exterior angle is the supplementary angle to the interior angle. Tracing around a convex n-gon, the angle "turned" at a corner is the exterior or external angle. Tracing all the way around the polygon makes one full turn, so the sum of the exterior angles must be 360°. This argument can be generalized to concave simple polygons, if external angles that turn in the opposite direction are subtracted from the total turned. Tracing around an n-gon in general, the sum of the exterior angles (the total amount one rotates at the vertices) can be any integer multiple d of 360°, e.g. 720° for a pentagram and 0° for an angular "eight" or antiparallelogram, where d is the density or turning number of the polygon. See also orbit (dynamics). Area In this section, the vertices of the polygon under consideration are taken to be in order. For convenience in some formulas, the notation will also be used. If the polygon is non-self-intersecting (that is, simple), the signed area is or, using determinants where is the squared distance between and The signed area depends on the ordering of the vertices and of the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive -axis to the positive -axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or Surveyor's formula. The area A of a simple polygon can also be computed if the lengths of the sides, a1, a2, ..., an and the exterior angles, θ1, θ2, ..., θn are known, from: The formula was described by Lopshits in 1963. If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior and boundary grid points: the former number plus one-half the latter number, minus 1. In every polygon with perimeter p and area A , the isoperimetric inequality holds. For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon. The lengths of the sides of a polygon do not in general determine its area. However, if the polygon is simple and cyclic then the sides do determine the area. Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic). Regular polygons Many specialized formulas apply to the areas of regular polygons. 
The area of a regular polygon is given in terms of the radius r of its inscribed circle and its perimeter p by This radius is also termed its apothem and is often represented as a. The area of a regular n-gon in terms of the radius R of its circumscribed circle can be expressed trigonometrically as: The area of a regular n-gon inscribed in a unit-radius circle, with side s and interior angle can also be expressed trigonometrically as: Self-intersecting The area of a self-intersecting polygon can be defined in two different ways, giving different answers: Using the formulas for simple polygons, we allow that particular regions within the polygon may have their area multiplied by a factor which we call the density of the region. For example, the central convex pentagon in the center of a pentagram has density 2. The two triangular regions of a cross-quadrilateral (like a figure 8) have opposite-signed densities, and adding their areas together can give a total area of zero for the whole figure. Considering the enclosed regions as point sets, we can find the area of the enclosed point set. This corresponds to the area of the plane covered by the polygon or to the area of one or more simple polygons having the same outline as the self-intersecting one. In the case of the cross-quadrilateral, it is treated as two simple triangles. Centroid Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are In these formulas, the signed value of area must be used. For triangles (), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for . The centroid of the vertex set of a polygon with vertices has the coordinates Generalizations The idea of a polygon has been generalized in various ways. Some of the more important include: A spherical polygon is a circuit of arcs of great circles (sides) and vertices on the surface of a sphere. It allows the digon, a polygon having only two sides and two corners, which is impossible in a flat plane. Spherical polygons play an important role in cartography (map making) and in Wythoff's construction of the uniform polyhedra. A skew polygon does not lie in a flat plane, but zigzags in three (or more) dimensions. The Petrie polygons of the regular polytopes are well known examples. An apeirogon is an infinite sequence of sides and angles, which is not closed but has no ends because it extends indefinitely in both directions. A skew apeirogon is an infinite sequence of sides and angles that do not lie in a flat plane. A complex polygon is a configuration analogous to an ordinary polygon, which exists in the complex plane of two real and two imaginary dimensions. An abstract polygon is an algebraic partially ordered set representing the various elements (sides, vertices, etc.) and their connectivity. A real geometric polygon is said to be a realization of the associated abstract polygon. Depending on the mapping, all the generalizations described here can be realized. A polyhedron is a three-dimensional solid bounded by flat polygonal faces, analogous to a polygon in two dimensions. The corresponding shapes in four or higher dimensions are called polytopes. (In other conventions, the words polyhedron and polytope are used in any dimension, with the distinction between the two that a polytope is necessarily bounded.) 
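The angle, area, and centroid formulas discussed above translate directly into code. The following is a minimal sketch in Python (the function names are mine, not from the text) of the shoelace area, the vertex and solid centroids, the interior-angle sum of a simple n-gon, and the area of a regular n-gon in terms of its circumradius.

```python
import math

def signed_area(vertices):
    """Shoelace formula: positive for counterclockwise vertex order."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

def vertex_centroid(vertices):
    """Centroid of the vertex set: the plain average of the vertices."""
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n, sum(y for _, y in vertices) / n)

def solid_centroid(vertices):
    """Centroid of the solid simple polygon; uses the SIGNED area, as noted above."""
    a = signed_area(vertices)
    n = len(vertices)
    cx = cy = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        cross = x1 * y2 - x2 * y1
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    return (cx / (6.0 * a), cy / (6.0 * a))

def interior_angle_sum_degrees(n):
    """Sum of interior angles of a simple n-gon: (n - 2) * 180 degrees."""
    return (n - 2) * 180.0

def regular_polygon_area(n, circumradius):
    """Area of a regular n-gon with circumscribed-circle radius R:
    0.5 * n * R**2 * sin(2*pi/n)."""
    return 0.5 * n * circumradius ** 2 * math.sin(2.0 * math.pi / n)

# A unit square, listed counterclockwise:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signed_area(square))            # 1.0
print(vertex_centroid(square))        # (0.5, 0.5)
print(solid_centroid(square))         # (0.5, 0.5)
print(interior_angle_sum_degrees(4))  # 360.0
print(regular_polygon_area(6, 1.0))   # ~2.598, a regular hexagon in a unit circle
```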
to the 1990s, which involve player characters defeating large groups of weaker enemies along a side-scrolling playfield. Examples include beat 'em ups like Kung-Fu Master and Double Dragon, ninja action games like The Legend of Kage and Shinobi, scrolling platformers like Super Mario Bros. and Sonic the Hedgehog, and run-and-gun shooters like Rolling Thunder and Gunstar Heroes. "Character action games" is also a term used for 3D hack and slash games modelled after Devil May Cry, which represent an evolution of arcade character action games. Other examples of this sub-genre include Ninja Gaiden, God of War, and Bayonetta. Fighting games Fighting games typically have a larger number of player characters to choose from, with some basic moves available to all or most characters and some unique moves only available to one or a few characters. Having many distinctive characters to play as and against, all possessing different moves and abilities, is necessary to create a larger gameplay variety in such games. Hero shooters Similarly to MOBAs, hero shooters emphasize pre-designed "hero" characters with distinctive abilities and weapons that are not available to the other characters. Hero shooters strongly encourage teamwork between players on a team, guiding players to select effective combinations of hero characters and coordinate the use of hero abilities during a match. Multiplayer online battle arena Multiplayer online battle arena (MOBA) games offer a large group of viable player characters for the player to choose from, each of which having distinctive abilities, strengths, and weaknesses to make the game play style different. Characters can learn new abilities or augment existing ones over the course of a match by collecting experience points. Choosing a character who complements player's teammates
for Dune, and Command & Conquer series. In such games, the only real indication that the player has a character (instead of an omnipresent status), is from the cutscenes during which the character is being given a mission briefing or debriefing; the player is usually addressed as "general", "commander", or another military rank. In gaming culture, such a character was called Ageless, Faceless, Gender-Neutral, Culturally Ambiguous Adventure Person, abbreviated as AFGNCAAP; a term that originated in Zork: Grand Inquisitor where it is used satirically to refer to the player. Character action games Character action games (also called "character-driven" games, "character games" or just "action games") are a broad category of action games, referring to a variety of games that are driven by the physical actions of player characters. The term dates back to the golden age of arcade video games in the early 1980s, when the terms "action games" and "character games" began being used to distinguish a new emerging genre of character-driven action games from the space shoot 'em ups that had previously dominated the arcades in the late 1970s. Classic examples of character action games from that period include maze games like Pac-Man, platformers like Donkey Kong, and Frogger. Side-scrolling character action games (also called "side-scrolling action games" or "side-scrollers") are a broad category of character action games that were popular from the mid-1980s to the 1990s.
a number of neighbouring parishes to be placed under one benefice in the charge of a priest who conducts services by rotation, with additional services being provided by lay readers or other non-ordained members of the church community. A chapelry was a subdivision of an ecclesiastical parish in England, and parts of Lowland Scotland up to the mid 19th century. It had a similar status to a township but was so named as it had a chapel which acted as a subsidiary place of worship to the main parish church. In England civil parishes and their governing parish councils evolved in the 19th century as ecclesiastical parishes began to be relieved of what became considered to be civic responsibilities. Thus their boundaries began to diverge. The word "parish" acquired a secular usage. Since 1895, a parish council elected by public vote or a (civil) parish meeting administers a civil parish and is formally recognised as the level of local government below a district council. The traditional structure of the Church of England with the parish as the basic unit has been exported to other countries and churches throughout the Anglican Communion and Commonwealth but does not necessarily continue to be administered in the same way. Church of Scotland The parish is also the basic level of church administration in the Church of Scotland. Spiritual oversight of each parish church in Scotland is responsibility of the congregation's Kirk Session. Patronage was regulated in 1711 (Patronage Act) and abolished in 1874, with the result that ministers must be elected by members of the congregation. Many parish churches in Scotland today are "linked" with neighbouring parish churches served by a single minister. Since the abolition of parishes as a unit of civil government in Scotland in 1929, Scottish parishes have purely ecclesiastical significance and the boundaries may be adjusted by the local Presbytery. Church in Wales The church in Wales was disestablished in 1920 and is made up of six dioceses. Parishes were also civil administration areas until communities were established in 1974. Methodist Church Although they are more often simply called congregations and have no geographic boundaries, in the United Methodist Church congregations are called parishes. A prominent example of this usage comes in The Book of Discipline of The United Methodist Church, in which the committee of every local congregation that handles staff support is referred to as the committee on Pastor-Parish Relations. This committee gives recommendations to the bishop on behalf of the parish/congregation since it is the United Methodist Bishop of the episcopal area who appoints a pastor to each congregation. The same is true in the African Methodist Episcopal Church and the Christian Methodist Episcopal Church. In New Zealand, a local grouping of Methodist churches that share one or more ministers (which in the United Kingdom would be called a circuit) is referred to as a parish. Catholic Church In the Catholic Church, each parish normally has its own
parish priest (in some countries called pastor or provost), who has responsibility and canonical authority over the parish. What in most English-speaking countries is termed the "parish priest" is referred to as the "pastor" in the United States, where the term "parish priest" is used of any priest assigned to a parish even in a subordinate capacity. These are called "assistant priests", "parochial vicars", "curates", or, in the United States, "associate pastors" and "assistant pastors". Each diocese (administrative region) is divided into parishes, each with its own central church called the parish church, where religious services take place. Some larger parishes or parishes that have been combined under one parish priest may have two or more such churches, or the parish may be responsible for chapels (or chapels of ease) located at some distance from the mother church for the convenience of distant parishioners. In addition to a parish church, each parish may maintain auxiliary organizations and their facilities such as a rectory, parish hall, parochial school, or convent, frequently located on the same campus or adjacent to the church. Normally, a parish comprises all Catholics living within its geographically defined area, but non-territorial parishes can also be established within a defined area on a personal basis for Catholics belonging to a particular rite, language, nationality, or community. An example is that of personal parishes established in accordance with the 7 July 2007 motu proprio Summorum Pontificum for those attached to the pre-Vatican II liturgy. Most Catholic parishes are part of Latin Rite dioceses, which together cover the whole territory of a country. There can also be overlapping parishes of eparchies of Eastern Catholic Churches, personal ordinariates or military ordinariates. Parishes are generally territorial, but may be personal.
See also
Parish church
Parish pump
Parish registers: Birth certificate, Marriage certificate, Death certificate
Collegiate
of Palaestina Prima. He would have received a conventional upper-class education in the Greek classics and rhetoric, perhaps at the famous school at Gaza. He may have attended law school, possibly at Berytus (present-day Beirut) or Constantinople (now Istanbul), and became a lawyer (rhetor). He evidently knew Latin, as was natural for a man with legal training. In 527, the first year of the reign of the emperor Justinian I, he became the legal adviser for Belisarius, a general whom Justinian made his chief military commander in a great attempt to restore control over the lost western provinces of the empire. Procopius was with Belisarius on the eastern front until the latter was defeated at the Battle of Callinicum in 531 and recalled to Constantinople. Procopius witnessed the Nika riots of January 532, which Belisarius and his fellow general Mundus repressed with a massacre in the Hippodrome. In 533, he accompanied Belisarius on his victorious expedition against the Vandal kingdom in North Africa, took part in the capture of Carthage, and remained in Africa with Belisarius's successor Solomon the Eunuch when Belisarius returned east to the capital. Procopius recorded a few of the extreme weather events of 535–536, although these were presented as a backdrop to Byzantine military activities, such as a mutiny in and around Carthage. He rejoined Belisarius for his campaign against the Ostrogothic kingdom in Italy and experienced the Gothic siege of Rome that lasted a year and nine days, ending in mid-March 538. He witnessed Belisarius's entry into the Gothic capital, Ravenna, in 540. Both the Wars and the Secret History suggest that his relationship with Belisarius cooled thereafter. When Belisarius was sent back to Italy in 544 to cope with a renewal of the war with the Goths, now led by the able king Totila, Procopius appears to have no longer been on Belisarius's staff. As magister militum, Belisarius was an "illustrious man" (vir illustris; in Greek, illoústrios); as his legal adviser, Procopius must therefore have had at least the rank of a "visible man" (vir spectabilis). He thus belonged to the mid-ranking group of the senatorial order. However, the Suda, which is usually well informed in such matters, also describes Procopius himself as one of the illustres. Should this information be correct, Procopius would have had a seat in Constantinople's senate, which was restricted to the illustres under Justinian. He also wrote that under Justinian's reign, in 560, a major Christian church dedicated to the Virgin Mary was built on the site of the Temple Mount. It is not certain when Procopius died. Many historians, including Howard-Johnson, Cameron, and Geoffrey Greatrex, date his death to 554, but there was an urban prefect of Constantinople called Procopius in 562. In that year, Belisarius was implicated in a conspiracy and was brought before this urban prefect. In fact, some scholars have argued that Procopius died at least a few years after 565, as he unequivocally states in the beginning of his Secret History that he planned to publish it after the death of Justinian for fear he would be tortured and killed by the emperor (or even by general Belisarius) if the emperor (or the general) learned about what Procopius wrote in this later history: his scathing criticism of the emperor, of his wife, of Belisarius, and of the general's wife, Antonina, calling the imperial couple "demons in human form" and the general and his wife incompetent and treacherous.
However, most scholars believe that the Secret History was written in 550 and remained unpublished during Procopius' lifetime. Writings The writings of Procopius are the primary source of information for the rule of the emperor Justinian I. Procopius was the author of a history in eight books on the wars prosecuted by Justinian, a panegyric on the emperor's public works projects throughout the empire, and a book known as the Secret History that claims to report the scandals that Procopius could not include in his officially sanctioned history for fear of angering the emperor, his wife, Belisarius, and the general's wife; he had to wait until all of them were dead to avoid retaliation. History of the Wars Procopius's Wars or History of the Wars (Hypèr tōn Polémon Lógoi, "Words on the Wars"; Latin De Bellis, "On the Wars") is his most important work, although less well known than the Secret History. The first seven books seem to have been largely completed by 545 and may have been published as a unit. They were, however, updated to mid-century before publication, with the latest mentioned event occurring in early 551. The eighth and final book brings the history to 553. The first two books, often known as The Persian War, deal with the conflict between the Romans and Sassanid Persia in Mesopotamia, Syria, Armenia, Lazica, and Iberia (present-day Georgia). It details the campaigns of the Sassanid shah Kavadh I, the 532 'Nika' revolt, the war by Kavadh's successor Khosrau I in 540, his destruction of Antioch and deportation of its inhabitants to Mesopotamia, and the great plague that devastated the empire from 542. The Persian War also covers the early career of Procopius's patron Belisarius in some detail. The Wars' next two books, known as The Vandal War or Vandalic War, cover Belisarius's successful campaign against the Vandal kingdom that had occupied Rome's provinces in northwest Africa for the last century. The final four books, known as The Gothic War, cover the Italian campaigns by Belisarius and others against the Ostrogoths. Procopius includes accounts of the 1st and 2nd sieges of Naples and the 1st, 2nd, and 3rd sieges of Rome. He also includes an account of the rise of the Franks (see Arborychoi). The last book describes the eunuch Narses's successful conclusion of
who has also provided a new commentary and notes. Prokopios, The Secret History, translated by Anthony Kaldellis. Indianapolis: Hackett Publishing, 2010. This edition includes related texts, an introductory essay, notes, maps, a timeline, a guide to the main sources from the period and a guide to scholarship in English. The translator uses blunt and precise English prose in order to adhere to the style of the original text. Notes References This article is based on an earlier version by James Allan Evans, originally posted at Nupedia. Further reading Adshead, Katherine: Procopius' Poliorcetica: continuities and discontinuities, in: G. Clarke et al. (eds.): Reading the past in late antiquity, Australian National UP, Rushcutters Bay 1990, pp. 93–119 Alonso-Núñez, J. M.: Jordanes and Procopius on Northern Europe, in: Nottingham Medieval Studies 31 (1987), 1–16. Amitay, Ory: Procopius of Caesarea and the Girgashite Diaspora, in: Journal for the Study of the Pseudepigrapha 20 (2011), 257–276. Anagnostakis, Ilias: Procopius's dream before the campaign against Libya: a reading of Wars 3.12.1-5, in: C. Angelidi and G. Calofonos (eds.), Dreaming in Byzantium and Beyond, Farnham: Ashgate Publishing 2014, 79–94. Bachrach, Bernard S.: Procopius, Agathias and the Frankish Military, in: Speculum 45 (1970), 435–441. Bachrach, Bernard S.: Procopius and the chronology of Clovis's reign, in: Viator 1 (1970), 21–32. Baldwin, Barry: An Aphorism in Procopius, in: Rheinisches Museum für Philologie 125 (1982), 309–311. Baldwin, Barry: Sexual Rhetoric in Procopius, in: Mnemosyne 40 (1987), pp. 150–152 Belke, Klaus: Prokops De aedificiis, Buch V, zu Kleinasien, in: Antiquité Tardive 8 (2000), 115–125. Börm, Henning: Prokop und die Perser. Stuttgart: Franz Steiner Verlag, 2007. (Review in English by G. Greatrex and Review in English by A. Kaldellis) Börm, Henning: Procopius of Caesarea, in Encyclopaedia Iranica Online, New York 2013. Börm, Henning: Procopius, his predecessors, and the genesis of the Anecdota: Antimonarchic discourse in late antique historiography, in: H. Börm (ed.): Antimonarchic discourse in Antiquity. Stuttgart: Franz Steiner Verlag 2015, 305–346. Braund, David: Procopius on the Economy of Lazica, in: The Classical Quarterly 41 (1991), 221–225. Brodka, Dariusz: Die Geschichtsphilosophie in der spätantiken Historiographie. Studien zu Prokopios von Kaisareia, Agathias von Myrina und Theophylaktos Simokattes. Frankfurt am Main: Peter Lang, 2004. Burn, A. R.: Procopius and the island of ghosts, in: English Historical Review 70 (1955), 258–261. Cameron, Averil: Procopius and the Sixth Century. Berkeley: University of California Press, 1985. Cameron, Averil: The scepticism of Procopius, in: Historia 15 (1966), 466–482. Colvin, Ian: Reporting Battles and Understanding Campaigns in Procopius and Agathias: Classicising Historians' Use of Archived Documents as Sources, in: A. Sarantis (ed.): War and warfare in late antiquity. Current perspectives, Leiden: Brill 2013, 571–598. Cresci, Lia Raffaella: Procopio al confine tra due tradizioni storiografiche, in: Rivista di Filologia e di Istruzione Classica 129 (2001), 61–77. Cristini, Marco: Il seguito ostrogoto di Amalafrida: confutazione di Procopio, Bellum Vandalicum 1.8.12, in: Klio 99 (2017), 278–289. Cristini, Marco: Totila and the Lucanian Peasants: Procop. Goth. 3.22.20, in: Greek, Roman and Byzantine Studies 61 (2021), 73–84. Croke, Brian and James Crow: Procopius and Dara, in: The Journal of Roman Studies 73 (1983), 143–159. 
Downey, Glanville: The Composition of Procopius, De Aedificiis, in: Transactions and Proceedings of the American Philological Association 78 (1947), 171–183. Evans, James A. S.: Justinian and the Historian Procopius, in: Greece & Rome 17 (1970), 218–223. Evans, James A. S.: Procopius. New York: Twayne Publishers, 1972. Gordon, C. D.: Procopius and Justinian's Financial Policies, in: Phoenix 13 (1959), 23–30. Greatrex, Geoffrey: Procopius and the Persian Wars, D.Phil. thesis, Oxford, 1994. Greatrex, Geoffrey: The dates of Procopius' works, in: BMGS 18 (1994), 101–114. Greatrex, Geoffrey: The Composition of Procopius' Persian Wars and John the Cappadocian, in: Prudentia 27 (1995), 1–13. Greatrex, Geoffrey: Rome and Persia at War, 502–532. London: Francis Cairns, 1998. Greatrex, Geoffrey: Recent work on Procopius and the composition of Wars VIII, in: BMGS 27 (2003), 45–67. Greatrex, Geoffrey: Perceptions of Procopius in Recent Scholarship, in: Histos 8 (2014), 76–121 and 121a–e (addenda). Howard-Johnson, James: The Education and Expertise of Procopius, in: Antiquité Tardive 10 (2002), 19–30 Kaegi, Walter: Procopius the military historian, in: Byzantinische Forschungen. 15, 1990, , 53–85 (online (PDF; 989 KB)). Kaldellis, Anthony: Classicism, Barbarism, and Warfare: Prokopios and the Conservative Reaction to Later Roman Military Policy, American Journal of Ancient History, n.s. 3-4 (2004-2005 [2007]), 189–218. Kaldellis, Anthony: Identifying Dissident Circles in Sixth-Century Byzantium: The Friendship of Prokopios and Ioannes Lydos, Florilegium, Vol. 21 (2004), 1–17. Kaldellis, Anthony: Procopius of Caesarea: Tyranny, History and Philosophy at the End of Antiquity. Philadelphia: University of Pennsylvania Press, 2004. Kaldellis, Anthony: Prokopios’ Persian War: A Thematic and Literary Analysis, in: R. Macrides, ed., History as Literature in Byzantium, Aldershot: Ashgate, 2010, 253–273. Kaldellis, Anthony: Prokopios’ Vandal War: Thematic Trajectories and Hidden Transcripts, in: S. T. Stevens & J. Conant, eds., North Africa under Byzantium and Early Islam, Washington, D.C: Dumbarton Oaks, 2016, 13–21. Kaldellis, Anthony: The Date and Structure of Prokopios’ Secret History and his Projected Work on Church History, in: Greek, Roman, and Byzantine Studies, Vol. 49 (2009), 585–616. Kruse, Marion: The Speech of the Armenians in Procopius: Justinian's Foreign Policy and the Transition between Books 1 and 2 of the Wars, in: The Classical Quarterly 63 (2013), 866–881. Kovács, Tamás: "Procopius's Sibyl - the fall of Vitigis and the Ostrogoths", Graeco-Latina Brunensia 24.2 (2019), 113–124. Lillington-Martin, Christopher, 2007–2017: 2007, "Archaeological and Ancient Literary Evidence for a Battle near Dara Gap, Turkey, AD 530: Topography, Texts and Trenches" in BAR –S1717, 2007 The Late Roman Army in the Near East from Diocletian to the Arab Conquest Proceedings of a colloquium held at Potenza, Acerenza and Matera, Italy edited by Ariel S. Lewin and Pietrina Pellegrini, pp. 299–311; 2009, "Procopius, Belisarius and the Goths" in Journal of the Oxford University History Society,(2009) Odd Alliances edited by Heather Ellis and Graciela Iglesias Rogers. , pages 1– 17, https://sites.google.com/site/jouhsinfo/issue7specialissueforinternetexplorer; 2011, "Secret Histories", http://classicsconfidential.co.uk/2011/11/19/secret-histories/; 2012, "Hard and Soft Power on the Eastern Frontier: a Roman Fortlet between Dara and Nisibis, Mesopotamia, Turkey: Prokopios’ Mindouos?" 
in The Byzantinist, edited by Douglas Whalin, Issue 2 (2012), pp. 4–5, http://oxfordbyzantinesociety.files.wordpress.com/2012/06/obsnews2012final.pdf; 2013, Procopius on the struggle for Dara and Rome, in A. Sarantis, N. Christie (eds.): War and Warfare in Late Antiquity: Current Perspectives (Late Antique Archaeology 8.1–8.2 2010–11), Leiden: Brill 2013, pp. 599–630, ; 2013 “La defensa de Roma por Belisario” in: Justiniano I el Grande (Desperta Ferro) edited by Alberto Pérez Rubio, no. 18 (July 2013), pages 40–45, ISSN 2171-9276; 2017, Procopius of Caesarea: Literary and Historical Interpretations (editor), Routledge (July 2017), www.routledge.com/9781472466044; 2017, "Introduction" and chapter 10, “Procopius, πάρεδρος / quaestor, Codex Justinianus, I.27 and Belisarius’ strategy in the Mediterranean” in Procopius of Caesarea: Literary and Historical Interpretations above. Maas, Michael Robert: Strabo and Procopius: Classical Geography for a Christian Empire, in H. Amirav et al. (eds.): From Rome to Constantinople. Studies in Honour of Averil Cameron, Leuven: Peeters, 2007, 67–84. Martindale, John: The Prosopography of the Later Roman Empire III, Cambridge 1992, 1060–1066. Meier, Mischa: Prokop, Agathias, die Pest und das ′Ende′ der antiken Historiographie, in Historische Zeitschrift 278 (2004), 281–310. Meier, Mischa and Federico Montinaro (eds.): A Companion to Procopius of Caesarea. Brill, Leiden 2022, ISBN 978-3-89781-215-4. Pazdernik, Charles F.: Xenophon’s Hellenica in Procopius’ Wars: Pharnabazus and Belisarius, in: Greek, Roman and Byzantine Studies 46 (2006) 175–206. Rance,
such as communications channels and pairs of electromagnetic spectrum band and signal transmission power can only be used by a single party at a time, or a single party in a divisible context, if owned or used at all. Such resources have thus far usually not been considered property, or at least not private property, even though the party bearing the right of exclusive use may transfer that right to another. In many societies the human body is considered property of some kind or other. The question of the ownership of and rights to one's body arises in general in the discussion of human rights, including the specific issues of slavery, conscription, rights of children under the age of majority, marriage, abortion, prostitution, drugs, euthanasia and organ donation. Issues in property theory What can be property? The two major justifications given for original property, or the homestead principle, are effort and scarcity. John Locke emphasized effort, "mixing your labor" with an object, or clearing and cultivating virgin land. Benjamin Tucker preferred to look at the telos of property, i.e. what is the purpose of property? His answer: to solve the scarcity problem. Only when items are relatively scarce with respect to people's desires do they become property. For example, hunter-gatherers did not consider land to be property, since there was no shortage of land. Agrarian societies later made arable land property, as it was scarce. For something to be economically scarce it must necessarily have the exclusivity property—that use by one person excludes others from using it. These two justifications lead to different conclusions on what can be property. Intellectual property—incorporeal things like ideas, plans, orderings and arrangements (musical compositions, novels, computer programs)—is generally considered valid property to those who support an effort justification, but invalid to those who support a scarcity justification, since such things don't have the exclusivity property (however, those who support a scarcity justification may still support other "intellectual property" laws such as copyright, as long as these are a subject of contract instead of government arbitration). Thus even ardent propertarians may disagree about IP. By either standard, one's body is one's property. From some anarchist points of view, the validity of property depends on whether the "property right" requires enforcement by the state. Different forms of "property" require different amounts of enforcement: intellectual property requires a great deal of state intervention to enforce, ownership of distant physical property requires quite a lot, ownership of carried objects requires very little, while ownership of one's own body requires absolutely no state intervention. Some anarchists don't believe in property at all. Many things have existed that did not have an owner, sometimes called the commons. The term "commons," however, is also often used to mean something quite different: "general collective ownership"—i.e. common ownership. Also, the same term is sometimes used by statists to mean government-owned property that the general public is allowed to access (public property). Law in all societies has tended to develop towards reducing the number of things not having clear owners.
Supporters of property rights argue that this enables better protection of scarce resources, due to the tragedy of the commons, while critics argue that it leads to the 'exploitation' of those resources for personal gain and that it hinders taking advantage of potential network effects. These arguments have differing validity for different types of "property"—things that are not scarce are, for instance, not subject to the tragedy of the commons. Some apparent critics of property in fact advocate general collective ownership rather than ownerlessness. Things that do not have owners include: ideas (except for intellectual property), seawater (which is, however, protected by anti-pollution laws), parts of the seafloor (see the United Nations Convention on the Law of the Sea for restrictions), gases in Earth's atmosphere, animals in the wild (although in most nations, animals are tied to the land; in the United States and Canada wildlife is generally defined in statute as property of the state, and this public ownership of wildlife is referred to as the North American Model of Wildlife Conservation and is based on the Public Trust Doctrine), celestial bodies and outer space, and land in Antarctica. The nature of children under the age of majority is another contested issue here. In ancient societies children were generally considered the property of their parents. Children in most modern societies theoretically own their own bodies but are not considered competent to exercise their rights, and their parents or guardians are given most of the actual rights of control over them. Questions regarding the nature of ownership of the body also come up in the issues of abortion, drugs and euthanasia. In many ancient legal systems (e.g. early Roman law), religious sites (e.g. temples) were considered property of the god or gods they were devoted to. However, religious pluralism makes it more convenient to have religious sites owned by the religious body that runs them. Intellectual property and air (airspace, no-fly zones, pollution laws, which can include tradable emissions rights) can be property in some senses of the word. Ownership of land can be held separately from the ownership of rights over that land, including sporting rights, mineral rights, development rights, air rights, and such other rights as may be worth segregating from simple land ownership. Who can be an owner? Ownership laws may vary widely among countries depending on the nature of the property of interest (e.g. firearms, real property, personal property, animals). Persons can own property directly. In most societies legal entities, such as corporations, trusts and nations (or governments), own property. In many countries women have limited access to property following restrictive inheritance and family laws, under which only men have actual or formal rights to own property. In the Inca empire, the dead emperors, who were considered gods, still controlled property after death. Whether and to what extent the state may interfere with property In 17th-century England, the legal directive that nobody may enter a home, which in the 17th century would typically have been male-owned, unless by the owner's invitation or consent, was established as common law in Sir Edward Coke's Institutes of the Lawes of England. "For a man's house is his castle, et domus sua cuique est tutissimum refugium [and each man's home is his safest refuge]." It is the origin of the famous dictum, "an Englishman's home is his castle".
The ruling enshrined into law what several English writers had espoused in the 16th century. Unlike the rest of Europe, the British had a proclivity towards owning their own homes. British Prime Minister William Pitt, 1st Earl of Chatham, defined the meaning of castle in 1763: "The poorest man may in his cottage bid defiance to all the forces of the crown. It may be frail – its roof may shake – the wind may blow through it – the storm may enter – the rain may enter – but the King of England cannot enter." This principle was exported to the United States; under U.S. law the principal limitations on whether and the extent to which the State may interfere with property rights are set by the Constitution. The "Takings" clause requires that the government (whether state or federal—for the 14th Amendment's due process clause imposes the 5th Amendment's takings clause on state governments) may take private property only for a public purpose, after exercising due process of law, and upon making "just compensation." If an interest is not deemed a "property" right or the conduct is merely an intentional tort, these limitations do not apply and the doctrine of sovereign immunity precludes relief. Moreover, if the interference does not render the property almost completely valueless, the interference will not be deemed a taking but instead a mere regulation of use. On the other hand, some governmental regulations of property use have been deemed so severe that they have been considered "regulatory takings." Moreover, conduct sometimes deemed only a nuisance or other tort has been held a taking of property where the conduct was sufficiently persistent and severe. Theories There exist many theories of property. One is the relatively rare first possession theory of property, where ownership of something is seen as justified simply by someone seizing something before someone else does. Perhaps one of the most popular is the natural rights definition of property rights as advanced by John Locke. Locke advanced the theory that God granted dominion over nature to man through Adam in the book of Genesis. Therefore, he theorized that when one mixes one's labor with nature, one gains a relationship with that part of nature with which the labor is mixed, subject to the limitation that there should be "enough, and as good, left in common for others" (see Lockean proviso). In the encyclical Rerum Novarum, Pope Leo XIII wrote: "It is surely undeniable that, when a man engages in remunerative labor, the impelling reason and motive of his work is to obtain property, and thereafter to hold it as his very own." Anthropology studies the diverse systems of ownership, rights of use and transfer, and possession under the term "theories of property." Western legal theory is based, as mentioned, on the owner of property being a legal person. However, not all property systems are founded on this basis. In every culture studied, ownership and possession are the subject of custom and regulation, and "law" where the term can meaningfully be applied. Many tribal cultures balance individual ownership with the laws of collective groups: tribes, families, associations and nations. For example, the 1839 Cherokee Constitution frames the issue in these terms:
Communal property systems describe ownership as belonging to the entire social and political unit.
Common ownership in a hypothetical communist society is distinguished from primitive forms of common property that have existed throughout history, such as communalism and primitive communism, in that communist common ownership is the outcome of social and technological developments leading to the elimination of material scarcity in society. Corporate systems describe ownership as being attached to an identifiable group with an identifiable responsible individual. Roman property law was based on such a corporate system. In a well-known paper that contributed to the creation of the field of law and economics in the late 1960s, the American scholar Harold Demsetz described how the concept of property rights makes social interactions easier:
Different societies may have different theories of property for differing types of ownership. Pauline Peters argued that property systems are not isolable from the social fabric, and notions of property may not be stated as such, but instead may be framed in negative terms: for example the taboo system among Polynesian peoples. Property in philosophy In medieval and Renaissance Europe, the term "property" essentially referred to land. After much rethinking, land has come to be regarded as only a special case of the property genus. This rethinking was inspired by at least three broad features of early modern Europe: the surge of commerce, the breakdown of efforts to prohibit interest (then called "usury"), and the development of centralized national monarchies. Ancient philosophy Urukagina, the king of the Sumerian city-state Lagash, established the first laws that forbade compelling the sale of property. The Bible, in Leviticus 19:11 and 19:13, states that the Israelites are not to steal. Aristotle, in Politics, advocates "private property." He argues that self-interest leads to neglect of the commons. "[T]hat which is common to the greatest number has the least care bestowed upon it. Every one thinks chiefly of his own, hardly at all of the common interest; and only when he is himself concerned as an individual." In addition he says that when property is common, there are natural problems that arise due to differences in labor: "If they do not share equally enjoyments and toils, those who labor much and get little will necessarily complain of those who labor little and receive or consume much. But indeed there is always a difficulty in men living together and having all human relations in common, but especially in their having common property." (Politics, 1261b34) Cicero held that there is no private property under natural law but only under human law. Seneca viewed property as only becoming necessary when men become avaricious. St. Ambrose later adopted this view, and St. Augustine even derided heretics for complaining that the Emperor could not confiscate property they had labored for. Medieval philosophy Thomas Aquinas (13th century) The canon law Decretum Gratiani maintained that mere human law creates property, repeating the phrases used by St. Augustine. St. Thomas Aquinas agreed with regard to the private consumption of property but modified patristic theory in finding that the private possession of property is necessary.
Thomas Aquinas concludes that, given certain detailed provisions:
it is natural for man to possess external things;
it is lawful for a man to possess a thing as his own;
the essence of theft consists in taking another's thing secretly;
theft and robbery are sins of different species, and robbery is a more grievous sin than theft;
theft is a sin; it is also a mortal sin;
it is, however, lawful to steal through stress of need: "in cases of need all things are common property."
Modern philosophy Thomas Hobbes (17th century) The principal writings of Thomas Hobbes appeared between 1640 and 1651—during and immediately following the war between forces loyal to King Charles I and those loyal to Parliament. In his own words, Hobbes' reflection began with the idea of "giving to every man his own," a phrase he drew from the writings of Cicero. But he wondered: How can anybody call anything his own? He concluded: My own can only truly be mine if there is one unambiguously strongest power in the realm, and that power treats it as mine, protecting its status as such. James Harrington (17th century) A contemporary of Hobbes, James Harrington, reacted to the same tumult in a different way: he considered property natural but not inevitable. The author of Oceana, he may have been the first political theorist to postulate that political power is a consequence, not the cause, of the distribution of property. He said that the worst possible situation is one in which the commoners have half a nation's property, with crown and nobility holding the other half—a circumstance fraught with instability and violence. A much better situation (a stable republic) will exist once the commoners own most property, he suggested. In later years, the ranks of Harrington's admirers included American revolutionary and founder John Adams. Robert Filmer (17th century) Another member of the Hobbes/Harrington generation, Sir Robert Filmer, reached conclusions much like Hobbes', but through Biblical exegesis. Filmer said that the institution of kingship is analogous to that of fatherhood, that subjects are but children, whether obedient or unruly, and that property rights are akin to the household goods that a father may dole out among his children—his to take back and dispose of according to his pleasure. John Locke (17th century) In the following generation, John Locke sought to answer Filmer, creating a rationale for a balanced constitution in which the monarch had a part to play, but not an overwhelming part. Since Filmer's views essentially require that the Stuart family be uniquely descended from the patriarchs of the Bible, and since even in the late 17th century that was a difficult view to uphold, Locke attacked Filmer's views in his First Treatise on Government, freeing him to set out his own views in the Second Treatise on Civil Government. Therein, Locke imagined a pre-social world, each of the unhappy residents of which are willing to create a social contract because otherwise "the enjoyment of the property he has in this state is very unsafe, very unsecure," and therefore the "great and chief end, therefore, of men's uniting into commonwealths, and putting themselves under government, is the preservation of their property." They would, he allowed, create a monarchy, but its task would be to execute the will of an elected legislature.
"To this end" (to achieve the previously specified goal), he wrote, "it is that men give up all their natural power to the society they enter into, and the community put the legislative power into such hands as they think fit, with this trust, that they shall be governed by declared laws, or else their peace, quiet, and property will still be at the same uncertainty as it was in the state of nature." Even when it keeps to proper legislative form, though, Locke held that there are limits to what a government established by such a contract might rightly do. "It cannot be supposed that [the hypothetical contractors] they should intend, had they a power so to do, to give any one or more an absolute arbitrary power over their persons and estates, and put a force into the magistrate's hand to execute his unlimited will arbitrarily upon them; this were to put themselves into a worse condition than the state of nature, wherein they had a liberty to defend their right against the injuries of others, and were upon equal terms of force to maintain it, whether invaded by a single man or many in combination. Whereas by supposing they have given up themselves to the absolute arbitrary power and will of a legislator, they have disarmed themselves, and armed him to make a prey of them when he pleases..." Note that both "persons and estates" are to be protected from the arbitrary power of any magistrate, inclusive of the "power and will of a legislator." In Lockean terms, depredations against an estate are just as plausible a justification for resistance and revolution as are those against persons. In neither case are subjects required to allow themselves to become prey. To explain the ownership of property Locke advanced a labor theory of property. David Hume (18th century) In contrast to the figures discussed in this section thus far David Hume lived a relatively quiet life that had settled down to a relatively stable social and political structure. He lived the life of a solitary writer until 1763 when, at 52 years of age, he went off to Paris to work at the British embassy. In contrast, one might think, to his polemical works on religion and his empiricism-driven skeptical epistemology, Hume's views on law and property were quite conservative. He did not believe in hypothetical contracts, or in the love of mankind in general, and sought to ground politics upon actual human beings as one knows them. "In general," he wrote, "it may be affirmed that there is no such passion in human mind, as the love of mankind, merely as such, independent of personal qualities, or services, or of relation to ourselves." Existing customs should not lightly be disregarded, because they have come to be what they are as a result of human nature. With this endorsement of custom comes an endorsement of existing governments, because he conceived of the two as complementary: "A regard for liberty, though a laudable passion, ought commonly to be subordinate to a reverence for established government." Therefore, Hume's view was that there are property rights because of and to the extent that the
existing law, supported by social customs, secures them. He offered some practical home-spun advice on the general subject, though, as when he referred to avarice as "the spur of industry," and expressed concern about excessive levels of taxation, which "destroy industry, by engendering despair." Adam Smith "The property which every man has in his own labour, as it is the original foundation of all other property, so it is the most sacred and inviolable. The patrimony of a poor man lies in the strength and dexterity of his hands; and to hinder him from employing this strength and dexterity in what manner he thinks proper without injury to his neighbour, is a plain violation of this most sacred property. It is a manifest encroachment upon the just liberty both of the workman, and of those who might be disposed to employ him. As it hinders the one from working at what he thinks proper, so it hinders the others from employing whom they think proper. To judge whether he is fit to be employed, may surely be trusted to the discretion of the employers whose interest it so much concerns. The affected anxiety of the law-giver lest they should employ an improper person, is evidently as impertinent as it is oppressive." — (Source: Adam Smith, The Wealth of Nations, 1776, Book I, Chapter X, Part II.) By the mid-19th century, the industrial revolution had transformed England and the United States, and had begun in France. The established conception of what constitutes property expanded beyond land to encompass scarce goods in general. In France, the revolution of the 1790s had led to large-scale confiscation of land formerly owned by church and king. The restoration of the monarchy led to claims by those dispossessed to have their former lands returned. Karl Marx Section VIII, "Primitive Accumulation", of Capital involves a critique of liberal theories of property rights. Marx notes that under feudal law, peasants were legally as entitled to their land as the aristocracy was to its manors. Marx cites several historical events in which large numbers of the peasantry were removed from their lands, which were then seized by the aristocracy. This seized land was then used for commercial ventures (sheep herding). Marx sees this "Primitive Accumulation" as integral to the creation of English capitalism. This event created a large landless class which had to work for wages in order to survive. Marx asserts that liberal theories of property are "idyllic" fairy tales that hide a violent historical process.
Charles Comte: legitimate origin of property Charles Comte, in Traité de la propriété (1834), attempted to justify the legitimacy of private property in response to the Bourbon Restoration. According to David Hart, Comte had three main points: "firstly, that interference by the state over the centuries in property ownership has had dire consequences for justice as well as for economic productivity; secondly, that property is legitimate when it emerges in such a way as not to harm anyone; and thirdly, that historically some, but by no means all, property which has evolved has done so legitimately, with the implication that the present distribution of property is a complex mixture of legitimately and illegitimately held titles." Comte, as Proudhon later did, rejected Roman legal tradition with its toleration of slavery. He posited a communal "national" property consisting of non-scarce goods, such as land in ancient hunter-gatherer societies. Since agriculture was so much more efficient than hunting and gathering, private property appropriated by someone for farming left remaining hunter-gatherers with more land per person, and hence did not harm them. Thus this type of land appropriation did not violate the Lockean proviso – there was "still enough, and as good left." Comte's analysis would be used by later theorists in response to the socialist critique of property. Pierre-Joseph Proudhon: property is theft In his 1840 treatise What is Property?, Pierre Proudhon answers with "Property is theft!" In natural resources, he sees two types of property, de jure property (legal title) and de facto property (physical possession), and argues that the former is illegitimate. Proudhon's conclusion is that "property, to be just and possible, must necessarily have equality for its condition." His analysis of the product of labor upon natural resources as property (usufruct) is more nuanced. He asserts that land itself cannot be property, yet it should be held by individual possessors as stewards of mankind with the product of labor being the property of the producer. Proudhon reasoned that any wealth gained without labor was stolen from those who labored to create that wealth. Even a voluntary contract to surrender the product of labor to an employer was theft, according to Proudhon, since the controller of natural resources had no moral right to charge others for the use of that which he did not labor to create and therefore did not own. Proudhon's theory of property greatly influenced the budding socialist movement, inspiring anarchist theorists such as Mikhail Bakunin, who modified Proudhon's ideas, as well as antagonizing theorists like Karl Marx. Frédéric Bastiat: property is value Frédéric Bastiat's main treatise on property can be found in chapter 8 of his book Economic Harmonies (1850). In a radical departure from traditional property theory, he defines property not as a physical object, but rather as a relationship between people with respect to an object. Thus, saying one owns a glass of water is merely verbal shorthand for "I may justly gift or trade this water to another person." In essence, what one owns is not the object but the value of the object. By "value," Bastiat apparently means market value; he emphasizes that this is quite different from utility. "In our relations with one another, we are not owners of the utility of things, but of their value, and value is the appraisal made of reciprocal services."
Bastiat theorized that, as a result of technological progress and the division of labor, the stock of communal wealth increases over time; that the hours of work an unskilled laborer expends to buy e.g. 100 liters of wheat decrease over time, thus amounting to "gratis" satisfaction. Thus, private property continually destroys itself, becoming transformed into communal wealth. The increasing proportion of communal wealth to private property results in a tendency toward equality of mankind. "Since the human race started from the point of greatest poverty, that is, from the point where there were the most obstacles to be overcome, it is clear that all that has been gained from one era to the next has been due to the spirit of property." This transformation of private property into the communal domain, Bastiat points out, does not imply that private property will ever totally disappear. This is because man, as he progresses, continually invents new and more sophisticated needs and desires. Andrew J. Galambos: a precise definition of property Andrew J. Galambos (1924–1997) was an astrophysicist and philosopher who devised a social structure that seeks to maximize human peace and freedom. Galambos' concept of property was basic to his philosophy. He defined property as a man's life and all non-procreative derivatives of his life. (Because the English language lacks a gender-neutral term for humankind, it is implicit and obligatory that the feminine is included in the term "man".) Galambos taught that property is essential to a non-coercive social structure. That is why he defined freedom as follows: "Freedom is the societal condition that exists when every individual has full (100%) control over his own property." Galambos defines property as having the following elements:
Primordial property, which is an individual's life;
Primary property, which includes ideas, thoughts, and actions;
Secondary property, which includes all tangible and intangible possessions which are derivatives of the individual's primary property.
Property includes all non-procreative derivatives of an individual's life; this means children are not the property of their parents. Galambos emphasized repeatedly that true government exists to protect property and that the state attacks property. For example, the state requires payment for its services in the form of taxes whether or not people desire such services. Since an individual's money is his property, the confiscation of money in the form of taxes is an attack on property. Military conscription is likewise an attack on a person's primordial property. Contemporary views Contemporary political thinkers who believe that natural persons enjoy rights to own property and to enter into contracts espouse two views about John Locke. On the one hand, some admire Locke, such as William H. Hutt (1956), who praised Locke for laying down the "quintessence of individualism". On the other hand, those such as Richard Pipes regard Locke's arguments as weak, and think that undue reliance thereon has weakened the cause of individualism in recent times. Pipes has written that Locke's work "marked a regression because it rested on the concept of Natural Law" rather than upon Harrington's sociological framework.
Hernando de Soto has argued that an important characteristic of a capitalist market economy is functioning state protection of property rights in a formal property system which clearly records ownership and transactions. These property rights and the whole formal system of property make possible:
Greater independence for individuals from local community arrangements to protect their assets
Clear, provable, and protectable ownership
The standardization and integration of property rules and property information in a country as a whole
Increased trust arising from a greater certainty of punishment for cheating in economic transactions
More formal and complex written statements of ownership that permit the easier assumption of shared risk and ownership in companies, and insurance against risk
Greater availability of loans for new projects, since more things can serve as collateral for the loans
Easier access to and more reliable information regarding such things as credit history and the worth of assets
Increased fungibility, standardization and transferability of statements documenting the ownership of property, which paves the way for structures such as national markets for companies and the easy transportation of property through complex networks of individuals and other entities
Greater protection of biodiversity due to minimizing of shifting agriculture practices
All of the above, according to de Soto, enhance economic growth. Academics have criticised the capitalist frame through which property is viewed, pointing to the fact that commodifying property or land by assigning it monetary value takes away from traditional cultural heritage, particularly from First Nations inhabitants. These academics argue that the personal nature of property and its link to identity are irreconcilable with the model of wealth creation that contemporary Western society subscribes to.
See also
Allemansrätten
Anarchism
Binary economics
Buying agent
Capitalism
Communism
Homestead principle
Immovable Property
Inclusive Democracy
International Property Rights Index
Labor theory of property
Libertarianism
Lien
Off plan
Ownership society
Patrimony
Personal property
Propertarian
Property is theft
Property law
Property rights (economics)
Socialism
Sovereignty
Taxation as theft
Interpersonal relationship
Public liability
Property-giving (legal): Charity, Essenes, Gift, Kibbutz, Monasticism, Tithe, Zakat (modern sense)
Property-taking (legal): Adverse possession, Confiscation, Eminent domain, Fine, Jizya, Nationalization, Regulatory fees and costs, Search and seizure, Tariff, Tax, Turf and twig (historical), Tithe, Zakat
beneath them were samurai serving as police officials, who were responsible for patrolling the streets, keeping the peace, and making arrests when necessary. These officials were typically drawn from low-ranking samurai families. This system typically did not apply to the samurai themselves. Samurai clans were expected to resolve disputes among each other through negotiation, or when that failed through duels. Only rarely did samurai bring their disputes to a magistrate or answer to police. Assisting them were non-samurai assistants who went on patrol with them and provided support; non-samurai from the lowest outcast class, often former criminals, who worked for them as informers and spies; and chōnin, often former criminals, who were hired by local residents and merchants to work as police assistants in a particular neighborhood. Botsman, Dani (2005). Punishment and Power in the Making of Modern Japan. Princeton University Press. ISBN 9780691114910, p. 94. In Sweden, local governments were responsible for law and order by way of a royal decree issued by Magnus III in the 13th century. The cities financed and organized groups of watchmen who patrolled the streets. In the late 1500s in Stockholm, patrol duties were in large part taken over by a special corps of salaried city guards. The city guard was organized, uniformed and armed like a military unit and was responsible for interventions against various crimes and the arrest of suspected criminals. These guards were assisted by the military, fire patrolmen, and a civilian unit that did not wear a uniform, but instead wore a small badge around the neck. The civilian unit monitored compliance with city ordinances relating to e.g. sanitation issues, traffic and taxes. In rural areas, the King's bailiffs were responsible for law and order until the establishment of counties in the 1630s. Bergsten, Magnus; Furuhagen, Björn (2 March 2002). "Ordning på stan". Populär Historia (in Swedish). Retrieved 17 August 2015. Up to the early 18th century, the level of state involvement in law enforcement in Britain was low. Although some law enforcement officials existed in the form of constables and watchmen, there was no organized police force. A professional police force like the one already present in France would have been ill-suited to Britain, which regarded such a force as a threat to the people's liberty and balanced constitution and as an instrument of arbitrary and tyrannical government. Law enforcement was mostly left to private citizens, who had the right and duty to prosecute crimes, whether or not they were involved in them. At the cry of 'murder!' or 'stop thief!' everyone was entitled and obliged to join the pursuit. Once the criminal had been apprehended, the parish constables and night watchmen, who were the only public figures provided by the state and who were typically part-time and local, would make the arrest. As a result, the state set a reward to encourage citizens to arrest and prosecute offenders. The first such reward was established in 1692, in the amount of £40 for the conviction of a highwayman, and in the following years it was extended to burglars, coiners and other offenders. The reward was increased in 1720 when, after the end of the War of the Spanish Succession and the consequent rise in criminal offenses, the government offered £100 for the conviction of a highwayman.
Although the offer of such a reward was conceived as an incentive for the victims of an offense to proceed with prosecution and bring criminals to justice, the government's efforts also increased the number of private thief-takers. Thief-takers became infamously known not so much for what they were supposed to do, catching real criminals and prosecuting them, as for "setting themselves up as intermediaries between victims and their attackers, extracting payments for the return of stolen goods and using the threat of prosecution to keep offenders in thrall". Some of them, such as Jonathan Wild, became infamous at the time for staging robberies in order to receive the reward ("Browse - Central Criminal Court". Oldbaileyonline.org). In 1737, George II began paying some London and Middlesex watchmen with tax monies, beginning the shift to government control. In 1749, Judge Henry Fielding began organizing a force of quasi-professional constables known as the Bow Street Runners. The Bow Street Runners are considered to have been Britain's first dedicated police force. They represented a formalization and regularization of existing policing methods, similar to the unofficial 'thief-takers'. What made them different was their formal attachment to the Bow Street magistrates' office, and payment by the magistrate with funds from central government. They worked out of Fielding's office and court at No. 4 Bow Street, and did not patrol but served writs and arrested offenders on the authority of the magistrates, travelling nationwide to apprehend criminals. Fielding wanted to regulate and legalize law enforcement activities because of the high rate of corruption and mistaken or malicious arrests seen under the system that depended mainly on private citizens and state rewards for law enforcement. Henry Fielding's work was carried on by his brother, Justice John Fielding, who succeeded him as magistrate in the Bow Street office. Under John Fielding, the institution of the Bow Street Runners gained more and more recognition from the government, although the force was only funded intermittently in the years that followed. In 1763, the Bow Street Horse Patrol was established to combat highway robbery, funded by a government grant. The Bow Street Runners served as the guiding principle for the way that policing developed over the next 80 years. Bow Street was a manifestation of the move towards increasing professionalisation and state control of street life, beginning in London. The Macdaniel affair, a 1754 British political scandal in which a group of thief-takers was found to be falsely prosecuting innocent men in order to collect reward money from bounties, added further impetus for a publicly salaried police force that did not depend on rewards. Nonetheless, in 1828 there were privately financed police units in no fewer than 45 parishes within a 10-mile radius of London. The word police was borrowed from French into the English language in the 18th century, but for a long time it applied only to French and continental European police forces. The word, and the concept of police itself, were "disliked as a symbol of foreign oppression". Before the 19th century, the first use of the word police recorded in government documents in the United Kingdom was the appointment of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798.
Modern Scotland and Ireland Following early police forces established in 1779 and 1788 in Glasgow, Scotland, the Glasgow authorities successfully petitioned the government to pass the Glasgow Police Act establishing the City of Glasgow Police in 1800. Other Scottish towns soon followed suit and set up their own police forces through acts of parliament. In Ireland, the Irish Constabulary Act of 1822 marked the beginning of the Royal Irish Constabulary. The Act established a force in each barony with chief constables and inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over 8,600 men. London In 1797, Patrick Colquhoun was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames to establish a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of cargo. The idea of a police force, as it then existed in France, was considered a potentially undesirable foreign import. In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to the principle of the British constitution". Moreover, he went so far as to praise the French system, which had reached "the greatest degree of perfection" in his estimation. With an initial investment of £4,200, the new force, the Marine Police, began with about 50 men charged with policing 33,000 workers in the river trades, of whom Colquhoun claimed 11,000 were known criminals and "on the game". The force was part funded by the London Society of West India Planters and Merchants. The force was a success after its first year, and his men had "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives". Word of this success spread quickly, and the government passed the Depredations on the Thames Act 1800 on 28 July 1800, establishing a fully funded police force, the Thames River Police, together with new laws including police powers; it is now the oldest police force in the world. Colquhoun published a book on the experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired similar forces in other cities, notably New York City, Dublin, and Sydney. Colquhoun's utilitarian approach to the problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve what Henry and John Fielding had failed to achieve for their Bow Street detectives. Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees. His other contribution was the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames. Colquhoun's innovations were a critical development leading up to Robert Peel's "new" police three decades later. Metropolitan London was fast reaching a size unprecedented in world history, due to the onset of the Industrial Revolution. It became clear that the locally maintained system of volunteer constables and "watchmen" was ineffective, both in detecting and preventing crime. A parliamentary committee was appointed to investigate the system of policing in London.
When Sir Robert Peel was appointed Home Secretary in 1822, he established a second and more effective committee, and acted upon its findings. Royal assent was given to the Metropolitan Police Act 1829, and the Metropolitan Police Service was established on September 29, 1829, in London. Peel, widely regarded as the father of modern policing, was heavily influenced by the social and legal philosophy of Jeremy Bentham, who called for a strong and centralised, but politically neutral, police force for the maintenance of social order, for the protection of people from crime, and to act as a visible deterrent to urban crime and disorder. Peel decided to standardise the police force as an official paid profession, to organise it in a civilian fashion, and to make it answerable to the public. Due to public fears concerning the deployment of the military in domestic matters, Peel organised the force along civilian rather than paramilitary lines. To appear neutral, the uniform was deliberately manufactured in blue rather than red, which was then a military colour, and officers were armed only with a wooden truncheon and a rattle to signal the need for assistance. In addition, police ranks did not include military titles, with the exception of Sergeant. To distance the new police force from the initial public view of it as a new tool of government repression, Peel publicised the so-called Peelian principles, which set down basic guidelines for ethical policing: whether the police are effective is measured not by the number of arrests but by the deterrence of crime, and, above all else, an effective authority figure knows that trust and accountability are paramount. Hence Peel's most often quoted principle: "The police are the public and the public are the police." The 1829 Metropolitan Police Act created a modern police force by limiting the purview of the force and its powers, and envisioning it as merely an organ of the judicial system. Their job was apolitical: to maintain the peace and apprehend criminals for the courts to process according to the law. This was very different from the "continental model" of the police force that had been developed in France, where the police force worked within the parameters of the absolutist state as an extension of the authority of the monarch and functioned as part of the governing state. In 1863, the Metropolitan Police were issued with the distinctive custodian helmet, and in 1884 they switched from rattles to whistles that could be heard from much further away. The Metropolitan Police became a model for the police forces in many countries, including the United States and most of the British Empire. Bobbies can still be found in many parts of the Commonwealth of Nations. Australia In Australia, organized law enforcement emerged soon after British colonization began in 1788. The first law enforcement organizations were the Night Watch and Row Boat Guard, which were formed in 1789 to police Sydney. Their ranks were drawn from well-behaved convicts deported to Australia. The Night Watch was replaced by the Sydney Foot Police in 1790. In New South Wales, rural law enforcement officials were appointed by local justices of the peace during the early to mid 19th century, and were referred to as "bench police" or "benchers". A mounted police force was formed in 1825. The first police force having centralised command as well as jurisdiction over an entire colony was the South Australia Police, formed in 1838 under Henry Inman.
However, whilst the New South Wales Police Force was established in 1862, it was made up from a large number of policing and military units operating within the then Colony of New South Wales, and it traces its links back to the Royal Marines. The passing of the Police Regulation Act of 1862 tightly regulated and centralised all of the police forces operating throughout the Colony of New South Wales. Each Australian state and territory maintains its own police force, while the Australian Federal Police enforces laws at the federal level. The New South Wales Police Force remains the largest police force in Australia in terms of personnel and physical resources. It is also the only police force that requires its recruits to undertake university studies at the recruit level, with recruits paying for their own education. Brazil In 1566, the first police investigator of Rio de Janeiro was recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July 9, 1775, a Cavalry Regiment was created in the state of Minas Gerais for maintaining law and order. In 1808, the Portuguese royal family relocated to Brazil because of the French invasion of Portugal. King João VI established the ('General Police Intendancy') for investigations. He also created a Royal Police Guard for Rio de Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police", with order maintenance tasks. The Federal Railroad Police was created in 1852, the Federal Highway Police was established in 1928, and the Federal Police in 1967. Canada During the early days of English and French colonization, municipalities hired watchmen and constables to provide security. Established in 1729, the Royal Newfoundland Constabulary (RNC) was the first policing service founded in Canada. The establishment of modern policing services in the Canadas occurred during the 1830s, modelling their services after the London Metropolitan Police and adopting the ideas of the Peelian principles. The Toronto Police Service was established in 1834, whereas the Service de police de la Ville de Québec was established in 1840. A national police service, the Dominion Police, was founded in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. In 1870, Rupert's Land and the North-Western Territory were incorporated into the country. In an effort to police its newly acquired territory, the Canadian government established the North-West Mounted Police in 1873 (renamed the Royal North-West Mounted Police in 1904). In 1920, the Dominion Police and the Royal Northwest Mounted Police were amalgamated into the Royal Canadian Mounted Police (RCMP). The RCMP provides federal law enforcement, as well as law enforcement in eight provinces and all three territories. The provinces of Ontario and Quebec maintain their own provincial police forces, the Ontario Provincial Police (OPP) and the Sûreté du Québec (SQ). Policing in Newfoundland and Labrador is provided by the RCMP and the RNC. The aforementioned services also provide municipal policing, although larger Canadian municipalities may establish their own police services. Lebanon In Lebanon, the current police force was established in 1861, with the creation of the Gendarmerie. India In India, the police are under the control of the respective states and union territories and are known to be under the State Police Services (SPS).
The candidates selected for the SPS are usually posted as Deputy Superintendent of Police or Assistant Commissioner of Police once their probationary period ends. After a prescribed period of satisfactory service in the SPS, the officers are nominated to the Indian Police Service. The service color is usually dark blue and red, while the uniform color is khaki. United States In Colonial America, the county sheriff was the most important law enforcement official. For instance, the New York Sheriff's Office was founded in 1626, and the Albany County Sheriff's Department in the 1660s. The county sheriff, who was an elected official, was responsible for enforcing laws, collecting taxes, supervising elections, and handling the legal business of the county government. Sheriffs would investigate crimes and make arrests after citizens filed complaints or provided information about a crime, but did not carry out patrols or otherwise take preventative action. Villages and cities typically hired constables and marshals, who were empowered to make arrests and serve warrants. Many municipalities also formed a night watch, or group of citizen volunteers who would patrol the streets at night looking for crime or fires. Typically, constables and marshals were the main law enforcement officials available during the day, while the night watch would serve during the night. Eventually, municipalities formed day watch groups. Rioting was handled by local militias. In the 1700s, the Province of Carolina (later North and South Carolina) established slave patrols in order to prevent slave rebellions and enslaved people from escaping. By 1785 the Charleston Guard and Watch had "a distinct chain of command, uniforms, sole responsibility for policing, salary, authorized use of force, and a focus on preventing crime." In 1789 the United States Marshals Service was established, followed by other federal services such as the U.S. Parks Police (1791) and U.S. Mint Police (1792). The first city police services were established in Philadelphia in 1751, Richmond, Virginia in 1807, Boston in 1838, and New York in 1845. The U.S. Secret Service was founded in 1865 and was for some time the main investigative body for the federal government. In the American Old West, law enforcement was carried out by local sheriffs, rangers, constables, and federal marshals. There were also town marshals responsible for serving civil and criminal warrants, maintaining the jails, and carrying out arrests for petty crime. In recent years, in addition to federal, state, and local forces, some special districts have been formed to provide extra police protection in designated areas. These districts may be known as neighborhood improvement districts, crime prevention districts, or security districts. Development of theory Michel Foucault wrote that the contemporary concept of police as a paid and funded functionary of the state was developed by German and French legal scholars and practitioners in public administration and statistics in the 17th and early 18th centuries, most notably with Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. The German Polizeiwissenschaft (Science of Police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work, known as Cameral science, on the formulation of police.
Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), in which a substantial bibliography of over 4,000 works on the practice of Polizeiwissenschaft is noted. However, this may be a mistranslation of Foucault's own work, since Humpert's actual source states that over 14,000 items were produced, with dates ranging from 1520 to 1850. As conceptualized by the Polizeiwissenschaft, according to Foucault the police had an administrative, economic and social duty ("procuring abundance"). It was in charge of demographic concerns and needed to be incorporated within the western political philosophy system of raison d'état, thereby giving the superficial appearance of empowering the population (while unwittingly supervising it), which, according to mercantilist theory, was to be the main strength of the state. Thus, its functions extended well beyond simple law enforcement activities and included public health concerns, urban planning (which was important because of the miasma theory of disease; thus, cemeteries were moved out of town, etc.), and surveillance of prices. The concept of preventive policing, or policing to deter crime from taking place, gained influence in the late 18th century. Police Magistrate John Fielding, head of the Bow Street Runners, argued that "...it is much better to prevent even one man from being a rogue than apprehending and bringing forty to justice." The Utilitarian philosopher Jeremy Bentham promoted the views of the Italian Marquis Cesare Beccaria, and disseminated a translated version of his "Essay on Crimes and Punishments". Bentham espoused the guiding principle of "the greatest good for the greatest number": It is better to prevent crimes than to punish them. This is the chief aim of every good system of legislation, which is the art of leading men to the greatest possible happiness or to the least possible misery, according to calculation of all the goods and evils of life. Patrick Colquhoun's influential work, A Treatise on the Police of the Metropolis (1797), was heavily influenced by Benthamite thought. Colquhoun's Thames River Police was founded on these principles and, in contrast to the Bow Street Runners, acted as a deterrent by its continual presence on the riverfront, in addition to being able to intervene if its officers spotted a crime in progress. Edwin Chadwick's 1829 article, "Preventive police" in the London Review, argued that prevention ought to be the primary concern of a police body, which was not the case in practice. The reason, argued Chadwick, was that "A preventive police would act more immediately by placing difficulties in obtaining the objects of temptation." In contrast to a deterrent of punishment, a preventive police force would deter criminality by making crime cost-ineffective – "crime doesn't pay". In the second draft of his 1829 Police Act, the "object" of the new Metropolitan Police was changed by Robert Peel to the "principal object," which was the "prevention of crime." Later historians would attribute the perception of England's "appearance of orderliness and love of public order" to the preventive principle entrenched in Peel's police system. The development of modern police forces around the world was contemporary with the formation of the state, later defined by sociologist Max Weber as achieving a "monopoly on the legitimate use of physical force", which was exercised primarily by the police and the military.
Marxist theory situates the development of the modern state as part of the rise of capitalism, in which the police are one component of the bourgeoisie's repressive apparatus for subjugating the working class. By contrast, the Peelian principles argue that "the power of the police...is dependent on public approval of their existence, actions and behavior", a philosophy known as policing by consent. Personnel and organization Police forces include both preventive (uniformed) police and detectives. Terminology varies from country to country. Police functions include protecting life and property, enforcing criminal law, criminal investigations, regulating traffic, crowd control, public safety duties, civil defense, emergency management, searching for missing persons, lost property and other duties concerned with public order. Regardless of size, police forces are generally organized as a hierarchy with multiple ranks. The exact structures and the names of rank vary considerably by country. Uniformed The police who wear uniforms make up the majority of a police service's personnel. Their main duty is to respond to calls to the emergency telephone number. When not responding to these call-outs, they will do work aimed at preventing crime, such as patrols. The uniformed police are known by varying names such as preventive police, the uniform branch/division, administrative police, order police, the patrol bureau/division or patrol. In Australia and the United Kingdom, patrol personnel are also known as "general duties" officers. Atypically, Brazil's preventive police are known as Military Police. As implied by the name, uniformed police wear uniforms. They perform functions that require an immediate recognition of an officer's legal authority and a potential need for force. Most commonly this means intervening to stop a crime in progress and securing the scene of a crime that has already happened. Besides dealing with crime, these officers may also manage and monitor traffic, carry out community policing duties, maintain order at public events or carry out searches for missing people (in 2012, the latter accounted for 14% of police time in the United Kingdom). As most of these duties must be available as a 24/7 service, uniformed police are required to do shift work. Detectives Police detectives are responsible for investigations and detective work. Detectives may be called Investigations Police, Judiciary/Judicial Police, and Criminal Police. In the UK, they are often referred to by the name of their department, the Criminal Investigation Department (CID). Detectives typically make up roughly 15–25% of a police service's personnel. Detectives, in contrast to uniformed police, typically wear 'business attire' in bureaucratic and investigative functions where a uniformed presence would be either a distraction or intimidating, but a need to establish police authority still exists. "Plainclothes" officers dress in attire consistent with that worn by the general public for purposes of blending in. In some cases, police are assigned to work "undercover", where they conceal their police identity to investigate crimes, such as organized crime or narcotics crime, that are unsolvable by other means. In some cases this type of policing shares aspects with espionage. The relationship between detective and uniformed branches varies by country. In the United States, there is high variation within the country itself. 
Many US police departments require detectives to spend some time on temporary assignments in the patrol division. The argument is that rotating officers through patrol helps detectives better understand the work of uniformed officers, promotes cross-training in a wider variety of skills, and prevents "cliques" that can contribute to corruption or other unethical behavior. Conversely, some countries regard detective work as being an entirely separate profession, with detectives working in separate agencies and recruited without having to serve in uniform. A common compromise in English-speaking countries is that most detectives are recruited from the uniformed branch, but once qualified they tend to spend the rest of their careers in the detective branch. Another point of variation is whether detectives have extra status.
Police personnel may be referred to as police officers, troopers, sheriffs, constables, rangers, peace officers or civic/civil guards. Ireland differs from other English-speaking countries by using the Irish language terms Garda (singular) and Gardaí (plural), for both the national police force and its members. The word police is the most universal, and similar terms can be seen in many non-English-speaking countries. Numerous slang terms exist for the police. Many slang terms for police officers are decades or centuries old with lost etymologies. One of the oldest, cop, has largely lost its slang connotations and become a common colloquial term used both by the public and police officers to refer to their profession. Etymology First attested in English in the early 15th century, originally in a range of senses encompassing '(public) policy; state; public order', the word police comes from Middle French police ('public order, administration, government'), in turn from Latin politia, which is the romanization of the Ancient Greek πολιτεία (politeia) 'citizenship, administration, civil polity'. This is derived from πόλις (polis) 'city'. History Ancient China Law enforcement in ancient China was carried out by "prefects" for thousands of years since it developed in both the Chu and Jin kingdoms of the Spring and Autumn period. In Jin, dozens of prefects were spread across the state, each having limited authority and a limited period of employment. They were appointed by local magistrates, who reported to higher authorities such as governors, who in turn were appointed by the emperor, and they oversaw the civil administration of their "prefecture", or jurisdiction. Under each prefect were "subprefects" who helped collectively with law enforcement in the area. Some prefects were responsible for handling investigations, much like modern police detectives. Prefects could also be women. Local citizens could report minor judicial offenses, such as robberies, at a local prefectural office. The concept of the "prefecture system" spread to other cultures such as Korea and Japan. Babylonia In Babylonia, law enforcement tasks were initially entrusted to individuals with military backgrounds or imperial magnates during the Old Babylonian period, but eventually, law enforcement was delegated to officers known as , who were present in both cities and rural settlements. A was responsible for investigating petty crimes and carrying out arrests. Egypt In ancient Egypt, evidence of law enforcement exists as far back as the Old Kingdom period. There are records of an office known as "Judge Commandant of the Police" dating to the fourth dynasty. During the fifth dynasty at the end of the Old Kingdom period, officers armed with wooden sticks were tasked with guarding public places such as markets, temples, and parks, and apprehending criminals. They are known to have made use of trained monkeys, baboons, and dogs in guard duties and catching criminals. After the Old Kingdom collapsed, ushering in the First Intermediate Period, it is thought that the same model applied. During this period, Bedouins were hired to guard the borders and protect trade caravans. During the Middle Kingdom period, a professional police force was created with a specific focus on enforcing the law, as opposed to the previous informal arrangement of using warriors as police. The police force was further reformed during the New Kingdom period. Police officers served as interrogators, prosecutors, and court bailiffs, and were responsible for administering punishments handed down by judges.
In addition, there were special units of police officers trained as priests who were responsible for guarding temples and tombs and preventing inappropriate behavior at festivals or improper observation of religious rites during services. Other police units were tasked with guarding caravans, guarding border crossings, protecting royal necropolises, guarding slaves at work or during transport, patrolling the Nile River, and guarding administrative buildings. By the Eighteenth Dynasty of the New Kingdom period, an elite desert-ranger police force called the Medjay was used to protect valuable areas, especially areas of pharaonic interest like capital cities, royal cemeteries, and the borders of Egypt. Though they are best known for their protection of the royal palaces and tombs in Thebes and the surrounding areas, the Medjay were used throughout Upper and Lower Egypt. Each regional unit had its own captain. The police forces of ancient Egypt did not guard rural communities, which often took care of their own judicial problems by appealing to village elders, but many of them had a constable to enforce state laws. Greece In ancient Greece, publicly owned slaves were used by magistrates as police. In Athens, the Scythian Archers (the 'rod-bearers'), a group of about 300 Scythian slaves, was used to guard public meetings to keep order and for crowd control, and also assisted with dealing with criminals, handling prisoners, and making arrests. Other duties associated with modern policing, such as investigating crimes, were left to the citizens themselves. Athenian police forces were supervised by the Areopagus. In Sparta, the Ephors were in charge of maintaining public order as judges, and they used Sparta's Hippeis, a 300-member royal guard of honor, as their enforcers. There were separate authorities supervising women, children, and agricultural issues. Sparta also had a secret police force called the crypteia to watch the large population of helots, or slaves. Rome In the Roman Empire, the army played a major role in providing security. Roman soldiers detached from their legions and posted among civilians carried out law enforcement tasks. Local watchmen were hired by cities to provide some extra security. Magistrates such as and investigated crimes. There was no concept of public prosecution, so victims of crime or their families had to organize and manage the prosecution themselves. Under the reign of Augustus, when the capital had grown to almost one million inhabitants, 14 wards were created; the wards were protected by seven squads of 1,000 men called , who acted as night watchmen and firemen. Their duties included apprehending petty criminals, capturing runaway slaves, guarding the baths at night, and stopping disturbances of the peace. The primarily dealt with petty crime, while violent crime, sedition, and rioting was handled by the Urban Cohorts and even the Praetorian Guard if necessary, though the vigiles could act in a supporting role in these situations. India Law enforcement systems existed in the various kingdoms and empires of ancient India. The Apastamba Dharmasutra prescribes that kings should appoint officers and subordinates in the towns and villages to protect their subjects from crime. Various inscriptions and literature from ancient India suggest that a variety of roles existed for law enforcement officials such as those of a constable, thief catcher, watchman, and detective. 
In ancient India up to medieval and early modern times, kotwals were in charge of local law enforcement. Persian Empire The Persian Empire had well-organized police forces. A police force existed in every place of importance. In the cities, each ward was under the command of a Superintendent of Police, known as a , who was expected to command implicit obedience in his subordinates. Police officers also acted as prosecutors and carried out punishments imposed by the courts. They were required to know the court procedure for prosecuting cases and advancing accusations. Israel In ancient Israel and Judah, officials with the responsibility of making declarations to the people, guarding the king's person, supervising public works, and executing the orders of the courts existed in the urban areas. They are repeatedly mentioned in the Hebrew Bible, and this system lasted into the period of Roman rule. The first century Jewish historian Josephus related that every judge had two such officers under his command. Levites were preferred for this role. Cities and towns also had night watchmen. Besides officers of the town, there were officers for every tribe. The temple in Jerusalem had special temple police to guard it. The Talmud mentions various local police officials in the Jewish communities of the Land of Israel and Babylon who supervised economic activity. Their Greek-sounding titles suggest that the roles were introduced under Hellenic influence. Most of these officials received their authority from local courts and their salaries were drawn from the town treasury. The Talmud also mentions city watchmen and mounted and armed watchmen in the suburbs. Africa In many regions of pre-colonial Africa, particularly West and Central Africa, guild-like secret societies emerged as law enforcement. In the absence of a court system or written legal code, they carried out police-like activities, employing varying degrees of coercion to enforce conformity and deter antisocial behavior. In ancient Ethiopia, armed retainers of the nobility enforced law in the countryside according to the will of their leaders. The Songhai Empire had officials known as assara-munidios, or "enforcers", acting as police. The Americas Pre-Columbian civilizations in the Americas also had organized law enforcement. The city-states of the Maya civilization had constables known as , as well as bailiffs. In the Aztec Empire, judges had officers serving under them who were empowered to perform arrests, even of dignitaries. In the Inca Empire, officials called enforced the law among the households they were assigned to oversee, with inspectors known as () also stationed throughout the provinces to keep order. Post-classical In medieval Spain, , or 'holy brotherhoods', peacekeeping associations of armed individuals, were a characteristic of municipal life, especially in Castile. As medieval Spanish kings often could not offer adequate protection, protective municipal leagues began to emerge in the twelfth century against banditry and other rural criminals, and against the lawless nobility or to support one or another claimant to a crown. These organizations were intended to be temporary, but became a long-standing fixture of Spain. The first recorded case of the formation of an occurred when the towns and the peasantry of the north united to police the pilgrim road to Santiago de Compostela in Galicia, and protect the pilgrims against robber knights. 
Throughout the Middle Ages such alliances were frequently formed by combinations of towns to protect the roads connecting them, and were occasionally extended to political purposes. Among the most powerful was the league of North Castilian and Basque ports, the Hermandad de las marismas: Toledo, Talavera, and Villarreal. As one of their first acts after the end of the War of the Castilian Succession in 1479, Ferdinand II of Aragon and Isabella I of Castile established the centrally-organized and efficient Holy Brotherhood as a national police force. They adapted an existing brotherhood to the purpose of a general police acting under officials appointed by themselves, and endowed with great powers of summary jurisdiction even in capital cases. The original brotherhoods continued to serve as modest local police units until their final suppression in 1835. The Vehmic courts of Germany provided some policing in the absence of strong state institutions. Such courts had a chairman who presided over a session and lay judges who passed judgement and carried out law enforcement tasks. Among the responsibilities that lay judges had were giving formal warnings to known troublemakers, issuing warrants, and carrying out executions. In the medieval Islamic Caliphates, police were known as . Bodies termed existed perhaps as early as the Caliphate of Uthman. The Shurta is known to have existed in the Abbasid and Umayyad Caliphates. Their primary roles were to act as police and internal security forces, but they could also be used for other duties such as customs and tax enforcement, rubbish collection, and acting as bodyguards for governors. From the 10th century, the importance of the Shurta declined as the army assumed internal security tasks while cities became more autonomous and handled their own policing needs locally, such as by hiring watchmen. In addition, officials called were responsible for supervising bazaars and economic activity in general in the medieval Islamic world. In France during the Middle Ages, there were two Great Officers of the Crown of France with police responsibilities: the Marshal of France and the Grand Constable of France. The military policing responsibilities of the Marshal of France were delegated to the Marshal's provost, whose force was known as the Marshalcy because its authority ultimately derived from the Marshal. The marshalcy dates back to the Hundred Years' War, and some historians trace it back to the early 12th century. Another organisation, the Constabulary (), was under the command of the Constable of France. The constabulary was regularised as a military body in 1337. Under Francis I (reigned 1515–1547), the Maréchaussée was merged with the Constabulary. The resulting force was also known as the , or, formally, the Constabulary and Marshalcy of France. The English system of maintaining public order since the Norman conquest was a private system of tithings known as the mutual pledge system. This system was introduced under Alfred the Great. Communities were divided into groups of ten families called tithings, each of which was overseen by a chief tithingman. Every household head was responsible for the good behavior of his own family and the good behavior of other members of his tithing. Every male aged 12 and over was required to participate in a tithing. Members of tithings were responsible for raising the "hue and cry" upon witnessing or learning of a crime, and the men of the tithing were responsible for capturing the criminal.
The person the tithing captured would then be brought before the chief tithingman, who would determine guilt or innocence and punishment. All members of the criminal's tithing would be responsible for paying the fine. A group of ten tithings was known as a "hundred" and every hundred was overseen by an official known as a reeve. Hundreds ensured that if a criminal escaped to a neighboring village, he could be captured and returned to his village. If a criminal was not apprehended, then the entire hundred could be fined. The hundreds were governed by administrative divisions known as shires, the rough equivalent of a modern county, which were overseen by an official known as a shire-reeve, from which the term Sheriff evolved. The shire-reeve had the power of , meaning he could gather the men of his shire to pursue a criminal. Following the Norman conquest of England in 1066, the tithing system was tightened with the frankpledge system. By the end of the 13th century, the office of constable developed. Constables had the same responsibilities as chief tithingmen and additionally served as royal officers. The constable was elected by his parish every year. Eventually, constables became the first 'police' officials to be tax-supported. In urban areas, watchmen were tasked with keeping order and enforcing the nighttime curfew. Watchmen guarded the town gates at night, patrolled the streets, arrested those on the streets at night without good reason, and also acted as firefighters. Eventually the office of justice of the peace was established, with a justice of the peace overseeing constables. There was also a system of investigative "juries". The Assize of Arms of 1252, which required the appointment of constables to summon men to arms, quell breaches of the peace, and to deliver offenders to the sheriff or reeve, is cited as one of the earliest antecedents of the English police. The Statute of Winchester of 1285 is also cited as the primary legislation regulating the policing of the country between the Norman Conquest and the Metropolitan Police Act 1829. From about 1500, private watchmen were funded by private individuals and organisations to carry out police functions. They were later nicknamed 'Charlies', probably after the reigning monarch King Charles II. Thief-takers were also rewarded for catching thieves and returning the stolen property. They were private individuals usually hired by crime victims. The earliest English use of the word police seems to have been the term Polles mentioned in the book The Second Part of the Institutes of the Lawes of England, published in 1642. Early modern The first centrally organised and uniformed police force was created by the government of King Louis XIV in 1667 to police the city of Paris, then the largest city in Europe. The royal edict, registered by the of Paris on March 15, 1667, created the office of ("lieutenant general of police"), who was to be the head of the new Paris police force, and defined the task of the police as "ensuring the peace and quiet of the public and of private individuals, purging the city of what may cause disturbances, procuring abundance, and having each and everyone live according to their station and their duties". This office was first held by Gabriel Nicolas de la Reynie, who had 44 ('police commissioners') under his authority. In 1709, these commissioners were assisted by ('police inspectors').
The city of Paris was divided into 16 districts policed by the , each assigned to a particular district and assisted by a growing bureaucracy. The scheme of the Paris police force was extended to the rest of France by a royal edict of October 1699, resulting in the creation of lieutenants general of police in all large French cities and towns. After the French Revolution, Napoléon I reorganized the police in Paris and other cities with more than 5,000 inhabitants on February 17, 1800 as the Prefecture of Police. On March 12, 1829, a government decree created the first uniformed police in France, known as ('city sergeants'), which the Paris Prefecture of Police's website claims were the first uniformed policemen in the world. In feudal Japan, samurai warriors were charged with enforcing the law among commoners. Some Samurai acted as magistrates called , who acted as judges, prosecutors, and as chief of police. Beneath them were other Samurai serving as , or assistant magistrates, who conducted criminal investigations.
Law enforcement was mostly up to the private citizens, who had the right and duty to prosecute crimes in which they were involved or in which they were not. At the cry of 'murder!' or 'stop thief!' everyone was entitled and obliged to join the pursuit. Once the criminal had been apprehended, the parish constables and night watchmen, who were the only public figures provided by the state and who were typically part-time and local, would make the arrest. As a result, the state set a reward to encourage citizens to arrest and prosecute offenders. The first of such rewards was established in 1692 of the amount of £40 for the conviction of a highwayman and in the following years it was extended to burglars, coiners and other forms of offense. The reward was to be increased in 1720 when, after the end of the War of the Spanish Succession and the consequent rise of criminal offenses, the government offered £100 for the conviction of a highwayman. Although the offer of such a reward was conceived as an incentive for the victims of an offense to proceed to the prosecution and to bring criminals to justice, the efforts of the government also increased the number of private thief-takers. Thief-takers became infamously known not so much for what they were supposed to do, catching real criminals and prosecuting them, as for "setting themselves up as intermediaries between victims and their attackers, extracting payments for the return of stolen goods and using the threat of prosecution to keep offenders in thrall". Some of them, such as Jonathan Wild, became infamous at the time for staging robberies in order to receive the reward."Browse - Central Criminal Court". Oldbaileyonline.org. In 1737, George II began paying some London and Middlesex watchmen with tax monies, beginning the shift to government control. In 1749, Judge Henry Fielding began organizing a force of quasi-professional constables known as the Bow Street Runners. The Bow Street Runners are considered to have been Britain's first dedicated police force. They represented a formalization and regularization of existing policing methods, similar to the unofficial 'thief-takers'. What made them different was their formal attachment to the Bow Street magistrates' office, and payment by the magistrate with funds from central government. They worked out of Fielding's office and court at No. 4 Bow Street, and did not patrol but served writs and arrested offenders on the authority of the magistrates, travelling nationwide to apprehend criminals. Fielding wanted to regulate and legalize law enforcement activities due to the high rate of corruption and mistaken or malicious arrests seen with the system that depended mainly on private citizens and state rewards for law enforcement. Henry Fielding's work was carried on by his brother, Justice John Fielding, who succeeded him as magistrate in the Bow Street office. Under John Fielding, the institution of the Bow Street Runners gained more and more recognition from the government, although the force was only funded intermittently in the years that followed. In 1763, the Bow Street Horse Patrol was established to combat highway robbery, funded by a government grant. The Bow Street Runners served as the guiding principle for the way that policing developed over the next 80 years. Bow Street was a manifestation of the move towards increasing professionalisation and state control of street life, beginning in London. 
The Macdaniel affair, a 1754 British political scandal in which a group of thief-takers was found to be falsely prosecuting innocent men in order to collect reward money from bounties, added further impetus for a publicly salaried police force that did not depend on rewards. Nonetheless, In 1828, there were privately financed police units in no fewer than 45 parishes within a 10-mile radius of London. The word police was borrowed from French into the English language in the 18th century, but for a long time it applied only to French and continental European police forces. The word, and the concept of police itself, were "disliked as a symbol of foreign oppression". Before the 19th century, the first use of the word police recorded in government documents in the United Kingdom was the appointment of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798. Modern Scotland and Ireland Following early police forces established in 1779 and 1788 in Glasgow, Scotland, the Glasgow authorities successfully petitioned the government to pass the Glasgow Police Act establishing the City of Glasgow Police in 1800. Other Scottish towns soon followed suit and set up their own police forces through acts of parliament. In Ireland, the Irish Constabulary Act of 1822 marked the beginning of the Royal Irish Constabulary. The Act established a force in each barony with chief constables and inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over 8,600 men. London In 1797, Patrick Colquhoun was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames to establish a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of cargo. The idea of a police, as it then existed in France, was considered as a potentially undesirable foreign import. In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to the principle of the British constitution". Moreover, he went so far as to praise the French system, which had reached "the greatest degree of perfection" in his estimation. With the initial investment of £4,200, the new force the Marine Police began with about 50 men charged with policing 33,000 workers in the river trades, of whom Colquhoun claimed 11,000 were known criminals and "on the game". The force was part funded by the London Society of West India Planters and Merchants. The force was a success after its first year, and his men had "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives". Word of this success spread quickly, and the government passed the Depredations on the Thames Act 1800 on 28 July 1800, establishing a fully funded police force the Thames River Police together with new laws including police powers; now the oldest police force in the world. Colquhoun published a book on the experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired similar forces in other cities, notably, New York City, Dublin, and Sydney. Colquhoun's utilitarian approach to the problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve what Henry and John Fielding failed for their Bow Street detectives. 
Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees. His other contribution was the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames. Colquhoun's innovations were a critical development leading up to Robert Peel's "new" police three decades later. Metropolitan London was fast reaching a size unprecedented in world history, due to the onset of the Industrial Revolution. It became clear that the locally maintained system of volunteer constables and "watchmen" was ineffective, both in detecting and preventing crime. A parliamentary committee was appointed to investigate the system of policing in London. Upon Sir Robert Peel being appointed as Home Secretary in 1822, he established a second and more effective committee, and acted upon its findings. Royal assent to the Metropolitan Police Act 1829 was given and the Metropolitan Police Service was established on September 29, 1829 in London. Peel, widely regarded as the father of modern policing, was heavily influenced by the social and legal philosophy of Jeremy Bentham, who called for a strong and centralised, but politically neutral, police force for the maintenance of social order, for the protection of people from crime and to act as a visible deterrent to urban crime and disorder. Peel decided to standardise the police force as an official paid profession, to organise it in a civilian fashion, and to make it answerable to the public. Due to public fears concerning the deployment of the military in domestic matters, Peel organised the force along civilian lines, rather than paramilitary. To appear neutral, the uniform was deliberately manufactured in blue, rather than red which was then a military colour, along with the officers being armed only with a wooden truncheon and a rattle to signal the need for assistance. Along with this, police ranks did not include military titles, with the exception of Sergeant. To distance the new police force from the initial public view of it as a new tool of government repression, Peel publicised the so-called Peelian principles, which set down basic guidelines for ethical policing: Whether the police are effective is not measured on the number of arrests but on the deterrence of crime. Above all else, an effective authority figure knows trust and accountability are paramount. Hence, Peel's most often quoted principle that "The police are the public and the public are the police." The 1829 Metropolitan Police Act created a modern police force by limiting the purview of the force and its powers, and envisioning it as merely an organ of the judicial system. Their job was apolitical; to maintain the peace and apprehend criminals for the courts to process according to the law. This was very different from the "continental model" of the police force that had been developed in France, where the police force worked within the parameters of the absolutist state as an extension of the authority of the monarch and functioned as part of the governing state. In 1863, the Metropolitan Police were issued with the distinctive custodian helmet, and in 1884 they switched to the use of whistles that could be heard from much further away. The Metropolitan Police became a model for the police forces in many countries, including the United States and most of the British Empire. Bobbies can still be found in many parts of the Commonwealth of Nations. 
Australia In Australia, organized law enforcement emerged soon after British colonization began in 1788. The first law enforcement organizations were the Night Watch and Row Boat Guard, which were formed in 1789 to police Sydney. Their ranks were drawn from well-behaved convicts deported to Australia. The Night Watch was replaced by the Sydney Foot Police in 1790. In New South Wales, rural law enforcement officials were appointed by local justices of the peace during the early to mid 19th century, and were referred to as "bench police" or "benchers". A mounted police force was formed in 1825. The first police force having centralised command as well as jurisdiction over an entire colony was the South Australia Police, formed in 1838 under Henry Inman. However, whilst the New South Wales Police Force was established in 1862, it was made up from a large number of policing and military units operating within the then Colony of New South Wales and traces its links back to the Royal Marines. The passing of the Police Regulation Act of 1862 essentially tightly regulated and centralised all of the police forces operating throughout the Colony of New South Wales. Each Australian state and territory maintains its own police force, while the Australian Federal Police enforces laws at the federal level. The New South Wales Police Force remains the largest police force in Australia in terms of personnel and physical resources. It is also the only police force that requires its recruits to undertake university studies at the recruit level and has the recruit pay for their own education. Brazil In 1566, the first police investigator of Rio de Janeiro was recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July 9, 1775 a Cavalry Regiment was created in the state of Minas Gerais for maintaining law and order. In 1808, the Portuguese royal family relocated to Brazil, because of the French invasion of Portugal. King João VI established the ('General Police Intendancy') for investigations. He also created a Royal Police Guard for Rio de Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police", with order maintenance tasks. The Federal Railroad Police was created in 1852, Federal Highway Police, was established in 1928, and Federal Police in 1967. Canada During the early days of English and French colonization, municipalities hired watchmen and constables to provide security. Established in 1729, the Royal Newfoundland Constabulary (RNC) was the first policing service founded in Canada. The establishment of modern policing services in the Canadas occurred during the 1830s, modelling their services after the London Metropolitan Police, and adopting the ideas of the Peelian principles. The Toronto Police Service was established in 1834, whereas the Service de police de la Ville de Québec was established in 1840. A national police service, the Dominion Police, was founded in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. In 1870, Rupert's Land and the North-Western Territory were incorporated into the country. In an effort to police its newly acquired territory, the Canadian government established the North-West Mounted Police in 1873 (renamed Royal North-West Mounted Police in 1904). In 1920, the Dominion Police, and the Royal Northwest Mounted Police were amalgamated into the Royal Canadian Mounted Police (RCMP). 
The RCMP provides federal law enforcement, as well as law enforcement in eight provinces and all three territories. The provinces of Ontario and Quebec maintain their own provincial police forces, the Ontario Provincial Police (OPP) and the Sûreté du Québec (SQ). Policing in Newfoundland and Labrador is provided by the RCMP and the RNC. The aforementioned services also provide municipal policing, although larger Canadian municipalities may establish their own police service. Lebanon In Lebanon, the current police force was established in 1861, with the creation of the Gendarmerie. India In India, the police are under the control of the respective states and union territories and are known as the State Police Services (SPS). The candidates selected for the SPS are usually posted as Deputy Superintendent of Police or Assistant Commissioner of Police once their probationary period ends. After prescribed satisfactory service in the SPS, the officers are nominated to the Indian Police Service. The service color is usually dark blue and red, while the uniform color is khaki. United States In Colonial America, the county sheriff was the most important law enforcement official. For instance, the New York Sheriff's Office was founded in 1626, and the Albany County Sheriff's Department in the 1660s. The county sheriff, who was an elected official, was responsible for enforcing laws, collecting taxes, supervising elections, and handling the legal business of the county government. Sheriffs would investigate crimes and make arrests after citizens filed complaints or provided information about a crime, but did not carry out patrols or otherwise take preventative action. Villages and cities typically hired constables and marshals, who were empowered to make arrests and serve warrants. Many municipalities also formed a night watch, or group of citizen volunteers who would patrol the streets at night looking for crime or fires. Typically, constables and marshals were the main law enforcement officials available during the day while the night watch would serve during the night. Eventually, municipalities formed day watch groups. Rioting was handled by local militias. In the 1700s, the Province of Carolina (later North- and South Carolina) established slave patrols in order to prevent slave rebellions and enslaved people from escaping. By 1785 the Charleston Guard and Watch had "a distinct chain of command, uniforms, sole responsibility for policing, salary, authorized use of force, and a focus on preventing crime." In 1789 the United States Marshals Service was established, followed by other federal services such as the U.S. Parks Police (1791) and U.S. Mint Police (1792). The first city police services were established in Philadelphia in 1751, Richmond, Virginia in 1807, Boston in 1838, and New York in 1845. The U.S. Secret Service was founded in 1865 and was for some time the main investigative body for the federal government. In the American Old West, law enforcement was carried out by local sheriffs, rangers, constables, and federal marshals. There were also town marshals responsible for serving civil and criminal warrants, maintaining the jails, and carrying out arrests for petty crime. In recent years, in addition to federal, state, and local forces, some special districts have been formed to provide extra police protection in designated areas. These districts may be known as neighborhood improvement districts, crime prevention districts, or security districts. 
Development of theory Michel Foucault wrote that the contemporary concept of police as a paid and funded functionary of the state was developed by German and French legal scholars and practitioners in public administration and statistics in the 17th and early 18th centuries, most notably with Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. The German Polizeiwissenschaft (Science of Police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work, known as Cameral science, on the formulation of police. Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), which notes that a substantial bibliography of over 4,000 pieces was produced on the practice of Polizeiwissenschaft. However, this may be a mistranslation of Foucault's own work since
the introduction shows six of these cabinets). The processors used in the DECSYSTEM-20 (2040, 2050, 2060, 2065), commonly but incorrectly called "KL20", use internal memory, mounted in the same cabinet as the CPU. The 10xx models also have different packaging; they come in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECsystem-20. The differences between the 10xx and 20xx models were primarily which operating system they ran, either TOPS-10 or TOPS-20. Apart from that, differences are more cosmetic than real; some 10xx systems have "20-style" internal memory and I/O, and some 20xx systems have "10-style" external memory and an I/O bus. In particular, all ARPAnet TOPS-20 systems had an I/O bus because the AN20 IMP interface was an I/O bus device. Both could run either TOPS-10 or TOPS-20 microcode and thus the corresponding operating system. Model B The later Model B version of the 2060 processors removes the 256 kiloword limit on the virtual address space by supporting up to 32 "sections" of up to 256 kilowords each, along with substantial changes to the instruction set. The two versions are effectively different CPUs. The first operating system that takes advantage of the Model B's capabilities is TOPS-20 release 3, and user mode extended addressing is offered in TOPS-20 release 4. TOPS-20 versions after release 4.1 only run on a Model B. TOPS-10 versions 7.02 and 7.03 also use extended addressing when run on a 1090 (or 1091) Model B processor running TOPS-20 microcode. MCA25 The final upgrade to the KL10 was the MCA25 upgrade of a 2060 to 2065 (or a 1091 to 1095), which gave some performance increases for programs which run in multiple sections. Massbus The I/O architecture of the 20xx series KL machines is based on a DEC bus design called the Massbus. While many attributed the success of the PDP-11 to DEC's decision to make the PDP-11 Unibus an open architecture, DEC reverted to prior philosophy with the KL, making Massbus both unique and proprietary. Consequently, there were no aftermarket peripheral manufacturers who made devices for the Massbus, and DEC chose to price their own Massbus devices, notably the RP06 disk drive, at a substantial premium above comparable IBM-compatible devices. CompuServe, for one, designed its own alternative disk controller that could operate on the Massbus, but connect to IBM style 3330 disk subsystems. Front-end processors The KL class machines have a PDP-11/40 front-end processor for system start-up and monitoring. The PDP-11 is booted from a dual-ported RP06 disk drive (or alternatively from an 8" floppy disk drive or DECtape), and then commands can be given to the PDP-11 to start the main processor, which is typically booted from the same RP06 disk drive as the PDP-11. The PDP-11 performs watchdog functions once the main processor is running. Communication with IBM mainframes, including Remote Job Entry (RJE), was accomplished via a DN61 or DN-64 front-end processor, using a PDP-11/40 or PDP-11/34a. KS10 The KS10 is a lower-cost PDP-10 built using AMD 2901 bit-slice chips, with an Intel 8080A microprocessor as a control processor. The KS10 design was crippled to be a Model A even though most of the necessary data paths needed to support the Model B architecture are present. This was no doubt intended to segment the market, but it greatly shortened the KS10's product life. The KS system uses a similar boot procedure to the KL10. 
The 8080 control processor loads the microcode from an RM03, RM80, or RP06 disk or magnetic tape and then starts the main processor. The 8080 switches modes after the operating system boots and controls the console and remote diagnostic serial ports. Magnetic tape drives The TM10 Magnetic Tape Control subsystem supported the following tape drives: TU20 Magnetic Tape Transport – 45 ips (inches/second) TU30 Magnetic Tape Transport – 75 ips (inches/second) TU45 Magnetic Tape Transport – 75 ips (inches/second) A mix of up to eight of these could be supported, providing seven-track and/or nine-track devices. The TU20 and TU30 each came in A (9 track) and B (7 track) versions, and all of the aforementioned tape drives could read/write from/to 200 BPI, 556 BPI and 800 BPI IBM-compatible tapes. The TM10 Magtape controller was available in two submodels: TM10A did cycle-stealing to/from PDP-10 memory using the KA10 Arithmetic Processor TM10B accessed PDP-10 memory using a DF10 Data Channel, without "cycle stealing" from the KA10 Arithmetic Processor Instruction set architecture From the first PDP-6s to the KL-10 and KS-10, the user-mode instruction set architecture is largely the same. This section covers that architecture. The only major change to the architecture is the addition of multi-section extended addressing in the KL-10; extended addressing, which changes the process of generating the effective address of an instruction, is briefly discussed at the end. Addressing The PDP-10 has 36-bit words and 18-bit word addresses. In supervisor mode, instruction addresses correspond directly to physical memory. In user mode, addresses are translated to physical memory. Earlier models give a user process a "high" and a "low" memory: addresses with a 0 top bit used one base register, and higher addresses use another. Each segment is contiguous. Later architectures have paged memory access, allowing non-contiguous address spaces. The CPU's general-purpose registers can also be addressed as memory locations 0–15. Registers There are 16 general-purpose, 36-bit registers. The right half of these registers (other than register 0) may be used for indexing. A few instructions operate on pairs of registers. The "PC Word" consists of a 13-bit condition register (plus 5 always zero bits) in the left half and an 18-bit Program Counter in the right half. The condition register, which records extra bits from the results of arithmetic operations (e.g. overflow), can be accessed by only a few instructions. In the original KA-10 systems, these registers are simply the first 16 words of main memory. The "fast registers" hardware option implements them as registers in the CPU, still addressable as the first 16 words of memory. Some software takes advantage of this by using the registers as an instruction cache by loading code into the registers and then jumping to the appropriate address; this is used, for example, in Maclisp to implement one version of the garbage collector. Later models all have registers in the CPU. Supervisor mode There
are two operational modes, supervisor and user mode. Besides the difference in memory referencing described above, supervisor-mode programs can execute input/output operations. Communication from user-mode to supervisor-mode is done through Unimplemented User Operations (UUOs): instructions which are not defined by the hardware, and are trapped by the supervisor. This mechanism is also used to emulate operations which may not have hardware implementations in cheaper models. Data types The major datatypes which are directly supported by the architecture are two's complement 36-bit integer arithmetic (including bitwise operations), 36-bit floating-point, and halfwords. Extended, 72-bit, floating point is supported through special instructions designed to be used in multi-instruction sequences. Byte pointers are supported by special instructions. A word structured as a "count" half and a "pointer" half facilitates the use of bounded regions of memory, notably stacks. Instructions The instruction set is very symmetrical. Every instruction consists of a 9-bit opcode, a 4-bit register code, and a 23-bit effective address field, which consists in turn of a 1-bit indirect bit, a 4-bit register code, and an 18-bit offset. Instruction execution begins by calculating the effective address. It adds the contents of the given register (if not register zero) to the offset; then, if the indirect bit is 1, an "indirect word", containing an indirect bit, register code, and offset in the same positions as in instructions, is fetched at the calculated address and the effective address calculation is repeated using that word, adding the register (if not register zero) to the offset, until an indirect word with a zero indirect bit is reached. The resulting effective address can be used by the instruction either to fetch memory contents, or simply as a constant. Thus, for example, MOVEI A,3(C) adds 3 to the 18 lower bits of register C and puts the result in register A, without touching memory. There are three main classes of instruction: arithmetic, logical, and move; conditional jump; conditional skip (which may have side effects). There are also several smaller classes. The arithmetic, logical, and move operations include variants which operate immediate-to-register, memory-to-register, register-to-memory, register-and-memory-to-both or memory-to-memory. Since registers may be addressed as part of memory, register-to-register operations are also defined. (Not all variants are useful, though they are well-defined.) 
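The effective-address calculation just described can be restated as a minimal sketch in Python. The machine state (a `registers` list and a `memory` mapping) and the packing of the indirect bit, index-register field, and 18-bit offset into the low 23 bits of a word are assumptions made for readability; DEC's documentation numbers bits from the other end of the 36-bit word, so this is an illustration of the loop, not the hardware layout.

```python
# Minimal sketch of the PDP-10 effective-address loop described above.
# Assumptions (not DEC's bit numbering): the 23-bit address field sits in the
# low bits of a word as indirect bit (bit 22), index register (bits 18-21),
# and 18-bit offset (bits 0-17); `registers` is a list of 16 integers and
# `memory` maps 18-bit addresses to 36-bit words.

HALF = (1 << 18) - 1  # mask for an 18-bit halfword


def effective_address(word, registers, memory):
    """Resolve an instruction or indirect word to an 18-bit effective address."""
    while True:
        indirect = (word >> 22) & 1
        index = (word >> 18) & 0o17
        offset = word & HALF
        if index != 0:  # register 0 never participates in indexing
            offset = (offset + (registers[index] & HALF)) & HALF
        if not indirect:  # done once an indirect word with a zero indirect bit is reached
            return offset
        word = memory[offset]  # fetch the indirect word and repeat the calculation
```

Under this sketch, MOVEI A,3(C), with offset 3, index register C, and the indirect bit clear, resolves to 3 plus the low half of register C with no memory reference, matching the example above.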
For example, the ADD operation has as variants ADDI (add an 18-bit Immediate constant to a register), ADDM (add register contents to a Memory location), ADDB (add to Both, that is, add register contents to memory and also put the result in the register). A more elaborate example is HLROM (Half Left to Right, Ones to Memory), which takes the Left half of the register contents, places them in the Right half of the memory location, and replaces the left half of the memory location with Ones. Halfword instructions are also used for linked lists: HLRZ is the Lisp CAR operator; HRRZ is CDR. The conditional jump operations examine register contents and jump to a given location depending on the result of the comparison. The mnemonics for these instructions all start with JUMP, JUMPA meaning "jump always" and JUMP meaning "jump never" – as a consequence of the symmetric design of the instruction set, it contains several no-ops such as JUMP. For example, JUMPN A,LOC jumps to the address LOC if the contents of register A is non-zero. There are also conditional jumps based on the processor's condition register using the JRST instruction. On the KA10 and KI10, JRST is faster than JUMPA, so the standard unconditional jump is JRST. The conditional skip operations compare register and memory contents and skip the next instruction (which is often an unconditional jump) depending on the result of the comparison. A simple example is CAMN A,LOC, which compares the contents of register A with the contents of location LOC and skips the next instruction if they are not equal. A more elaborate example is TLCE A,LOC (read "Test Left Complement, skip if Equal"), which, using the contents of LOC as a mask, selects the corresponding bits in the left half of register A. If all those bits are Equal to zero, the next instruction is skipped; in any case, those bits are replaced by their boolean complement. Some smaller instruction classes include the shift/rotate instructions and the procedure call instructions. Particularly notable are the stack instructions PUSH and POP, and the corresponding stack call instructions PUSHJ and POPJ. The byte instructions use a special format of indirect word to extract and store arbitrary-sized bit fields, possibly advancing a pointer to the next unit. Extended addressing In processors supporting extended addressing, the address space is divided into "sections". An 18-bit address is a "local address", containing an offset within a section, and a "global address" is 30 bits, divided into a 12-bit section number at the bottom of the upper 18 bits and an 18-bit offset within that section in the lower 18 bits. A register can contain either a "local index", with an 18-bit unsigned displacement or local address in the lower 18 bits, or a "global index", with a 30-bit unsigned displacement or global address in the lower 30 bits. An indirect word can either be a "local indirect word", with its uppermost bit set, the next 12 bits reserved, and the remaining bits being an indirect bit, a 4-bit register code, and an 18-bit displacement, or a "global indirect word", with its uppermost bit clear, the next bit being an indirect bit, the next 4 bits being a register code, and the remaining 30 bits being a displacement. The process of calculating the effective address generates a 12-bit section number and an 18-bit offset within that section. Software The original PDP-10 operating system was simply called "Monitor", but was later renamed TOPS-10. Eventually the PDP-10 system itself was renamed the DECsystem-10. 
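The halfword convention and the extended-addressing layout described above can likewise be restated as a small illustrative sketch. The helper names (hlrz, hrrz, make_global, split_global) are hypothetical stand-ins for exposition, not DEC mnemonics or an actual API; only the field widths follow the text: two 18-bit halves per word, and a 30-bit global address made of a 12-bit section number above an 18-bit in-section offset.

```python
# Illustrative helpers for the halfword and extended-addressing layouts
# described above; names and conventions are assumptions, not DEC's code.

HALF = (1 << 18) - 1  # 18-bit halfword mask


def hlrz(word):
    """Left half, zero-extended: the Lisp CAR of a cons cell held in one word."""
    return (word >> 18) & HALF


def hrrz(word):
    """Right half, zero-extended: the Lisp CDR of the same cons cell."""
    return word & HALF


def make_global(section, offset):
    """Pack a 12-bit section number above an 18-bit offset into a 30-bit global address."""
    assert 0 <= section < (1 << 12) and 0 <= offset <= HALF
    return (section << 18) | offset


def split_global(address):
    """Recover the (section, offset) pair from a 30-bit global address."""
    return (address >> 18) & 0o7777, address & HALF
```

In this sketch, walking a list stored one cons cell per word is repeated application of hrrz, with hlrz extracting each element, which is exactly the CAR/CDR identification made above; the global-address helpers simply restate the 12-bit-plus-18-bit split.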
Early versions of Monitor and TOPS-10 formed the basis of Stanford's WAITS operating system and the CompuServe time-sharing system. Over time, some PDP-10 operators began running operating systems assembled from major components developed outside DEC. For example, the main Scheduler might come from one university, the Disk Service from another, and so on. The commercial timesharing services such as CompuServe, On-Line Systems (OLS), and Rapidata maintained sophisticated in-house systems programming groups so that they could modify the operating system as needed for their own businesses without being dependent on DEC or others. There are also strong user communities such as DECUS through which users can share software that they have developed. BBN developed their own alternative operating system, TENEX, which fairly quickly became popular in the research community. DEC later ported TENEX to the KL10, enhanced it considerably, and named it TOPS-20, forming the DECSYSTEM-20 line. MIT, which had developed CTSS, the Compatible Time-Sharing System, to run on its IBM 709 (and later a modified IBM 7094 system), also developed ITS, the Incompatible Timesharing System, to run on its PDP-6 (and later a modified PDP-10); the naming was related, since the IBM and the DEC/PDP hardware were different, i.e. "incompatible" (despite each having a 36-bit CPU). The ITS name, selected by Tom Knight, "was a play on" the CTSS name. Tymshare developed TYMCOM-X, derived from TOPS-10 but using a page-based file system like TOPS-20. Clones In 1971 to 1972, researchers at Xerox PARC were frustrated by top company management's refusal to let them buy a PDP-10. Xerox had just bought Scientific Data Systems (SDS) in 1969, and wanted PARC to use an SDS machine. Instead, a group led by Charles P.
the earlier KL10 Model A processors, used in the earlier DECsystem-10s running on KL10 processors, and the later KL10 Model Bs, used for the DECSYSTEM-20s. Model As used the original PDP-10 memory bus, with external memory modules. The later Model B processors used in the DECSYSTEM-20 used internal memory, mounted in the same cabinet as the CPU. The Model As also had different packaging; they came in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECSYSTEM-20. The last released implementation of DEC's 36-bit architecture was the single cabinet DECSYSTEM-2020, using a KS10 processor. The DECSYSTEM-20 was primarily designed and used as a small mainframe for timesharing. That is, multiple users would concurrently log on to individual user accounts and share use of the main processor to compile and run applications. Separate disk allocations were maintained for all users by the operating system, and various levels of protection could be maintained for System, Owner, Group, and World users. A model 2060, for example, could typically host up to 40 to 60 simultaneous users before exhibiting noticeably reduced response time. Remaining machines The Living Computer Museum of Seattle, Washington maintains a 2065 running TOPS-10, which is available to
DECSYSTEM-20 was sometimes called PDP-20, although this designation was never used by DEC. Models The following models were produced: DECSYSTEM-2020: KS10 bit-slice processor with up to 512 kilowords of solid state RAM (The ADP OnSite version of the DECSYSTEM-2020 supported 1 MW of RAM) DECSYSTEM-2040: KL10 ECL processor with up to 1024 kilowords of magnetic core RAM DECSYSTEM-2050: KL10 ECL processor with 2k words of cache and up to 1024 kilowords of RAM DECSYSTEM-2060: KL10 ECL processor with 2k words of cache and up to 4096 kilowords of solid state memory DECSYSTEM-2065: DECSYSTEM-2060 with MCA25 pager (double-sized (1024 entry) two-way associative hardware page table) The only significant difference the user could see between a DECsystem-10 and a DECSYSTEM-20 was the operating system and the color of the paint. Most (but not all) machines sold to run TOPS-10 were painted "Blasi Blue", whereas most TOPS-20 machines were painted "Terracotta" (often mistakenly called "Chinese Red" or orange; the actual name of the color on the paint cans was Terra Cotta).
more powerful, but based on the same concepts as the 12-bit PDP-5/PDP-8 series. One customer of these early PDP machines was Atomic Energy of Canada. The installation at Chalk River, Ontario included an early PDP-4 with a display system and a new PDP-5 as interface to the research reactor instrumentation and control. PDP-5 It was the world's first commercially produced minicomputer and DEC's first 12-bit machine (1963). The instruction set was later expanded in the PDP-8 to handle more bit rotations and to increase the maximum memory size from 4K words to 32K words. It was one of the first computer series with more than 1,000 built. PDP-6 This 36-bit machine, DEC's first large PDP computer, came in 1964 with the first DEC-supported timesharing system. Twenty-three were installed. Although the PDP-6 was "disappointing to management," it introduced the instruction set and was the prototype for the far more successful PDP-10 and DECSYSTEM-20, of which hundreds were sold. PDP-7 Replacement for the PDP-4; DEC's first wire-wrapped machine. It was introduced in 1964, and a second version, the 7A, was subsequently added. A total of 120 PDP-7 and 7A systems were sold. The first version of Unix, and the first version of B, a predecessor of C, were written for the PDP-7 at Bell Labs, as was the first version (by DEC) of MUMPS. PDP-8 12-bit machine (1965) with a tiny instruction set; DEC's first major commercial success and the start of the minicomputer revolution. Many were purchased (at discount prices, a DEC tradition, which also included free manuals for anyone who asked during the Ken Olsen years) by schools, university departments, and research laboratories. Over 50,000 units among various models of the family (A, E, F, I, S, L, M) were sold. Later models are also used in the DECmate word processor and the VT-78 workstation. LINC-8 A hybrid of the LINC and PDP-8 computers; two instruction sets; 1966. Progenitor of the PDP-12. PDP-9 Successor to the PDP-7; DEC's first micro-programmed machine (1966). It is approximately twice as fast as the PDP-7. The PDP-9 is also one of the first small or medium scale computers to have a keyboard monitor system based on DIGITAL's own small magnetic tape units (DECtape). The PDP-9 established minicomputers as the leading edge of the computer industry. PDP-10 Also marketed as the DECsystem-10, this 36-bit timesharing machine (1966) was quite successful over several different implementations (KA, KI, KL, KS) and models. The instruction set is a slightly elaborated form of that of the PDP-6. The KL was also used for the DECSYSTEM-20. The KS was used for the 2020, DEC's entry in the distributed processing market, introduced as "the world's lowest cost mainframe computer system." PDP-11 The archetypal minicomputer (1970); a 16-bit machine and another commercial success for DEC. The LSI-11 is a four-chip PDP-11 used primarily for embedded systems. The 32-bit VAX series is descended from the PDP-11, and early VAX models have a PDP-11 compatibility mode. The 16-bit PDP-11 instruction set has been very influential, with processors ranging from the Motorola 68000 to the Renesas H8 and Texas Instruments MSP430, inspired by its highly orthogonal, general-register oriented instruction set and rich addressing modes. The PDP-11 family was extremely long-lived, spanning 20 years and many different implementations and technologies. PDP-12 12-bit machine (1969), descendant of the LINC-8 and thus of the PDP-8. It can execute the instruction set of either system. 
See LINC and PDP-12 User Manual. With a slight redesign and different livery, it was officially followed by, and marketed as, the "Lab-8". PDP-13 Designation was not used. PDP-14 A machine with 12-bit instructions, intended as an industrial controller (PLC; 1969). It has no data memory or data registers; instructions can test Boolean input signals, set or clear Boolean output signals, jump conditionally or unconditionally, or call a subroutine. Later versions (for example,
the PDP-14/30) are based on PDP-8 physical packaging technology. I/O is line voltage. PDP-15 DEC's final 18-bit machine (1970). It is the only 18-bit machine constructed from TTL integrated circuits rather than discrete transistors, and, like every DEC 18-bit system (except mandatory on the PDP-1, absent on the PDP-4) has an optional integrated vector graphics terminal, DEC's first improvement on its early-designed 34n where n equalled the PDP's number. Later versions of the PDP-15 run a real-time multi-user OS called "XVM". The final model, the PDP-15/76 uses a small PDP-11 to allow Unichannel peripherals to be used. PDP-16 A "roll-your-own" sort of computer using Register Transfer Modules, mainly intended for industrial control systems with more capability than the PDP-14. The PDP-16/M was introduced in 1972 as a standard version of the PDP-16. Related computers TX-0 designed by MIT's Lincoln Laboratory, important as influence for DEC products including Ben Gurley's design for the PDP-1 LINC (Laboratory Instrument Computer), originally designed by MIT's Lincoln Laboratory, some built by DEC. Not in the PDP family, but important as progenitor of the PDP-12. The LINC and the PDP-8 can be considered the first minicomputers, and perhaps the first personal computers as well. The PDP-8 and PDP-11 are the most popular of the PDP series of machines. Digital never made a PDP-20, although the term was sometimes used for a PDP-10 running TOPS-20 (officially known as a DECSYSTEM-20). Several unlicensed clones of the PDP-11. TOAD-1 and TOAD-2, Foonly, and Systems Concepts PDP-10/DECSYSTEM-20-compatible machines. 
seven 8.4 meter primary mirrors, with the resolving power equivalent to a optical aperture. Superlative primary mirrors The largest optical telescope in the world as of 2009 to use a non-segmented single mirror as its primary mirror is the 8.2 m (26.9 ft) Subaru telescope of the National Astronomical Observatory of Japan, located at Mauna Kea Observatory in Hawaii since 1997; however, this is not the largest diameter single mirror in a telescope: the U.S./German/Italian Large Binocular Telescope has two 8.4 m (27.6 ft) mirrors (which can be used together for interferometric mode). Both of these are smaller than the 10 m segmented primary mirrors on the dual Keck telescope. The Hubble Space Telescope has a 2.4 m (7 ft 10 in) primary mirror. Radio and submillimeter telescopes use much larger dishes or antennae, which do not have to be made as precisely as the mirrors used in optical telescopes. The Arecibo Telescope used a 305 m dish, which was the world's largest single-dish radio telescope fixed to the ground. The Green Bank Telescope has the world's largest steerable single radio dish, 100 m in diameter. There are larger radio arrays, composed of multiple dishes which have better
(1900–1965) Yakov Lvovich Alpert – Russia, United States (1911–2010) Ralph Asher Alpher – United States (1921–2007) Semen Altshuler – Vitebsk (1911–1983) Luis Walter Alvarez – United States (1911–1988) Nobel laureate Viktor Ambartsumian – Soviet Union, Armenia (1908–1996) André-Marie Ampère – France (1775–1836) Anja Cetti Andersen – Denmark (born 1965) Hans Henrik Andersen – Denmark (1937–2012) Philip Warren Anderson – United States (1923–2020) Nobel laureate Carl David Anderson – United States (1905–1991) Nobel laureate Herbert L. Anderson – United States (1914–1988) Elephter Andronikashvili – Georgia (1910–1989) Anders Jonas Ångström – Sweden (1814–1874) Alexander Animalu, Nigeria (born 1938) Edward Victor Appleton – U.K. (1892–1965) Nobel laureate François Arago – France (1786–1853) Archimedes – Syracuse, Greece (ca. 287–212 BC) Manfred von Ardenne – Germany (1907–1997) Aristarchus of Samos – Samos, Greece (310–ca. 230 BC) Aristotle – Athens, Greece (384–322 BC) Nima Arkani-Hamed – United States (born 1972) Lev Artsimovich – Moscow (1909–1973) Aryabhata – Pataliputra, India (476–550) Neil Ashby – United States (born 1934) Maha Ashour-Abdalla – Egypt, United States (1943–2016) Gurgen Askaryan – Soviet Union (1928–1997) Alain Aspect – France (born 1947) Marcel Audiffren – France Avicenna – Persia (980–1037) Amedeo Avogadro – Italy (1776–1856) David Awschalom – United States (born 1956) APJ Abdul Kalam – India B Xiaoyi Bao – Canada Mani Lal Bhaumik – United States (born 1931) Tom Baehr-Jones – United States (born 1980) Gilbert Ronald Bainbridge – U.K. (1925–2003) Cornelis Bakker – Netherlands (1904–1960) Aiyalam Parameswaran Balachandran – India (born 1938) V Balakrishnan – India (born 1943) Milla Baldo-Ceolin – Italy (1924–2011) Johann Jakob Balmer – Switzerland (1825–1898) Tom Banks – United States (born 1949) Riccardo Barbieri – Italy (born 1944) Marcia Barbosa – Brazil (born 1960) John Bardeen – United States (1908–1991) double Nobel laureate William A. Bardeen – United States (born 1941) Charles Glover Barkla – U.K. (1877–1944) Nobel laureate Amanda Barnard – Australia (born 1971) Boyd Bartlett – United States (1897–1965) Asım Orhan Barut – Malatya, Turkey (1926–1994) Heinz Barwich – Germany (1911–1966) Nikolay Basov – Russia (1922–2001) Nobel laureate Laura Maria Caterina Bassi – Italy (1711–1778) Zoltán Lajos Bay – Hungary (1900–1992) Karl Bechert – Germany (1901–1981) Henri Becquerel – France (1852–1908) Nobel laureate Johannes Georg Bednorz – Germany (born 1950) Nobel laureate Isaac Beeckman – Netherlands (1588–1637) Alexander Graham Bell – Scotland, Canada, U.S.A. (1847–1922) John Stewart Bell – U.K. (1928–1990) Jocelyn Bell Burnell – Northern Ireland, U.K. (born 1943) Carl M. Bender – United States (born 1943) Abraham Bennet – England (1749–1799) Daniel Bernoulli – Switzerland (1700–1782) Hans Bethe – Germany, United States (1906–2005) Nobel laureate Homi J. Bhabha – India (1909–1966) Lars Bildsten – United States (1964) James Binney – England (born 1950) Gerd Binnig – Germany (born 1947) Nobel laureate Jean-Baptiste Biot – France (1774–1862) Raymond T. Birge – United States (1887–1980) Abū Rayhān al-Bīrūnī – Persia (973–1048) Vilhelm Bjerknes – Norway (1862–1951) James Bjorken – United States (born 1934) Patrick Blackett – U.K. 
(1897–1974) Nobel laureate Felix Bloch – Switzerland (1905–1983) Nobel laureate Nicolaas Bloembergen – Netherlands, United States (1920–2017) Nobel laureate Walter Boas – Germany, Australia (1904–1982) Céline Bœhm – France (born 1974) Nikolay Bogolyubov – Soviet Union, Russia (1909–1992) David Bohm – United States (1917–1992) Aage Bohr – Denmark (1922–2009) Nobel laureate Niels Bohr – Denmark (1885–1962) Nobel laureate Martin Bojowald – Germany (born 1973) Ludwig Boltzmann – Austria (1844–1906) Eugene T. Booth – United States (1912–2004) Max Born – Germany, U.K. (1882–1970) Nobel laureate Rudjer Josip Boscovich – Croatia (1711–1787) Jagadish Chandra Bose – India (1858–1937) Margrete Heiberg Bose – Denmark (1866–1952) Satyendra Nath Bose – India (1894–1974) Johannes Bosscha – Netherlands (1831–1911) Walther Bothe – Germany (1891–1957) Nobel laureate Edward Bouchet – United States (1852–1918) Mark Bowick – United States (born 1957) Robert Boyle – Ireland, England (1627–1691) Willard S. Boyle – Canada, United States (1924–2011) Nobel laureate William Henry Bragg – U.K. (1862–1942) Nobel laureate William Lawrence Bragg – U.K., Australia (1890–1971) Nobel laureate Tycho Brahe – Denmark (1546–1601) Howard Brandt – United States (1939–2014) Walter Houser Brattain – United States (1902–1987) Nobel laureate Karl Ferdinand Braun – Germany (1850–1918) Nobel laureate David Brewster – U.K. (1781–1868) Percy Williams Bridgman – United States (1882–1961) Nobel laureate Léon Nicolas Brillouin – France (1889–1969) Marcel Brillouin – France (1854–1948) Bertram Brockhouse – Canada (1918–2003) Nobel laureate Louis-Victor de Broglie – France (1892–1987) Nobel laureate William Fuller Brown, Jr. – United States (1904–1983) Ernst Brüche – Germany (1900–1985) Hermann Brück – Germany (1905–2000) Ari Brynjolfsson – Iceland (1927–2013) Hans Buchdahl – Germany, Australia (1918–2010) Gersh Budker – Soviet Union (1918–1977) Silke Bühler-Paschen – Austria (born 1967) Johannes Martinus Burgers – Netherlands (1895–1981) Friedrich Burmeister – Germany (1890–1969) Bimla Buti – India (born 1933) Christophorus Buys Ballot – Netherlands (1817–1890) C Nicola Cabibbo – Italy (1935–2010) Nicolás Cabrera – Spain (1913–1989) Orion Ciftja - United States Curtis Callan – United States (born 1942) Annie Jump Cannon – United States (1863–1941) Fritjof Capra – Austria, United States (born 1939) Marcela Carena – Argentina (born 1962) Ricardo Carezani – Argentina, United States (born 1921) Nicolas Léonard Sadi Carnot – France (1796–1832) David Carroll – United States (born 1963) Brandon Carter – Australia (born 1942) Hendrik Casimir – Netherlands (1909–2000) Henry Cavendish – U.K. (1731–1810) James Chadwick – U.K. (1891–1974) Nobel laureate Owen Chamberlain – United States (1920–2006) Nobel laureate Moses H. W. Chan – Hong Kong (born 1946) Subrahmanyan Chandrasekhar – India, United States (1910–1995) Nobel laureate Georges Charpak – France (1924–2010) Nobel laureate Émilie du Châtelet – France (1706–1749) Swapan Chattopadhyay – India (born 1951) Pavel Alekseyevich Cherenkov – Imperial Russia, Soviet Union (1904–1990) Nobel laureate Maxim Chernodub – Russia, France (born 1973) Geoffrey Chew – United States (1924–2019) Boris Chirikov – Soviet Union, Russia (1928–2008) Juansher Chkareuli – Georgia (born 1940) Ernst Chladni – Germany (1756–1827) Steven Chu – United States (born 1948) Nobel laureate Giovanni Ciccotti – Italy (born 1943) Benoît Clapeyron – France (1799–1864) George W. 
Clark – United States Rudolf Clausius – Germany (1822–1888) Gerald B. Cleaver – United States Richard Clegg – United Kingdom Gari Clifford - British-American physicist, biomedical engineer, academic, researcher John Cockcroft – United Kingdom (1897–1967) Nobel laureate Claude Cohen-Tannoudji – France (born 1933) Nobel laureate Arthur Compton – United States (1892–1962) Nobel laureate Karl Compton – United States (1887–1954) Edward Condon – United States (1902–1974) Leon Cooper – United States (born 1930) Nobel laureate Alejandro Corichi – Mexico (born 1967) Gaspard-Gustave Coriolis – France (1792–1843) Allan McLeod Cormack – South Africa, United States (1924–1998) Eric Allin Cornell – United States (born 1961) Nobel laureate Marie Alfred Cornu – France (1841–1902) Charles-Augustin de Coulomb – France (1736–1806) Ernest Courant – United States (1920–2020) Brian Cox – U.K. (born 1968) Charles Critchfield – United States (1910–1994) James Cronin – United States (1931–2016) Nobel laureate Sir William Crookes – U.K. (1832–1919) Paul Crowell – United States Marie Curie – Poland, France (1867–1934) twice Nobel laureate Pierre Curie – France (1859–1906) Nobel laureate Predrag Cvitanović – Croatia (born 1946) D Jean le Rond d'Alembert – France (1717–1783) Gustaf Dalén – Sweden (1869–1937) Nobel laureate Jean Dalibard – France (born 1958) Richard Dalitz – U.K., United States (1925–2006) John Dalton – U.K. (1766–1844) Sanja Damjanović – Montenegro (born 1972) Ranjan Roy Daniel – India (1923–2005) Charles Galton Darwin – U.K. (1887–1962) Ashok Das – India, United States (born 1953) James C. Davenport – United States (born 1938) Paul Davies – Australia (born 1946) Raymond Davis, Jr. – United States (1914–2006) Nobel laureate Clinton Davisson – United States (1881–1958) Nobel laureate Peter Debije – Netherlands (1884–1966) Hans Georg Dehmelt – Germany, United States (1922–2017) Nobel laureate Max Delbrück – Germany, United States (1906–1981) Democritus – Abdera (ca. 460–360 BC) David M. Dennison – United States (1900–1976) Beryl May Dent – U.K. (1900–1977) David Deutsch – Israel, U.K. (born 1953) James Dewar – U.K. (1842–1923) Scott Diddams – United States Ulrike Diebold – Austria (born 1961) Robbert Dijkgraaf – Netherlands (born 1960) Viktor Dilman – Russia (born 1926) Savas Dimopoulos – United States (born 1952) Paul Dirac – Switzerland, U.K. (1902–1984) Nobel laureate Revaz Dogonadze – Soviet Union, Georgia (1931–1985) Amos Dolbear – United States (1837–1910) Robert Döpel – Germany (1895–1982) Christian Doppler – Austria (1803–1853) Henk Dorgelo – Netherlands (1894–1961) Friedrich Ernst Dorn – Germany (1848–1916) Michael R. Douglas – United States (born 1961) Jonathan Dowling – United States (1955–2020) Claudia Draxl – Germany (born 1959) Sidney Drell – United States (1926–2016) Mildred Dresselhaus – United States (1930–2017) Paul Drude – Germany (1863–1906) F. J. Duarte – United States (born 1954) Émilie du Châtelet – France (1706–1749) Pierre Louis Dulong – France (1785–1838) Janette Dunlop – Scotland (1891–1971) Samuel T. Durrance – United States (born 1943) Freeman Dyson – U.K., United States (1923–2020) Wolf laureate Arthur Jeffrey Dempster – Canada (1886–1950) E Joseph H. Eberly – United States (born 1935) William Eccles – U.K. (1875–1966) Carl Eckart – United States (1902–1973) Arthur Stanley Eddington – U.K. 
(1882–1944) Paul Ehrenfest – Austria-Hungary, Netherlands (1880–1933) Felix Ehrenhaft – Austria-Hungary, United States (1879–1952) Manfred Eigen – Germany (1927–2019) Albert Einstein – Germany, Italy, Switzerland, United States (1879–1955) Nobel laureate Laura Eisenstein – (1942–1985) professor of physics at University of Illinois Terence James Elkins – Australia, United States (born 1936) John Ellis – U.K. (born 1946) Paul John Ellis – U.K., United States (1941–2005) Richard Keith Ellis – U.K., United States (born 1949) Arpad Elo – Hungary (1903–1992) François Englert – Belgium (born 1932) Nobel laureate David Enskog – Sweden (1884–1947) Loránd Eötvös – Austria-Hungary (1848–1919) Frederick J. Ernst – United States (born 1933) Leo Esaki – Japan (born 1925) Nobel laureate Ernest Esclangon – France (1876–1954) Louis Essen – U.K. (1908–1997) Leonhard Euler – Switzerland (1707–1783) Denis Evans – Australia (born 1951) Paul Peter Ewald – Germany, United States (1888–1985) James Alfred Ewing – U.K. (1855–1935) Franz S. Exner – Austria (1849–1926) F Ludvig Faddeev – Russia (1934–2017) Daniel Gabriel Fahrenheit – Prussia (1686–1736) Kazimierz Fajans – Poland, United States (1887–1975) James E. Faller – United States Michael Faraday – U.K. (1791–1867) Eugene Feenberg – United States (1906–1977) Mitchell Feigenbaum – United States (1944–2019) Gerald Feinberg – United States (1933–1992) Enrico Fermi – Italy (1901–1954) Nobel laureate Albert Fert – France (born 1938) Nobel laureate Herman Feshbach – United States (1917–2000) Richard Feynman – United States (1918–1988) Nobel laureate Wolfgang Finkelnburg – Germany (1905–1967) David Finkelstein – United States (1929–2016) Johannes Fischer – Germany (born 1887) Willy Fischler – Belgium (born 1949) Val Logsdon Fitch – United States (1923–2015) Nobel laureate George Francis FitzGerald – Ireland (1851–1901) Hippolyte Fizeau – France (1819–1896) Georgy Flyorov – Rostov-on-Don (1913–1990) Vladimir Fock – Imperial Russia, Soviet Union (1898–1974) Adriaan Fokker – Netherlands (1887–1972) Arthur Foley – America (1867–1945) James David Forbes – U.K. (1809–1868) Jeff Forshaw – U.K. (born 1968) Léon Foucault – France (1819–1868) Joseph Fourier – France (1768–1830) Ralph H. Fowler – U.K. (1889–1944) William Alfred Fowler – United
(1904–1968) Sylvester James Gates – United States (born 1950) Carl Friedrich Gauss – Germany (1777–1855) Pamela L. Gay – United States (born 1973) Joseph Louis Gay-Lussac – France (1778–1850) Hans Geiger – Germany (1882–1945) Andre Geim – Russian/British (born 1958) Nobel laureate Murray Gell-Mann – United States (1929–2019) Nobel laureate Pierre-Gilles de Gennes – France (1932–2007) Nobel laureate Howard Georgi – United States (born 1947) Walter Gerlach – Germany (1889–1979) Christian Gerthsen – Denmark, Germany (1894–1956) Ezra Getzler – Australia (born 1962) Andrea M. Ghez – United States (born 1955) Nobel laureate Riccardo Giacconi – Italy, United States (1931–2018) Nobel laureate Ivar Giaever – Norway, United States (born 1929) Nobel laureate Josiah Willard Gibbs – United States (1839–1903) Valerie Gibson – U.K. (born 19??) William Gilbert – England (1544–1603) Piara Singh Gill – India (1911–2002) Naomi Ginsberg – United States (born 1979) Vitaly Lazarevich Ginzburg – Soviet Union, Russia (1916–2009) Nobel laureate Marvin D. Girardeau – United States (1930–2015) Donald Arthur Glaser – United States (1926–2013) Nobel laureate Sheldon Glashow – United States (born 1932) Nobel laureate G. N. Glasoe – United States (1902–1987) Roy Jay Glauber – United States (1925–2018) Nobel laureate James Glimm – United States (born 1934) Karl Glitscher – Germany (1886–1945) Peter Goddard – U.K. (born 1945) Maria Goeppert-Mayer – Germany, United States (1906–1972) Nobel laureate Gerald Goertzel – United States (1920–2002) Marvin Leonard Goldberger – United States (1922–2014) Maurice Goldhaber – Austria, United States (1911–2011) Jeffrey Goldstone – U.K., United States (born 1933) Sixto González – Puerto Rico, United States (born 1965) Ravi Gomatam – India (born 1950) Lev Gor'kov – United States (1929–2016) Samuel Goudsmit – Netherlands, United States (1902–1978) Leo Graetz – Germany (1856–1941) Willem 's Gravesande – Netherlands (1688–1742) Michael Green (physicist) – Britain (born 1946) Daniel Greenberger – United States (born 1932) Brian Greene – United States (born 1963) John Gribbin – U.K. (born 1946) Vladimir Gribov – Russia (1930–1997) David J. Griffiths – United States (born 1942) David Gross – United States (born 1941) Nobel laureate Frederick Grover – United States (1876–1973) Peter Grünberg – Germany (1939–2018) Nobel laureate Charles Édouard Guillaume – Switzerland (1861–1931) Nobel laureate Ayyub Guliyev – Azerbaijan (born 1954) Feza Gürsey – Turkey (1921–1992) Alan Guth – United States (born 1947) Martin Gutzwiller – Switzerland (1925–2014) H Rudolf Haag – Germany (1922–2016) Wander Johannes de Haas – Netherlands (1878–1960) Alain Haché – Canada (born 1970) Carl Richard Hagen – United States (born 1937) Otto Hahn – Germany (1879–1968) Edwin Hall – United States (1855–1938) John Lewis Hall – United States (born 1934) Nobel laureate Alexander Hamilton – UK, Australia (born 1967) William Rowan Hamilton – Ireland (1805–1865) Theodor Wolfgang Hänsch – Germany (born 1941) Nobel laureate Peter Andreas Hansen – Denmark (1795–1874) W.W. Hansen – United States (1909–1949) Serge Haroche – France (born 1944) Nobel laureate Paul Harteck – Germany (1902–1985) John G. Hartnett – Australia (born 1952) Douglas Hartree – U.K. (1897–1958) Friedrich Hasenöhrl – Austria, Hungary (1874–1915) Lene Vestergaard Hau – Vejle, Denmark (born 1959) Stephen Hawking – U.K. (1942–2018) Wolf laureate Ibn al-Haytham – Iraq (965–1039) Evans Hayward – United States (1922–2020) Oliver Heaviside – U.K. 
(1850–1925) Werner Heisenberg – Germany (1901–1976) Nobel laureate Walter Heitler – Germany, Ireland (1904–1981) Hermann von Helmholtz – Germany (1821–1894) Charles H. Henry – United States (1937–2016) Joseph Henry – United States (1797–1878) John Herapath – U.K. (1790–1868) Carl Hermann – Germany (1898–1961) Gustav Ludwig Hertz – Germany (1887–1975) Nobel laureate Heinrich Rudolf Hertz – Germany (1857–1894) Karl Herzfeld – Austria, United States (1892–1978) Victor Francis Hess – Austria, United States (1883–1964) Nobel laureate Mahmoud Hessaby – Iran (1903–1992) Antony Hewish – U.K. (1924–2021) Nobel laureate Paul G. Hewitt – United States (born 1931) Peter Higgs – U.K. (born 1929) Nobel laureate George William Hill – United States (1838–1914) Gustave-Adolphe Hirn – France (1815–1890) Carol Hirschmugl - United States, professor of physics, laboratory director Dorothy Crowfoot Hodgkin – England (1910–1994) Robert Hofstadter – United States (1915–1990) Nobel laureate Helmut Hönl – Germany (1903–1981) Pervez Hoodbhoy – Pakistan (born 1950) Gerardus 't Hooft – Netherlands (born 1946) Nobel laureate Robert Hooke – England (1635–1703) John Hopkinson – United Kingdom (1849–1898) Johann Baptiste Horvath – Slovakia (1732–1799) William V. Houston – United States (1900–1968) Charlotte (née Riefenstahl) Houtermans – Germany (1899–1993) Fritz Houtermans – Netherlands, Germany, Austria (1903–1966) Archibald Howie – U.K. (born 1934) Fred Hoyle – U.K. (1915–2001) John Hubbard – U.K. (1931–1980) John H. Hubbell – United States (1925–2007) Edwin Powell Hubble – United States (1889–1953) Russell Alan Hulse – United States (born 1950) Nobel laureate Friedrich Hund – Germany (1896–1997) Tahir Hussain – Pakistan (1923–2010) Andrew D. Huxley – U.K. (born 1966) Christiaan Huygens – Netherlands (1629–1695) I Arthur Iberall – United States (1918–2002) Sumio Iijima – Japan (born 1939) John Iliopoulos – Greece (born 1940) Ataç İmamoğlu – Turkey, United States (born 1962) Elmer Imes – United States (1883–1941) Abram Ioffe – Russia (1880–1960) Nathan Isgur – United States, Canada (1947–2001) Ernst Ising – Germany (1900–1998) Jamal Nazrul Islam – Bangladesh (1939–2013) Werner Israel – Canada (born 1931) J Roman Jackiw – Poland, United States (born 1939) Shirley Ann Jackson – United States (born 1946) Boris Jacobi – Germany, Russia (1801–1874) Gregory Jaczko – United States (born 1970) Chennupati Jagadish – India, Australia (born 1957) Jainendra Jain – India (born 1960) Ratko Janev – North Macedonia (1939–2019) Andreas Jaszlinszky – Hungary (1715–1783) Ali Javan – Iran (1928–2016) Edwin Jaynes – United States (1922–1998) Antal István Jákli – Hungary (born 1958) Sir James Jeans – UK (1877–1946) Johannes Hans Daniel Jensen – Germany (1907–1973) Nobel laureate Deborah S. Jin – United States (born 1968) Anthony M. Johnson – United States (born 1954) Irène Joliot-Curie – France (1897–1956) Lorella Jones – United States (1943–1995) Pascual Jordan – Germany (1902–1980) Vania Jordanova - United States, physicist, space weather and geomagnetic storms Brian David Josephson – UK (born 1940) Nobel laureate James Prescott Joule – UK (1818–1889) Adolfas Jucys – Lithuania (1904–1974) Chang Kee Jung – South Korea, United States K Menas Kafatos – Greece, United States (born 1945) Takaaki Kajita – Japan (born 1959) Nobel laureate Michio Kaku – United States (born 1947) Theodor Kaluza – Germany (1885–1954) Heike Kamerlingh Onnes – Netherlands (1853–1926) Nobel laureate William R. Kanne – United States Charles K. 
Kao – China, Hong Kong, U.K., United States (1933–2018) Nobel laureate Pyotr Kapitsa – Russian Empire, Soviet Union (1894–1984) Nobel laureate Theodore von Kármán – Hungary, United States (1881–1963) aeronautical engineer Alfred Kastler – France (1902–1984) Nobel laureate Amrom Harry Katz – United States (1915–1997) Moshe Kaveh – Israel (born 1943) President of Bar-Ilan University Predhiman Krishan Kaw – India (1948–2017) Heinrich Kayser – Germany (1853–1940) Willem Hendrik Keesom – Netherlands (1876–1956) Edwin C. Kemble – United States (1889–1984) Henry Way Kendall – United States (1926–1999) Nobel laureate Johannes Kepler – Germany (1571–1630) John Kerr – Scotland (1824–1907) Wolfgang Ketterle – Germany (born 1957) Nobel laureate Isaak Markovich Khalatnikov – Soviet Union (1919–2021) Jim Al-Khalili – UK (born 1962) Abdul Qadeer Khan – Pakistan (1936–2021) Yulii Borisovich Khariton – Soviet Union, Russia (1904–1996) Erhard Kietz – Germany, United States (1909–1982) Jack Kilby – United States (1923–2005) electronics engineer, Nobel laureate Toichiro Kinoshita – Japan, United States (born 1925) Gustav Kirchhoff – Germany (1824–1887) Oskar Klein – Sweden (1894–1977) Hagen Kleinert – Germany (born 1941) Klaus von Klitzing – Germany (born 1943) Nobel laureate Jens Martin Knudsen – Denmark (1930–2005) Martin Knudsen – Denmark (1871–1949) Makoto Kobayashi – Japan (born 1944) Nobel laureate Arthur Korn – Germany (1870–1945) Masatoshi Koshiba – Japan (1926–2020) Nobel laureate Matthew Koss – United States (born 1961) Walther Kossel – Germany (1888–1956) Ashutosh Kotwal – United States (born 1965) Lew Kowarski – France (1907–1979) Hendrik Kramers – Netherlands (1894–1952) Serguei Krasnikov – Russia (born 1961) Adolf Kratzer – Germany (1893–1983) Lawrence M. Krauss – United States (born 1954) Herbert Kroemer – Germany (born 1928) Nobel laureate August Krönig – Germany (1822–1879) Ralph Kronig – Germany, United States (1904–1995) Nikolay Sergeevich Krylov – Soviet Union (1917–1947) Ryogo Kubo – Japan (1920–1995) Daya Shankar Kulshreshtha – India (born 1951) Igor Vasilyevich Kurchatov – Soviet Union (1903–1960) Behram Kursunoglu – Turkey (1922–2003) Polykarp Kusch – Germany (1911–1993) Nobel laureate L James W. LaBelle – United States Joseph-Louis Lagrange – France (1736–1813) Willis Lamb – United States (1913–2008) Nobel laureate Lev Davidovich Landau – Imperial Russia, Soviet Union (1908–1968) Nobel laureate Rolf Landauer – United States (1927–1999) Grigory Landsberg – Vologda (1890–1957) Kenneth Lane – United States Paul Langevin – France (1872–1946) Irving Langmuir – United States (1881–1957) Pierre-Simon Laplace – France (1749–1827) Joseph Larmor – U.K. (1857–1942) Cesar Lattes – Brazil (1924–2005) Max von Laue – Germany (1879–1960) Nobel laureate Robert Betts Laughlin – United States (born 1950) Nobel laureate Mikhail Lavrentyev – Kazan (1900–1980) Melvin Lax – United States (1922–2002) Ernest Lawrence – United States (1901–1958) Nobel laureate TH Laby – Australia (1880–1946) Pyotr Nikolaevich Lebedev – Imperial Russia (1866–1912) Leon Max Lederman – United States (1922–2018) Nobel laureate Benjamin Lee – Korea, United States (1935–1977) David Lee – United States (born 1931) Nobel laureate Tsung-Dao Lee – China, United States (born 1926) Nobel laureate Anthony James Leggett – U.K., United States (born 1938) Nobel laureate Gottfried Wilhelm Leibniz – Germany (1646–1716) Robert B. 
Leighton – United States (1919–1997) Georges Lemaître – Belgium (1894–1966) Philipp Lenard – Hungary, Germany (1862–1947) Nobel laureate John Lennard-Jones – U.K. (1894–1954) John Leslie – U.K. (1766–1832) Walter Lewin – Netherlands, United States (born 1936) Martin Lewis Perl – United States (1927–2014) Robert von Lieben – Austria-Hungary (1878–1913) Alfred-Marie Liénard – France (1869–1958) Evgeny Lifshitz – Soviet Union (1915–1985) David Lindley – United States (born 1956) John Linsley – United States (1925–2002) Chris Lintott – U.K. (born 1980) Gabriel Jonas Lippmann – France, Luxemburg (1845–1921) Nobel laureate Antony Garrett Lisi – United States (born 1968) Karl L. Littrow – Austria (1811–1877) Seth Lloyd – United States (born 1960) Oliver Lodge – U.K. (1851–1940) Maurice Loewy – Austria, France (1833–1907) Robert K. Logan – United States (born 1939) Mikhail Lomonosov – Denisovka (1711–1765) Alfred Lee Loomis – United States (1887–1975) Ramón E. López – United States (born 1959) Hendrik Lorentz – Netherlands (1853–1928) Nobel laureate Ludvig Lorenz – Denmark (1829–1891) Johann Josef Loschmidt – Austria (1821–1895) Oleg Losev – Tver (1903–1942) Archibald Low – U.K. (1888–1956) Per-Olov Löwdin – Sweden (1916–2000) Lucretius – Rome (98?–55BC) Aleksandr Mikhailovich Lyapunov – Imperial Russia (1857–1918) Joseph Lykken – United States (born 1957) M Arthur B. McDonald – Canada (born 1943) Nobel laureate Carolina Henriette Mac Gillavry – Netherlands (1904–1993) Ernst Mach – Austria-Hungary (1838–1916) Ray Mackintosh – U.K. Luciano Maiani – Italy, San Marino (born 1941) Theodore Maiman – United States (1927–2007) Arthur Maitland – U.K. (1925–1994) Ettore Majorana – Italy (1906–1938 presumed dead) Sudhansu Datta Majumdar – India (1915–1997) Richard Makinson – Australia (1913–1979) Juan Martín Maldacena – Argentina (born 1968) Étienne-Louis Malus – France (1775–1812) Leonid Isaakovich Mandelshtam – Imperial Russia, Soviet Union (1879–1944) Franz Mandl – U.K. (1923–2009) Charles Lambert Manneback – Belgium (1894–1975) Peter Mansfield – U.K. (1933–2017) Carlo Marangoni – Italy (1840–1925) M. Cristina Marchetti – Italy, United States (born 1955) Guglielmo Marconi – Italy (1874–1937) Nobel laureate Henry Margenau – Germany, United States (1901–1977) Nina Marković – Croatia, United States William Markowitz – United States (1907–1998) Robert Marshak – United States (1916–1992) Walter Marshall – U.K. (1932–1996) Toshihide Maskawa – Japan (1940–2021) Nobel laureate Harrie Massey – Australia (1908–1983) John Cromwell Mather – United States (born 1946) Nobel laureate James Clerk Maxwell – U.K. (1831–1879) Brian May – U.K. (born 1947) Maria Goeppert Mayer – Germany, United States (1906–1972) Ronald E. McNair – United States (1950–1986) Simon van der Meer – Netherlands (1925–2011) Nobel laureate Lise Meitner – Austria (1878–1968) Fulvio Melia – United States (born 1956) Macedonio Melloni – Italy (1798–1854) Adrian Melott – United States (born 1947) Thomas Corwin Mendenhall – United States (1841–1924) M. G. K. Menon – India (1928–2016) David Merritt – United States Albert Abraham Michelson – United States (1852–1931) Nobel laureate Arthur Alan Middleton – United States Stanislav Mikheyev – Russia (1940–2011) Robert Andrews Millikan – United States (1868–1953) Nobel laureate Arthur Milne – U.K. 
(1896–1950) Shiraz Minwalla – India (born 1972) Rabindra Nath Mohapatra – India, United States (born 1944) Kathryn Moler – United States Merritt Moore – United States (born 1988) Tanya Monro – Australia (born 1973) John J. Montgomery – United States (1858–1911) Jagadeesh Moodera – India, United States (born 1950) Henry Moseley – U.K. (1887–1915) Rudolf Mössbauer – Germany (1929–2011) Nobel laureate Nevill Mott – U.K. (1905–1996) Nobel laureate Ben Roy Mottelson – Denmark, United States (born 1926) Nobel laureate Amédée Mouchez – Spain, France (1821–1892) Ali Moustafa – Egypt (1898–1950) José Enrique Moyal – Palestine, France, U.K., United States, Australia (1910–1998) Karl Alexander Müller – Switzerland (born 1927) Nobel laureate Richard A. Muller – United States (born 1944) Robert S. Mulliken – United States (1896–1986) Pieter van Musschenbroek – Netherlands (1692–1762) N Yoichiro Nambu – Japan, United States (1921–2015) Nobel laureate Meenakshi Narain - experimental physicist, Professor of Physics at Brown University Jayant Narlikar – India (born 1938) Seth Neddermeyer – United States (1907–1988) Louis Néel – France (1904–2000) Nobel laureate Yuval Ne'eman – Israel (1925–2006) Ann Nelson – United States (1958–2019) John von Neumann – Austria-Hungary, United States (1903–1957) Simon Newcomb – United States (1835–1909) Sir Isaac Newton – England (1642–1727) Edward P. Ney – United States (1920–1996) Kendal Nezan – France, Kurdistan (born 1949) Holger Bech Nielsen – Denmark (born 1941) Leopoldo Nobili – Italy (1784–1835) Emmy Noether – Germany (1882–1935) Lothar Nordheim – Germany (1899–1985) Gunnar Nordström – Finland (1881–1923) Johann Gottlieb Nörremberg – Germany (1787–1862) Konstantin Novoselov – Soviet Union, U.K. (born 1974) Nobel laureate H. Pierre Noyes – United States (1923–2016) John Nye – U.K. (1923–2019) O Yuri Oganessian – Russia (born 1933) Georg Ohm – Germany (1789–1854) Hideo Ohno – Japan (born 1954) Susumu Okubo – Japan, United States (1930–2015) Sir Mark Oliphant – Australia (1901–2000) David Olive – U.K. (1937–2012) Gerard K. O'Neill – United States (1927–1992) Lars Onsager – Norway (1903–1976) Robert Oppenheimer – United States (1904–1967) Nicole Oresme – France (1325–1382) Yuri Orlov – Soviet Union, United States (1924–2020) Leonard Salomon Ornstein – Netherlands (1880–1941) Egon Orowan – Austria-Hungary, United States (1901–1989) Hans Christian Ørsted – Denmark (1777–1851) Douglas Dean Osheroff – United States (born 1945) Nobel laureate Mikhail Vasilievich Ostrogradsky – Russia (1801–1862) P Thanu Padmanabhan – India (1957–2021) Heinz Pagels – United States (1939–1988) Abraham Pais – Netherlands, United States (1918–2000) Wolfgang K. H. Panofsky – Germany, United States (1919–2007) Blaise Pascal – France (1623–1662) John Pasta – United States (1918–1984) Jogesh Pati – United States (born 1937) Petr Paucek – United States Stephen Paul – United States (1953–2012) Wolfgang Paul – Germany (1913–1993) Nobel laureate Wolfgang Pauli – Austria-Hungary (1900–1958) Nobel laureate Ruby Payne-Scott – Australia (1912-1981) George B. Pegram – United States (1876–1958) Rudolf Peierls – Germany, U.K. (1907–1995) Jean Peltier – France (1785–1845) Roger Penrose, mathematician – U.K. (born 1931) Wolf laureate Arno Allan Penzias, electrical engineer – U.S.A. 
(born 1933) Nobel laureate Martin Lewis Perl – United States (1927–2014) Nobel laureate Saul Perlmutter – United States (born 1959) Nobel laureate Jean Baptiste Perrin – France (1870–1942) Nobel laureate Konstantin Petrzhak – Soviet Union, Russia (1907–1998) Bernhard Philberth – Germany (1927–2010) William Daniel Phillips – United States (born 1948) Nobel laureate Max Planck – Germany (1858–1947) Nobel laureate Joseph Plateau – Belgium (1801–1883) Milton S. Plesset – United States (1908–1991) Ward Plummer – United States (1940–2020) Boris Podolsky – Taganrog (1896–1966) Henri Poincaré, mathematician – France (1854–1912) Eric Poisson – Canada (born 1965) Siméon Denis Poisson – France (1781–1840) mathematician Balthasar van der Pol – Netherlands (1889–1959) electrical engineer Joseph Polchinski – United States (1954–2018) Hugh David Politzer – United States (born 1949) Nobel laureate John Polkinghorne – U.K. (1930–2021) Alexander M. Polyakov – Russia, United States (born 1945) Bruno Pontecorvo – Italy, Soviet Union (1913–1993) Heraclides Ponticus – Greece (387–312 BC) Heinz Pose – Germany (1905–1975) Cecil Frank Powell – U.K. (1903–1969) Nobel laureate John Henry Poynting – U.K. (1852–1914) Ludwig Prandtl – Germany (1875–1953) Willibald Peter Prasthofer – Austria (1917–1993) Ilya Prigogine – Belgium (1917–2003) Alexander Prokhorov – Soviet, Russian (1916–2002) Nobel laureate William Prout – U.K. (1785–1850) Luigi Puccianti – Italy (1875–1952) Ivan Pulyuy – Ukraine (1845–1918) Mihajlo Idvorski Pupin – Serbia, United States (1858–1935) Edward Mills Purcell – United States (1912–1997) Nobel laureate Q Helen Quinn – Australia, United States (born 1943) R Raúl Rabadán – United States Gabriele Rabel – Austria, United Kingdom (1880–1963) Isidor Isaac Rabi – Austria, United States (1898–1988) Nobel laureate Giulio Racah – Italian-Israeli (1909–1965) James Rainwater – United States (1917–1986) Nobel laureate Mark G. Raizen – New York City United States (born 1955) Alladi Ramakrishnan – India (1923–2008) Chandrasekhara Venkata Raman – India (1888–1970) Nobel laureate Edward Ramberg – United States (1907–1995) Carl Ramsauer – Germany (1879–1955) Norman Foster Ramsey, Jr. – United States (1915–2011) Nobel laureate Lisa Randall – United States (born 1962) Riccardo Rattazzi – Italy (born 1964) Lord Rayleigh – U.K. (1842–1919) Nobel laureate René Antoine Ferchault de Réaumur – France (1683–1757) Sidney Redner – Canada, United States (born 1951) Martin John Rees – U.K. (born 1942) Hubert Reeves – Canada (born 1932) Tullio Regge – Italy (1931–2014) Frederick Reines – United States (1918–1998) Nobel laureate Louis Rendu – France (1789–1859) Osborne Reynolds – U.K. (1842–1912) Owen Willans Richardson – U.K. (1879–1959) Nobel laureate Robert Coleman Richardson – United States (1937–2013) Nobel laureate Burton Richter – United States (1931–2018) Nobel laureate Floyd K. Richtmyer – United States (1881–1939) Robert D. Richtmyer – (1910–2003) Charlotte Riefenstahl – Germany (1899–1993) Nikolaus Riehl – Germany (1901–1990) Adam Riess – United States (born 1969) Nobel laureate Karl-Heinrich Riewe – Germany Walther Ritz – Switzerland (1878–1909) Étienne-Gaspard Robert – Belgium (1763–1837) Heinrich Rohrer – Switzerland (1933–2013) Nobel laureate Joseph Romm – United States (born 1960) Wilhelm Conrad Röntgen – Germany (1845–1923) Nobel laureate Clemens C. J. 
Roothaan – Netherlands (1918–2019) Nathan Rosen – United States, Israel (1909–1995) Marshall Rosenbluth – United States (1927–2003) Yasha Rosenfeld – Israel (1948–2002) Carl-Gustav Arvid Rossby – Sweden, United States (1898–1957) Bruno Rossi – Italy, United States (1905–1993) Joseph Rotblat – Poland, U.K. (1908–2005) Carlo Rovelli – Italy (born 1956) Subrata Roy (scientist) – India, United States Carlo Rubbia – Italy (born 1934) Nobel laureate Vera Rubin – United States (1928–2016) Serge Rudaz – Canada, United States (born 1954) David Ruelle – Belgium, France (born 1935) Ernst August Friedrich Ruska – Germany (1906–1988) Nobel laureate Ernest Rutherford – New Zealand, U.K. (1871–1937) Janne Rydberg – Sweden (1854–1919) Martin Ryle – U.K. (1918–1984) Nobel laureate S Mendel Sachs – United States (1927–2012) Rainer K. Sachs – Germany and United States (1932- ) Robert G. Sachs – United States (1916–1999) Carl Sagan – United States (1934–1996) Georges-Louis le Sage – Switzerland (1724–1803) Georges Sagnac – France (1869–1926) Megh Nad Saha – Bengali India (1893–1956) Shoichi Sakata – Japan (1911–1970) Andrei Dmitrievich Sakharov – Soviet Union (1929–1989) Oscar Sala – Brazil (1922–2010) Abdus Salam – Pakistan (1926–1996) Nobel laureate Edwin Ernest Salpeter – Austria, Australia, United States (1924–2008) Anthony Ichiro Sanda – Japan, United States (born 1944) Antonella De Santo – Italy,
proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction. Structure Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure: Primary structure: the amino acid sequence. A protein is a polyamide. Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of different secondary structure can be present in the same protein molecule. Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even posttranslational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein. Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex. Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells. Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution proteins also undergo variation in structure through thermal vibration and the collision with other molecules. Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane. A special case of intramolecular hydrogen bonds within proteins, poorly shielded from water attack and hence promoting their own dehydration, are called dehydrons. Protein domains Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually also have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules (e.g. the SH3 domain binds to proline-rich sequences in other proteins). 
Sequence motif Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. 2 prolines [P], separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database. Cellular functions Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome. The chief characteristic of proteins that also allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine. Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or even be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks. As interactions between proteins are reversible, and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of the interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types. Enzymes The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription.
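Returning briefly to the sequence-motif idea above: the PxxP pattern is simple enough to locate computationally. The following sketch is purely illustrative (the peptide sequence is made up and is not from the text); it scans a sequence for two prolines separated by exactly two arbitrary residues.

import re

# Hypothetical peptide sequence, used only to illustrate motif scanning.
sequence = "MKTAYIAKQRPPLPVQRSTPAPPMPARHG"

# PxxP: a proline, two arbitrary residues, then another proline.
# The lookahead lets overlapping occurrences be reported as well.
pxxp = re.compile(r"(?=(P..P))")

for match in pxxp.finditer(sequence):
    position = match.start() + 1  # 1-based numbering, as sequences are usually counted
    print(f"PxxP motif '{match.group(1)}' at position {position}")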
Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous—as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme). The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site. Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes. Cell signaling and ligand binding Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell. Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high. Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins. Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.
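The orotate decarboxylase figures quoted above can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only: it converts 78 million years into seconds and divides by 18 milliseconds, recovering an acceleration of roughly 10¹⁷.

# Rough check of the rate acceleration quoted for orotate decarboxylase.
SECONDS_PER_YEAR = 365.25 * 24 * 3600              # about 3.156e7 seconds

uncatalysed_half_life_s = 78e6 * SECONDS_PER_YEAR  # 78 million years, in seconds
catalysed_half_life_s = 18e-3                      # 18 milliseconds

acceleration = uncatalysed_half_life_s / catalysed_half_life_s
print(f"rate acceleration ≈ {acceleration:.1e}")   # prints about 1.4e+17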
Structural proteins Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural functions, for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size. Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for cellular motility of single celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport. Protein evolution A key question in molecular biology is how proteins evolve, i.e. how can mutations (or rather changes in amino acid sequence) lead to new structures and functions? Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). In order to prevent dramatic consequences of mutations, a gene may be duplicated before it can mutate freely. However, this can also lead to complete loss of gene function and thus pseudo-genes. More commonly, single amino acid changes have limited consequences although some can change protein function substantially, especially in enzymes. For instance, many enzymes can change their substrate specificity by one or a few mutations. Changes in substrate specificity are facilitated by substrate promiscuity, i.e. the ability of many enzymes to bind and process multiple substrates. When mutations occur, the specificity of an enzyme can increase (or decrease) and thus its enzymatic activity. Thus, bacteria (or other organisms) can adapt to different food sources, including unnatural substrates such as plastic. Methods of study The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism. In silico studies use computational methods to study proteins. Protein purification To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles, and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. 
Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing. For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of different tags have been developed to help researchers purify specific proteins from complex mixtures. Cellular localization The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures is often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can be cleanly and efficiently visualized using microscopy, as shown in the figure opposite. Other methods for elucidating the cellular location of proteins requires the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose. Other possibilities exist, as well. For example, immunohistochemistry usually utilizes an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does increase the likelihood, and is more amenable to large-scale studies. Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique also uses an antibody to the protein of interest, along with classical electron microscopy techniques. 
The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of ultrastructural details as well as the protein of interest. Through another genetic engineering application known as site-directed
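The fluorescence colocalization mentioned in the localization discussion above is often quantified with Pearson's correlation between the pixel intensities of two channels. The sketch below uses synthetic arrays purely for illustration; real input would be the registered images of the GFP fusion and a compartment marker.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two registered fluorescence channels.
marker = rng.random((64, 64))                    # e.g. a compartment marker
gfp = 0.8 * marker + 0.2 * rng.random((64, 64))  # partially colocalized by construction

# Pearson correlation of pixel intensities: values near 1 indicate strong colocalization.
pcc = np.corrcoef(marker.ravel(), gfp.ravel())[0, 1]
print(f"Pearson colocalization coefficient ≈ {pcc:.2f}")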
a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10²³) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities. History The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" (Russian: «Курс истинной физической химии») before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations". Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule. The first scientific journal specifically in the field of physical chemistry was the German journal Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909. Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development. Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry.
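The opening claim, that some 10²³ particles can be summarized by a handful of macroscopic variables, can be made concrete with the ideal-gas relation p = N k_B T / V. The sketch below is an illustration under the ideal-gas assumption, not a calculation taken from the text.

# One mole of ideal gas: ~6 × 10^23 particles described by p, V and T alone.
K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

n_particles = 1.0 * N_A   # one mole
temperature = 298.15      # K
volume = 0.0248           # m^3 (about 24.8 litres)

pressure = n_particles * K_B * temperature / volume
print(f"p ≈ {pressure / 1000:.1f} kPa")   # roughly 100 kPa, i.e. about atmospheric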
See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship Journals Some journals that deal with physical chemistry include Zeitschrift für Physikalische Chemie (1887); Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997); Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905); Macromolecular Chemistry and Physics (1947); Annual Review of Physical Chemistry (1950); Molecular Physics (1957); Journal of Physical Organic Chemistry (1988); Journal of Physical Chemistry B (1997); ChemPhysChem (2000); Journal of Physical Chemistry C (2007); and Journal of Physical Chemistry Letters (from 2010, combined letters previously
degree of freedom (or variance) can be correlated with one another with help of phase rule. Reactions of electrochemical cells. Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics. Key concepts The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems. One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them. Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter. Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium. Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants
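The phase rule referred to at the start of this passage ties components, phases, and degrees of freedom together as F = C − P + 2. A minimal sketch of that bookkeeping (my own illustration, not from the text):

def degrees_of_freedom(components: int, phases: int) -> int:
    """Gibbs phase rule: F = C - P + 2, with temperature and pressure both free to vary."""
    return components - phases + 2

# Pure water (one component):
print(degrees_of_freedom(1, 1))  # 2: a single phase, so T and p vary independently
print(degrees_of_freedom(1, 2))  # 1: liquid and vapour coexist along the boiling curve
print(degrees_of_freedom(1, 3))  # 0: solid, liquid and vapour meet only at the triple point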
a spool is related to the spool's perimeter; if the length of the string were exact, it would equal the perimeter. Formulas The perimeter is the distance around a shape. Perimeters for more general shapes can be calculated, as any path, with L = ∫ ds, where L is the length of the path and ds is an infinitesimal line element. Both of these must be replaced by algebraic forms in order to be practically calculated. If the perimeter is given as a closed piecewise smooth plane curve γ(t) = (x(t), y(t)) with t in [a, b], then its length L can be computed as the integral from a to b of √(x′(t)² + y′(t)²) dt. A generalized notion of perimeter, which includes hypersurfaces bounding volumes in n-dimensional Euclidean spaces, is described by the theory of Caccioppoli sets. Polygons Polygons are fundamental to determining perimeters, not only because they are the simplest shapes but also because the perimeters of many shapes are calculated by approximating them with sequences of polygons tending to these shapes.
The first mathematician known to have used this kind of reasoning is Archimedes, who approximated the perimeter of a circle by surrounding it with regular polygons. The perimeter of a polygon equals the sum of the lengths of its sides (edges). In particular, the perimeter of a rectangle of width w and length ℓ equals 2w + 2ℓ. An equilateral polygon is a polygon which has all sides of the same length (for example, a rhombus is a 4-sided equilateral polygon). To calculate the perimeter of an equilateral polygon, one must multiply the common length of the sides by the number of sides. A regular polygon may be characterized by the number of its sides and by its circumradius, that is to say, the constant distance between its centre and each of its vertices. The length of its sides can be calculated using trigonometry. If R is a regular polygon's radius and n is the number of its sides, then its perimeter is 2nR sin(180°/n). A splitter of a triangle is a cevian (a segment from a vertex to the opposite side) that divides the perimeter into two equal lengths, this common length being called the semiperimeter of the triangle. The three splitters of a triangle all intersect each other at the Nagel point of the triangle. A cleaver of a triangle is a segment from the midpoint of a side of a triangle to the opposite side such that the perimeter is divided into two equal lengths. The three cleavers of a triangle all intersect each other at the triangle's Spieker center. Circumference of a circle The perimeter of a circle, often called the circumference, is proportional to its diameter and its radius. That is to say, there exists a constant number π (pi, the Greek p for perimeter), such that if P is the circle's perimeter and D its diameter then P = πD. In terms of the radius r of the circle, this formula becomes P = 2πr. To calculate a circle's perimeter, knowledge of its radius or diameter and the number π suffices. The problem is that π is not rational (it cannot be expressed as the quotient of two integers), nor is it algebraic (it is not a root of a polynomial equation with rational coefficients). So, obtaining an accurate approximation of π is important in the calculation. The computation of the digits of π is relevant to many fields, such as mathematical analysis, algorithmics and computer science. Perception of perimeter The perimeter and the area are two main measures of geometric figures. Confusing them is a common error, as well as believing that the greater one of them is, the greater the other must be. Indeed, a commonplace observation is that an enlargement (or a reduction) of a shape makes its area grow (or decrease) as well as its perimeter. For example, if a field is drawn on a 1/k scale map, the actual field perimeter can be calculated by multiplying the drawing perimeter by k. The real area is k² times the area of the shape on the map. Nevertheless, there is no relation between the area and the perimeter of an ordinary shape. For example,
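The formulas above translate directly into code. The sketch below is purely illustrative; it computes a rectangle's perimeter, a regular polygon's perimeter from its radius (using π/n radians, the equivalent of 180°/n), and shows the polygon value approaching the circle's circumference 2πr as the number of sides grows, in the spirit of Archimedes' approximation.

import math

def rectangle_perimeter(width: float, length: float) -> float:
    return 2 * (width + length)

def regular_polygon_perimeter(n_sides: int, radius: float) -> float:
    # Regular n-gon inscribed in a circle of radius R: perimeter = 2nR*sin(pi/n).
    return 2 * n_sides * radius * math.sin(math.pi / n_sides)

def circle_circumference(radius: float) -> float:
    return 2 * math.pi * radius

print(rectangle_perimeter(3, 5))              # 16
for n in (6, 24, 96, 384):                    # Archimedes famously used a 96-gon
    print(n, regular_polygon_perimeter(n, 1.0))
print("circle:", circle_circumference(1.0))   # 6.283185...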
into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot. As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate. Phase equilibrium Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ. Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase. At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils. Number of phases For a given composition, only certain phases are possible at a given temperature and pressure. The number and type of phases that will form is hard to predict and is usually determined by experiment. The results of such experiments can be plotted in phase diagrams. The phase diagram shown here is for a single component system. In this simple system, phases that are possible, depends only on pressure and temperature. The markings show points where two or more phases can co-exist in equilibrium. At temperatures and pressures away from the markings, there will be only one phase at equilibrium. In the diagram, the blue line marking the boundary between liquid and gas does not continue indefinitely, but terminates at a point called the critical point. As the temperature and pressure approach the critical point, the properties of the liquid and gas become progressively more similar. At the critical point, the liquid and gas become indistinguishable. Above the critical point, there are no longer separate liquid and gas phases: there is only a generic fluid phase referred to as a supercritical fluid. In water, the critical point occurs at around 647 K (374 °C or 705
in the organization of matter, such as a change from liquid to solid or a more subtle change from one crystal structure to another, this latter usage is similar to the use of "phase" as a synonym for state of matter. However, the state of matter and phase diagram usages are not commensurate with the formal definition given above and the intended meaning must be determined in part from the context in which the term is used. Types of phases Distinct phases may be described as different states of matter such as gas, liquid, solid, plasma or Bose–Einstein condensate. Useful mesophases between solid and liquid form other states of matter. Distinct phases may also exist within a given state of matter. As shown in the diagram for iron alloys, several phases exist for both the solid and liquid states. Phases may also be differentiated based on solubility as in polar (hydrophilic) or non-polar (hydrophobic). A mixture of water (a polar liquid) and oil (a non-polar liquid) will spontaneously separate into two phases. Water has a very low solubility (is insoluble) in oil, and oil has a low solubility in water. Solubility is the maximum amount of a solute that can dissolve in a solvent before the solute ceases to dissolve and remains in a separate phase. A mixture can separate into more than two liquid phases and the concept of phase separation extends to solids, i.e., solids can form solid solutions or crystallize into distinct crystal phases. Metal pairs that are mutually soluble can form alloys, whereas metal pairs that are mutually insoluble cannot. As many as eight immiscible liquid phases have been observed. Mutually immiscible liquid phases are formed from water (aqueous phase), hydrophobic organic solvents, perfluorocarbons (fluorous phase), silicones, several different metals, and also from molten phosphorus. Not all organic solvents are completely miscible, e.g. a mixture of ethylene glycol and toluene may separate into two distinct organic phases. Phases do not need to macroscopically separate spontaneously. Emulsions and colloids are examples of immiscible phase pair combinations that do not physically separate. Phase equilibrium Left to equilibration, many compositions will form a uniform single phase, but depending on the temperature and pressure even a single substance may separate into two or more distinct phases. Within each phase, the properties are uniform but between the two phases properties differ. Water in a closed jar with an air space over it forms a two-phase system. Most of the water is in the liquid phase, where it is held by the mutual attraction of water molecules. Even at equilibrium molecules are constantly in motion and, once in a while, a molecule in the liquid phase gains enough kinetic energy to break away from the liquid phase and enter the gas phase. Likewise, every once in a while a vapor molecule collides with the liquid surface and condenses into the liquid. At equilibrium, evaporation and condensation processes exactly balance and there is no net change in the volume of either phase. At room temperature and pressure, the water jar reaches equilibrium when the air over the water has a humidity of about 3%. This percentage increases as the temperature goes up. At 100 °C and atmospheric pressure, equilibrium is not reached until the air is 100% water. If the liquid is heated a little over 100 °C, the transition from liquid to gas will occur not only at the surface but throughout the liquid volume: the water boils. 
History of galactic astronomy – history of the study of our own Milky Way galaxy and all its contents.
History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe, concerned with fundamental questions about its formation and evolution.
History of planetary science – history of the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them.
History of stellar astronomy – history of the natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters, and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation).
History of atmospheric physics – history of the study of the application of physics to the atmosphere.
History of atomic, molecular, and optical physics – history of the study of how matter and light interact.
History of biophysics – history of the study of physical processes relating to biology.
History of medical physics – history of the application of physics concepts, theories and methods to medicine.
History of neurophysics – history of the branch of biophysics dealing with the nervous system.
History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics.
History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists.
History of condensed matter physics – history of the study of the physical properties of condensed phases of matter.
History of cryogenics – history of the study of the production of very low temperatures (below −150 °C, −238 °F, or 123 K) and the behavior of materials at those temperatures.
History of dynamics – history of the study of the causes of motion and of changes in motion.
History of econophysics – history of the interdisciplinary research field that applies theories and methods originally developed by physicists to problems in economics.
History of electromagnetism – history of the branch of science concerned with the forces that occur between electrically charged particles.
History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods.
History of materials physics – history of the use of physics to describe materials in many different ways, such as force, heat, light and mechanics.
History of mathematical physics – history of the application of mathematics to problems in physics and the development of mathematical methods for such applications and for the formulation of physical theories.
History of mechanics – history of the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment.
History of biomechanics – history of the study of the structure and function of biological systems such as humans, animals, plants, organs, and cells by means of the methods of mechanics.
History of classical mechanics – history of one of the two major sub-fields of mechanics, concerned with the set of physical laws describing the motion of bodies under the action of a system of forces.
History of continuum mechanics – history of the branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles.
History of fluid mechanics – history of the study of fluids and the forces on them.
History of quantum mechanics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant.
History of thermodynamics – history of the branch of physical science concerned with heat and its relation to other forms of energy and work.
History of nuclear physics – history of the field of physics that studies the building blocks and interactions of atomic nuclei.
History of optics – history of the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it.
History of particle physics – history of the branch of physics that studies the existence and interactions of particles that are the constituents of what is usually referred to as matter or radiation.
History of psychophysics – history of the field that quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce.
History of plasma physics – history of the study of plasma, a state of matter similar to gas in which a certain portion of the particles are ionized.
History of polymer physics – history of the field of physics that studies polymers: their fluctuations and mechanical properties, as well as the kinetics of reactions involving degradation and polymerization of polymers and monomers.
History of quantum physics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant.
History of the theory of relativity – history of special and general relativity, the theories of space, time, and gravitation developed principally by Albert Einstein.
History of statics – history of the branch of mechanics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity.
History of solid state physics – history of the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy.
History of vehicle dynamics – history of the dynamics of vehicles, here assumed to be ground vehicles.
History of chemistry – history of the physical science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to the chemical reactions.
History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials.
History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation.
History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions.
History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, and other disciplines.
History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes.
History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation.
History of bioinorganic chemistry – history of the field that examines the role of metals in biology.
History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry.
History of biophysical chemistry – history of the branch of chemistry that covers a broad spectrum of research activities involving biological systems.
History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places.
History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system.
History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with the design, chemical synthesis, and development for market of pharmaceutical agents (drugs).
History of pharmacology – history of the branch of medicine and biology concerned with the study of drug action.
History of natural product chemistry – history of the study of chemical compounds or substances produced by living organisms – compounds found in nature that usually have a pharmacological or biological activity for use in pharmaceutical drug discovery and drug design.
History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function.
History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems.
History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry.
History of molecular mechanics – history of the use of Newtonian mechanics to model molecular systems.
History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors.
History of flow chemistry – history of chemistry in which a chemical reaction is run in a continuously flowing stream rather than in batch production.
History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry.
History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions.
History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology.
History of ocean chemistry – history of the study of the chemistry of marine environments including the influences of different variables.
History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth.
History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth.
History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds.
History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes, and nuclear properties.
History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable).
History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives.
History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials.
History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal.
History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules.
History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts.
History of chemical kinetics – history of the study of rates of chemical processes.
History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics.
History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution.
History of femtochemistry – history of the science that studies chemical reactions on extremely short timescales, approximately 10⁻¹⁵ seconds (one femtosecond, hence the name).
History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena.
History of mechanochemistry – history of the coupling of the mechanical and the chemical phenomena on a molecular scale, including mechanical breakage, chemical behavior of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines.
History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules.
History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems.
History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems.
History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules.
History of supramolecular chemistry – history of the area of chemistry beyond the molecules, focusing on chemical systems made up of a discrete number of assembled molecular subunits or components.
History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations.
History of phytochemistry – history of the study, in the strict sense of the word, of phytochemicals, the chemicals derived from plants.
2–4 grams per gallon were needed). The "low percentage" solution ultimately led to the discovery of tetraethyllead (TEL) in December 1921, a product of the research of Midgley and Boyd and the defining component of leaded gasoline. This innovation started a cycle of improvements in fuel efficiency that coincided with the large-scale development of oil refining to provide more products in the boiling range of gasoline. Ethanol could not be patented but TEL could, so Kettering secured a patent for TEL and began promoting it instead of other options. The dangers of compounds containing lead were well established by then, and Kettering was directly warned by Robert Wilson of MIT, Reid Hunt of Harvard, Yandell Henderson of Yale, and Erik Krause of the University of Potsdam in Germany about its use. Krause had worked on tetraethyllead for many years and called it "a creeping and malicious poison" that had killed a member of his dissertation committee. On 27 October 1924, newspaper articles around the nation told of the workers at the Standard Oil refinery near Elizabeth, New Jersey who were producing TEL and were suffering from lead poisoning. By 30 October, the death toll had reached five. In November, the New Jersey Labor Commission closed the Bayway refinery and a grand jury investigation was started, which resulted in no charges by February 1925. Leaded gasoline sales were banned in New York City, Philadelphia, and New Jersey. General Motors, DuPont, and Standard Oil, who were partners in Ethyl Corporation, the company created to produce TEL, began to argue that there were no alternatives to leaded gasoline that would maintain fuel efficiency and still prevent engine knocking. After several flawed, industry-funded studies reported that TEL-treated gasoline was not a public health issue, the controversy subsided. United States, 1930–1941 In the five years prior to 1929, a great amount of experimentation was conducted on different testing methods for determining fuel resistance to abnormal combustion. It appeared that engine knocking depended on a wide variety of parameters including compression, ignition timing, cylinder temperature, air-cooled or water-cooled engines, chamber shapes, intake temperatures, lean or rich mixtures, and others. This led to a confusing variety of test engines that gave conflicting results, and no standard rating scale existed. By 1929, it was recognized by most aviation gasoline manufacturers and users that some kind of antiknock rating must be included in government specifications. In 1929, the octane rating scale was adopted, and in 1930, the first octane specification for aviation fuels was established. In the same year, the U.S. Army Air Corps specified fuels rated at 87 octane for its aircraft as a result of studies it had conducted. During this period, research showed that hydrocarbon structure was extremely important to the antiknock properties of fuel. Straight-chain paraffins in the boiling range of gasoline had low antiknock qualities, while ring-shaped molecules such as aromatic hydrocarbons (for example benzene) had higher resistance to knocking. This development led to the search for processes that would produce more of these compounds from crude oils than achieved under straight distillation or thermal cracking.
Research by the major refiners led to the development of processes involving isomerization of cheap and abundant butane to isobutane, and alkylation to join isobutane and butylenes to form isomers of octane such as "isooctane", which became an important component in aviation fuel blending. To further complicate the situation, as engine performance increased, the altitude that aircraft could reach also increased, which resulted in concerns about the fuel freezing. The average temperature decrease is roughly 2 °C (3.6 °F) per 1,000 feet of increase in altitude, and at the altitudes these aircraft could reach, the temperature can approach −40 °C (−40 °F) or below. Additives like benzene, with a freezing point of 5.5 °C (42 °F), would freeze in the gasoline and plug fuel lines. Substituted aromatics such as toluene, xylene, and cumene, combined with limited benzene, solved the problem. By 1935, there were seven different aviation grades based on octane rating, two Army grades, four Navy grades, and three commercial grades including the introduction of 100-octane aviation gasoline. By 1937, the Army established 100-octane as the standard fuel for combat aircraft, and to add to the confusion, the government now recognized 14 different grades, in addition to 11 others in foreign countries. With some companies required to stock 14 grades of aviation fuel, none of which could be interchanged, the effect on the refiners was negative. The refining industry could not concentrate on large capacity conversion processes for so many different grades and a solution had to be found. By 1941, principally through the efforts of the Cooperative Fuel Research Committee, the number of grades for aviation fuels was reduced to three: 73, 91, and 100 octane. The development of 100-octane aviation gasoline on an economic scale was due in part to Jimmy Doolittle, who had become Aviation Manager of Shell Oil Company. He convinced Shell to invest in refining capacity to produce 100-octane on a scale that nobody yet needed, since no aircraft existed that required a fuel that nobody yet made. Some fellow employees would call his effort "Doolittle's million-dollar blunder" but time would prove Doolittle correct. Before this, the Army had considered 100-octane tests using pure octane but at $25 a gallon, the price prevented this from happening. In 1929, the Stanavo Specification Board, Inc. was organized by the Standard Oil companies of California, Indiana, and New Jersey to improve aviation fuels and oils and by 1935 had placed their first 100 octane fuel on the market, Stanavo Ethyl Gasoline 100. It was used by the Army, engine manufacturers and airlines for testing and for air racing and record flights. By 1936 tests at Wright Field using the new, cheaper alternatives to pure octane proved the value of 100 octane fuel, and both Shell and Standard Oil would win the contract to supply test quantities for the Army. By 1938 the price was down to 17.5 cents a gallon, only 2.5 cents more than 87 octane fuel. By the end of WW II the price would be down to 16 cents a gallon. In 1937, Eugene Houdry developed the Houdry process of catalytic cracking, which produced a high-octane gasoline base stock that was superior to the thermally cracked product since it did not contain a high concentration of olefins. In 1940, there were only 14 Houdry units in operation in the U.S.; by 1943, this had increased to 77, either of the Houdry process or of the Thermofor Catalytic or Fluid Catalyst type. The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output.
A fuel designated grade 130 would produce 130 percent as much power in an engine as it would when running on pure iso-octane. During WW II, fuels above 100-octane were given two ratings, one for rich mixture and one for lean, and these would be called 'performance numbers' (PN). 100-octane aviation gasoline would be referred to as 130/100 grade. World War II Germany Oil and its byproducts, especially high-octane aviation gasoline, would prove to be a driving concern for how Germany conducted the war. As a result of the lessons of World War I, Germany had stockpiled oil and gasoline for its blitzkrieg offensive and had annexed Austria, adding 18,000 barrels per day of oil production, but this was not sufficient to sustain the planned conquest of Europe. Because captured supplies and oil fields would be necessary to fuel the campaign, the German high command created a special squad of oil-field experts drawn from the ranks of domestic oil industries. They were sent in to put out oil-field fires and get production going again as soon as possible. But capturing oil fields remained an obstacle throughout the war. During the Invasion of Poland, German estimates of gasoline consumption turned out to be vastly too low. Heinz Guderian and his Panzer divisions consumed far more gasoline than had been estimated on the drive to Vienna. When they were engaged in combat across open country, gasoline consumption almost doubled. On the second day of battle, a unit of the XIX Corps was forced to halt when it ran out of gasoline. One of the major objectives of the Polish invasion was Poland's oil fields, but the Soviets invaded and captured 70 percent of the Polish production before the Germans could reach it. Through the German-Soviet Commercial Agreement (1940), Stalin agreed in vague terms to supply Germany with additional oil equal to that produced by now Soviet-occupied Polish oil fields at Drohobych and Boryslav in exchange for hard coal and steel tubing. Even after the Nazis conquered the vast territories of Europe, this did not help the gasoline shortage. This area had never been self-sufficient in oil before the war. In 1938, the area that would become Nazi-occupied produced 575,000 barrels per day. In 1940, total production under German control was still far from sufficient. By the spring of 1941, with German gasoline reserves depleted, Adolf Hitler saw the invasion of Russia to seize the Polish oil fields and the Russian oil in the Caucasus as the solution to the German gasoline shortage. As early as July 1941, following the 22 June start of Operation Barbarossa, certain Luftwaffe squadrons were forced to curtail ground support missions due to shortages of aviation gasoline. On 9 October, the German quartermaster general estimated that army vehicles were running short of their gasoline requirements. Virtually all of Germany's aviation gasoline came from synthetic oil plants that hydrogenated coals and coal tars. These processes had been developed during the 1930s as an effort to achieve fuel independence. There were two grades of aviation gasoline produced in volume in Germany, the B-4 or blue grade and the C-3 or green grade, which accounted for about two-thirds of all production. B-4 was equivalent to 89-octane and the C-3 was roughly equal to the U.S. 100-octane, though its lean-mixture rating was around 95 octane, poorer than the U.S. version. Maximum output achieved in 1943 reached 52,200 barrels a day before the Allies decided to target the synthetic fuel plants.
Through captured enemy aircraft and analysis of the gasoline found in them, both the Allies and the Axis powers were aware of the quality of the aviation gasoline being produced, and this prompted an octane race to achieve the advantage in aircraft performance. Later in the war, the C-3 grade was improved to where it was equivalent to the U.S. 150 grade (rich mixture rating). Japan Japan, like Germany, had almost no domestic oil supply and by the late 1930s, produced only 7% of its own oil while importing the rest – 80% from the United States. As Japanese aggression grew in China (USS Panay incident) and news reached the American public of Japanese bombing of civilian centers, especially the bombing of Chungking, public opinion began to support a U.S. embargo. A Gallup poll in June 1939 found that 72 percent of the American public supported an embargo on war materials to Japan. This increased tensions between the U.S. and Japan, and it led to the U.S. placing restrictions on exports. In July 1940, the U.S. issued a proclamation that banned the export of 87 octane or higher aviation gasoline to Japan. This ban did not hinder the Japanese, as their aircraft could operate with fuels below 87 octane and, if needed, they could add TEL to increase the octane. As it turned out, Japan bought 550 percent more sub-87 octane aviation gasoline in the five months after the July 1940 ban on higher octane sales. The possibility of a complete ban on gasoline from America created friction within the Japanese government over what action to take; Japan sought to secure more supplies from the Dutch East Indies and demanded greater oil exports from the exiled Dutch government after the Battle of the Netherlands. This action prompted the U.S. to move its Pacific fleet from Southern California to Pearl Harbor to help stiffen British resolve to stay in Indochina. The Japanese invasion of French Indochina in September 1940 brought great concerns about a possible Japanese invasion of the Dutch East Indies to secure their oil. The day after the U.S. banned all exports of steel and iron scrap, Japan signed the Tripartite Pact, and this led Washington to fear that a complete U.S. oil embargo would prompt the Japanese to invade the Dutch East Indies. On 16 June 1941 Harold Ickes, who had been appointed Petroleum Coordinator for National Defense, stopped a shipment of oil from Philadelphia to Japan in light of the oil shortage on the East coast due to increased exports to the Allies. He also telegrammed all oil suppliers on the East coast not to ship any oil to Japan without his permission. President Roosevelt countermanded Ickes' orders, telling Ickes that "... I simply have not got enough Navy to go around and every little episode in the Pacific means fewer ships in the Atlantic". On 25 July 1941, the U.S. froze all Japanese financial assets, and licenses would be required for each use of the frozen funds, including oil purchases that could produce aviation gasoline. On 28 July 1941, Japan invaded southern Indochina. The debate inside the Japanese government over its oil and gasoline situation was leading toward an invasion of the Dutch East Indies, but this would mean war with the U.S., whose Pacific fleet was a threat to their flank. This situation led to the decision to attack the U.S. fleet at Pearl Harbor before proceeding with the Dutch East Indies invasion. On 7 December 1941, Japan attacked Pearl Harbor, and the next day the Netherlands declared war on Japan, which initiated the Dutch East Indies campaign.
But the Japanese missed a golden opportunity at Pearl Harbor. "All of the oil for the fleet was in surface tanks at the time of Pearl Harbor," Admiral Chester Nimitz, who became Commander in Chief of the Pacific Fleet, was later to say. "We had about [4.5 million barrels] of oil out there and all of it was vulnerable to .50 caliber bullets. Had the Japanese destroyed the oil," he added, "it would have prolonged the war another two years." United States Early in 1944, William Boyd, president of the American Petroleum Institute and chairman of the Petroleum Industry War Council, said: "The Allies may have floated to victory on a wave of oil in World War I, but in this infinitely greater World War II, we are flying to victory on the wings of petroleum". In December 1941 the United States had 385,000 oil wells producing 1.4 billion barrels of oil a year, and 100-octane aviation gasoline capacity was at 40,000 barrels a day. By 1944, the U.S. was producing over 1.5 billion barrels a year (67 percent of world production) and the petroleum industry had built 122 new plants for the production of 100-octane aviation gasoline; capacity was over 400,000 barrels a day – an increase of more than ten-fold. It was estimated that the U.S. was producing enough 100-octane aviation gasoline to permit the dropping of bombs on the enemy every day of the year. Records of gasoline consumption by the Army prior to June 1943 were uncoordinated, as each supply service of the Army purchased its own petroleum products and no centralized system of control or record-keeping existed. On 1 June 1943 the Army created the Fuels and Lubricants Division of the Quartermaster Corps, and from their records they tabulated that the Army (excluding fuels and lubricants for aircraft) purchased over 2.4 billion gallons of gasoline for delivery to overseas theaters from 1 June 1943 through August 1945. That figure does not include gasoline used by the Army inside the United States. Motor fuel production had declined from 701,000,000 barrels in 1941 down to 608,000,000 barrels in 1943. World War II marked the first time in U.S. history that gasoline was rationed and the government imposed price controls to prevent inflation. Gasoline consumption per automobile declined from 755 gallons per year in 1941 down to 540 gallons in 1943, with the goal of preserving rubber for tires, since the Japanese had cut the U.S. off from over 90 percent of its rubber supply, which had come from the Dutch East Indies, and the U.S. synthetic rubber industry was in its infancy. Average gasoline prices went from a record low of $0.1275 per gallon ($0.1841 with taxes) in 1940 to $0.1448 per gallon ($0.2050 with taxes) in 1945. Even with the world's largest aviation gasoline production, the U.S. military still found that more was needed. Throughout the duration of the war, aviation gasoline supply was always behind requirements and this impacted training and operations. The reason for this shortage developed before the war even began. The free market did not support the expense of producing 100-octane aviation fuel in large volume, especially during the Great Depression. Iso-octane in the early development stage cost $30 a gallon and even by 1934, it was still $2 a gallon compared to $0.18 for motor gasoline when the Army decided to experiment with 100-octane for its combat aircraft. Though only 3 percent of U.S.
combat aircraft in 1935 could take full advantage of the higher octane due to low compression ratios, the Army saw that the need for increasing performance warranted the expense and purchased 100,000 gallons. By 1937 the Army established 100-octane as the standard fuel for combat aircraft, and by 1939 production was only 20,000 barrels a day. In effect, the U.S. military was the only market for 100-octane aviation gasoline, and as war broke out in Europe this created a supply problem that persisted for the duration of the war. With the war in Europe a reality in 1939, all predictions of 100-octane consumption were outrunning all possible production. Neither the Army nor the Navy could contract more than six months in advance for fuel, and they could not supply the funds for plant expansion. Without a long-term guaranteed market, the petroleum industry would not risk its capital to expand production for a product that only the government would buy. The solution to the expansion of storage, transportation, finances, and production was the creation of the Defense Supplies Corporation on 19 September 1940. The Defense Supplies Corporation would buy, transport and store all aviation gasoline for the Army and Navy at cost plus a carrying fee. When the Allied breakout after D-Day found their armies stretching their supply lines to a dangerous point, the makeshift solution was the Red Ball Express. But even this soon was inadequate. The trucks in the convoys had to drive longer distances as the armies advanced, and they were consuming a greater percentage of the same gasoline they were trying to deliver. In 1944, General George Patton's Third Army finally stalled just short of the German border after running out of gasoline. The general was so upset at the arrival of a truckload of rations instead of gasoline that he was reported to have shouted: "Hell, they send us food, when they know we can fight without food but not without oil." The solution had to wait for the repair of the railroad lines and bridges so that the more efficient trains could replace the gasoline-consuming truck convoys. United States, 1946 to present The development during WW II of jet engines burning kerosene-based fuels produced a propulsion system superior to anything internal combustion engines could offer, and the U.S. military forces gradually replaced their piston combat aircraft with jet-powered planes. This development would essentially remove the military need for ever-increasing octane fuels and eliminated government support for the refining industry to pursue the research and production of such exotic and expensive fuels. Commercial aviation was slower to adapt to jet propulsion, and until 1958, when the Boeing 707 first entered commercial service, piston-powered airliners still relied on aviation gasoline. But commercial aviation had greater economic concerns than the maximum performance that the military could afford. As octane numbers increased, so did the cost of gasoline, but the incremental increase in efficiency becomes smaller as the compression ratio goes up. This reality set a practical limit to how high compression ratios could increase relative to how expensive the gasoline would become. Last produced in 1955, the Pratt & Whitney R-4360 Wasp Major was using 115/145 aviation gasoline and producing 1 horsepower per cubic inch at a 6.7 compression ratio (turbo-supercharging would increase this) and 1 pound of engine weight to produce 1.1 horsepower.
This compares to the Wright Brothers engine needing almost 17 pounds of engine weight to produce 1 horsepower. The US automobile industry after WW II could not take advantage of the high octane fuels then available. Automobile compression ratios increased from an average of 5.3-to-1 in 1931 to just 6.7-to-1 in 1946. The average octane number of regular-grade motor gasoline increased from 58 to 70 during the same time. Military aircraft were using expensive turbo-supercharged engines that cost at least 10 times as much per horsepower as automobile engines and had to be overhauled every 700 to 1,000 hours. The automobile market could not support such expensive engines. It would not be until 1957 that the first US automobile manufacturer could mass-produce an engine that would produce one horsepower per cubic inch, the Chevrolet 283 hp/283 cubic inch V-8 engine option in the Corvette. At $485, this was an expensive option that few consumers could afford, and it appealed only to the performance-oriented consumer market willing to pay for the premium fuel required. This engine had an advertised compression ratio of 10.5-to-1, and the 1958 AMA Specifications stated that the octane requirement was 96-100 RON. With the aluminum intake introduced in 1959, the engine's weight per horsepower improved further still. In the 1950s oil refineries started to focus on high octane fuels, and then detergents were added to gasoline to clean the jets in carburetors. The 1970s witnessed greater attention to the environmental consequences of burning gasoline. These considerations led to the phasing out of TEL and its replacement by other antiknock compounds. Subsequently, low-sulfur gasoline was introduced, in part to preserve the catalysts in modern exhaust systems. Chemical analysis and production Commercial gasoline is a mixture of a large number of different hydrocarbons. Gasoline is produced to meet a host of engine performance specifications and many different compositions are possible. Hence, the exact chemical composition of gasoline is undefined. The performance specification also varies with season, requiring more volatile blends (due to added butane) during winter, in order to be able to start a cold engine. At the refinery, the composition varies according to the crude oils from which it is produced, the type of processing units present at the refinery, how those units are operated, and which hydrocarbon streams (blendstocks) the refinery opts to use when blending the final product. Gasoline is produced in oil refineries. Roughly 19.5 US gallons (74 litres) of gasoline is derived from a barrel of crude oil. Material separated from crude oil via distillation, called virgin or straight-run gasoline, does not meet specifications for modern engines (particularly the octane rating; see below), but can be pooled into the gasoline blend. The bulk of a typical gasoline consists of a homogeneous mixture of small, relatively lightweight hydrocarbons with between 4 and 12 carbon atoms per molecule (commonly referred to as C4–C12). It is a mixture of paraffins (alkanes), olefins (alkenes), and cycloalkanes (naphthenes). The usage of the terms paraffin and olefin in place of the standard chemical nomenclature alkane and alkene, respectively, is particular to the oil industry. The actual ratio of molecules in any gasoline depends upon: the oil refinery that makes the gasoline, as not all refineries have the same set of processing units; the crude oil feed used by the refinery; the grade of gasoline (in particular, the octane rating).
The various refinery streams blended to make gasoline have different characteristics. Some important streams include the following: Straight-run gasoline, commonly referred to as naphtha, is distilled directly from crude oil. Once the leading source of fuel, its low octane rating required lead additives. It is low in aromatics (depending on the grade of the crude oil stream) and contains some cycloalkanes (naphthenes) and no olefins (alkenes). Between 0 and 20 percent of this stream is pooled into the finished gasoline because the quantity of this fraction in the crude is less than fuel demand and the fraction's Research Octane Number (RON) is too low. The chemical properties (namely RON and Reid vapor pressure (RVP)) of the straight-run gasoline can be improved through reforming and isomerization. However, before feeding those units, the naphtha needs to be split into light and heavy naphtha. Straight-run gasoline can also be used as a feedstock for steam-crackers to produce olefins. Reformate, produced in a catalytic reformer, has a high octane rating with high aromatic content and relatively low olefin content. Most of the benzene, toluene, and xylene (the so-called BTX hydrocarbons) are more valuable as chemical feedstocks and are thus removed to some extent. Catalytic cracked gasoline, or catalytic cracked naphtha, produced with a catalytic cracker, has a moderate octane rating, high olefin content, and moderate aromatic content. Hydrocrackate (heavy, mid, and light), produced with a hydrocracker, has a medium to low octane rating and moderate aromatic levels. Alkylate is produced in an alkylation unit, using isobutane and olefins as feedstocks. Finished alkylate contains no aromatics or olefins and has a high MON (Motor Octane Number). Isomerate is obtained by isomerizing low-octane straight-run gasoline into iso-paraffins (branched-chain alkanes, such as isooctane). Isomerate has a medium RON and MON, but no aromatics or olefins. Butane is usually blended in the gasoline pool, although the quantity of this stream is limited by the RVP specification. The terms above are the jargon used in the oil industry, and the terminology varies. Currently, many countries set limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. Such regulations have led to an increasing preference for alkane isomers, such as isomerate or alkylate, as their octane rating is higher than that of n-alkanes. In the European Union, the benzene limit is set at 1% by volume for all grades of automotive gasoline. This is usually achieved by avoiding feeding C6 hydrocarbons, in particular cyclohexane, to the reformer unit, where they would be converted to benzene. Therefore, only (desulfurized) heavy virgin naphtha (HVN) is fed to the reformer unit. Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular organosulfur compounds (which are usually removed at the refinery). Physical properties Density The specific gravity of gasoline ranges from 0.71 to 0.77, with higher densities having a greater volume fraction of aromatics. Finished marketable gasoline is traded (in Europe) with a standard reference density, and its price is escalated or de-escalated according to its actual density. Because of its low density, gasoline floats on water, and therefore water cannot generally be used to extinguish a gasoline fire unless applied in a fine mist.
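As a rough illustration of how the refinery streams described above combine into a finished gasoline, the sketch below computes a volume-weighted estimate of blend RON. This is a simplification under stated assumptions: real octane blending is non-linear and refiners use empirical blending numbers, and the stream fractions and RON values here are illustrative figures only, not data taken from the text or from any particular refinery.

```python
# Sketch: first-order estimate of a gasoline blend's RON from its component
# streams. Assumptions: linear volume-weighted blending (real blending is
# non-linear and uses empirical blending octane numbers), and illustrative
# stream fractions and RON values rather than data from any actual refinery.

from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    volume_fraction: float  # fraction of the finished blend
    ron: float              # research octane number of the stream

blend = [
    Stream("reformate", 0.30, 98.0),
    Stream("catalytic cracked naphtha", 0.30, 92.0),
    Stream("alkylate", 0.15, 95.0),
    Stream("isomerate", 0.10, 86.0),
    Stream("light straight-run naphtha", 0.10, 70.0),
    Stream("butane", 0.05, 93.0),
]

# The volume fractions of all streams must account for the whole blend.
assert abs(sum(s.volume_fraction for s in blend) - 1.0) < 1e-9

estimated_ron = sum(s.volume_fraction * s.ron for s in blend)
print(f"Estimated blend RON: {estimated_ron:.1f}")  # roughly 91-92 for these figures
```

Raising the share of high-octane streams such as reformate or alkylate, or cutting back on low-octane straight-run naphtha, moves the estimate up, which mirrors the blending trade-offs described in the paragraph above.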
Stability Quality gasoline should be stable for six months if stored properly, but as gasoline is a mixture rather than a single compound, it will break down slowly over time due to the separation of the components. Gasoline stored for a year can most likely still be burned in an internal combustion engine without too much trouble. However, the effects of long-term storage will become more noticeable with each passing month until a time comes when the gasoline should be diluted with ever-increasing amounts of freshly made fuel so that the older gasoline may be used up. If left undiluted, improper operation will occur; this may include engine damage from misfiring, from the lack of proper action of the fuel within a fuel injection system, or from an onboard computer attempting to compensate (if applicable to the vehicle). Gasoline should ideally be stored in an airtight container (to prevent oxidation or water vapor mixing in with the gasoline) that can withstand the vapor pressure of the gasoline without venting (to prevent the loss of the more volatile fractions) at a stable cool temperature (to reduce the excess pressure from liquid expansion and to reduce the rate of any decomposition reactions). When gasoline is not stored correctly, gums and solids may result, which can corrode system components and accumulate on wet surfaces, resulting in a condition called "stale fuel". Gasoline containing ethanol is especially subject to absorbing atmospheric moisture, then forming gums, solids, or two phases (a hydrocarbon phase floating on top of a water-alcohol phase). The presence of these degradation products in the fuel tank, fuel lines, carburetor or fuel injection components makes it harder to start the engine or causes reduced engine performance. On resumption of regular engine use, the buildup may or may not be eventually cleaned out by the flow of fresh gasoline. The addition of a fuel stabilizer to gasoline can extend the life of fuel that is not or cannot be stored properly, though removal of all fuel from a fuel system is the only real solution to the problem of long-term storage of an engine or a machine or vehicle. Typical fuel stabilizers are proprietary mixtures containing mineral spirits, isopropyl alcohol, 1,2,4-trimethylbenzene or other additives. Fuel stabilizers are commonly used for small engines, such as lawnmower and tractor engines, especially when their use is sporadic or seasonal (little to no use for one or more seasons of the year). Users have been advised to keep gasoline containers more than half full and properly capped to reduce air exposure, to avoid storage at high temperatures, to run an engine for ten minutes to circulate the stabilizer through all components prior to storage, and to run the engine at intervals to purge stale fuel from the carburetor. Gasoline stability requirements are set by the standard ASTM D4814. This standard describes the various characteristics and requirements of automotive fuels for use over a wide range of operating conditions in ground vehicles equipped with spark-ignition engines. Combustion energy content A gasoline-fueled internal combustion engine obtains energy from the combustion of gasoline's various hydrocarbons with oxygen from the ambient air, yielding carbon dioxide and water as exhaust. The combustion of octane, a representative species, proceeds according to the chemical reaction 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O. By weight, combustion of gasoline releases about 46 MJ per kilogram, or about 34 MJ per litre by volume, quoting the lower heating value.
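The combustion equation above can be checked with a quick mass balance, and the same arithmetic reproduces the per-kilogram figures quoted a little further on. The sketch below is a minimal illustration using the rounded molar masses given in the text; it is not part of any standard or library, just a verification of the stoichiometry.

```python
# Sketch: mass balance for 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O, using the
# rounded molar masses quoted in the text (g/mol).

molar_mass = {"C8H18": 114.0, "O2": 32.0, "CO2": 44.0, "H2O": 18.0}

fuel_mass = 2 * molar_mass["C8H18"]              # grams of octane in the balanced equation
o2_per_kg = 25 * molar_mass["O2"] / fuel_mass    # kg of O2 consumed per kg of fuel
co2_per_kg = 16 * molar_mass["CO2"] / fuel_mass  # kg of CO2 produced per kg of fuel
h2o_per_kg = 18 * molar_mass["H2O"] / fuel_mass  # kg of H2O produced per kg of fuel

print(f"per kg of octane: {o2_per_kg:.2f} kg O2 -> "
      f"{co2_per_kg:.2f} kg CO2 + {h2o_per_kg:.2f} kg H2O")
# -> about 3.51 kg O2, 3.09 kg CO2 and 1.42 kg H2O, matching the figures below.
```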
Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75% more or less than the average. On average, about 74 L (19.5 US gal; 16.3 imp gal) of gasoline are available from a barrel of crude oil (about 46% by volume), varying with the quality of the crude and the grade of the gasoline. The remainder is products ranging from tar to naphtha. A high-octane-rated fuel, such as liquefied petroleum gas (LPG), has an overall lower power output at the typical 10:1 compression ratio of an engine design optimized for gasoline fuel. An engine tuned for LPG fuel via higher compression ratios (typically 12:1) improves the power output. This is because higher-octane fuels allow for a higher compression ratio without knocking, resulting in a higher cylinder temperature, which improves efficiency. Also, increased mechanical efficiency is created by a higher compression ratio through the concomitant higher expansion ratio on the power stroke, which is by far the greater effect. The higher expansion ratio extracts more work from the high-pressure gas created by the combustion process. An Atkinson cycle engine uses the timing of the valve events to produce the benefits of a high expansion ratio without the disadvantages, chiefly detonation, of a high compression ratio. A high expansion ratio is also one of the two key reasons for the efficiency of diesel engines, along with the elimination of pumping losses due to throttling of the intake airflow. The lower energy content of LPG by liquid volume in comparison to gasoline is due mainly to its lower density. This lower density is a property of the lower molecular weight of propane (LPG's chief component) compared to gasoline's blend of various hydrocarbon compounds with heavier molecular weights than propane. Conversely, LPG's energy content by weight is higher than gasoline's due to a higher hydrogen-to-carbon ratio. Molecular weights of the species in the representative octane combustion are C8H18 114, O2 32, CO2 44, H2O 18; therefore 1 kg of fuel reacts with 3.51 kg of oxygen to produce 3.09 kg of carbon dioxide and 1.42 kg of water. Octane rating Spark-ignition engines are designed to burn gasoline in a controlled process called deflagration. However, the unburned mixture may autoignite by pressure and heat alone, rather than igniting from the spark plug at exactly the right time, causing a rapid pressure rise that can damage the engine. This is often referred to as engine knocking or end-gas knock. Knocking can be reduced by increasing the gasoline's resistance to autoignition, which is expressed by its octane rating. Octane rating is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are different conventions for expressing octane ratings, so the same physical fuel may have several different octane ratings based on the measure used. One of the best known is the research octane number (RON). The octane rating of typical commercially available gasoline varies by country. In Finland, Sweden, and Norway, 95 RON is the standard for regular unleaded gasoline and 98 RON is also available as a more expensive option. In the United Kingdom, over 95% of gasoline sold has 95 RON and is marketed as Unleaded or Premium Unleaded. Super Unleaded, with 97/98 RON and branded high-performance fuels (e.g. Shell V-Power, BP Ultimate) with 99 RON make up the balance. Gasoline with 102 RON may rarely be available for racing purposes. 
In the United States, octane ratings in unleaded fuels vary between 85 and 87 AKI (91–92 RON) for regular, 89–90 AKI (94–95 RON) for mid-grade (equivalent to European regular), up to 90–94 AKI (95–99 RON) for premium (European premium). As South Africa's largest city, Johannesburg, is located on the Highveld at above sea level, the Automobile Association of South Africa recommends 95-octane gasoline at low altitude and 93-octane for use in Johannesburg because "The higher the altitude the lower the air pressure, and the lower the need for a high octane fuel as there is no real performance gain". Octane rating became important as the military sought higher output for aircraft engines in the late 1930s and the 1940s. A higher octane rating allows a higher compression ratio or supercharger boost, and thus higher temperatures and pressures, which translate to higher power output. Some scientists even predicted that a nation with a good supply of high-octane gasoline would have the advantage in air power. In 1943, the Rolls-Royce Merlin aero engine produced using 100 RON fuel from a modest 27-litre displacement. By the time of Operation Overlord, both the RAF and USAAF were conducting some operations in Europe using 150 RON fuel (100/150 avgas), obtained by adding 2.5% aniline to 100-octane avgas. By this time the Rolls-Royce Merlin 66 was developing using this fuel. Additives Antiknock additives Tetraethyllead Gasoline, when used in high-compression internal combustion engines, tends to auto-ignite or "detonate" causing damaging engine knocking (also called "pinging" or "pinking"). To address this problem, tetraethyllead (TEL) was widely adopted as an additive for gasoline in the 1920s. With a growing awareness of the seriousness of the extent of environmental and health damage caused by lead compounds, however, and the incompatibility of lead with catalytic converters, governments began to mandate reductions in gasoline lead. In the United States, the Environmental Protection Agency issued regulations to reduce the lead content of leaded gasoline over a series of annual phases, scheduled to begin in 1973 but delayed by court appeals until 1976. By 1995, leaded fuel accounted for only 0.6 percent of total gasoline
had considered 100-octane tests using pure octane but at $25 a gallon, the price prevented this from happening. In 1929 Stanavo Specification Board, Inc. was organized by the Standard Oil companies of California, Indiana, and New Jersey to improve aviation fuels and oils and by 1935 had placed their first 100 octane fuel on the market, Stanavo Ethyl Gasoline 100. It was used by the Army, engine manufacturers and airlines for testing and for air racing and record flights. By 1936 tests at Wright Field using the new, cheaper alternatives to pure octane proved the value of 100 octane fuel, and both Shell and Standard Oil would win the contract to supply test quantities for the Army. By 1938 the price was down to 17.5 cents a gallon, only 2.5 cents more than 87 octane fuel. By the end of WW II the price would be down to 16 cents a gallon. In 1937, Eugene Houdry developed the Houdry process of catalytic cracking, which produced a high-octane base stock of gasoline which was superior to the thermally cracked product since it did not contain the high concentration of olefins. In 1940, there were only 14 Houdry units in operation in the U.S.; by 1943, this had increased to 77, either of the Houdry process or of the Thermofor Catalytic or Fluid Catalyst type. The search for fuels with octane ratings above 100 led to the extension of the scale by comparing power output. A fuel designated grade 130 would produce 130 percent as much power in an engine as it would running on pure iso-octane. During WW II, fuels above 100-octane were given two ratings, a rich and a lean mixture, and these would be called 'performance numbers' (PN). 100-octane aviation gasoline would be referred to as 130/100 grade. World War II Germany Oil and its byproducts, especially high-octane aviation gasoline, would prove to be a driving concern for how Germany conducted the war. As a result of the lessons of World War I, Germany had stockpiled oil and gasoline for its blitzkrieg offensive and had annexed Austria, adding 18,000 barrels per day of oil production, but this was not sufficient to sustain the planned conquest of Europe. Because captured supplies and oil fields would be necessary to fuel the campaign, the German high command created a special squad of oil-field experts drawn from the ranks of domestic oil industries. They were sent in to put out oil-field fires and get production going again as soon as possible. But capturing oil fields remained an obstacle throughout the war. During the Invasion of Poland, German estimates of gasoline consumption turned out to be vastly too low. Heinz Guderian and his Panzer divisions consumed nearly of gasoline on the drive to Vienna. When they were engaged in combat across open country, gasoline consumption almost doubled. On the second day of battle, a unit of the XIX Corps was forced to halt when it ran out of gasoline. One of the major objectives of the Polish invasion was their oil fields but the Soviets invaded and captured 70 percent of the Polish production before the Germans could reach it. Through the German-Soviet Commercial Agreement (1940), Stalin agreed in vague terms to supply Germany with additional oil equal to that produced by now Soviet-occupied Polish oil fields at Drohobych and Boryslav in exchange for hard coal and steel tubing. Even after the Nazis conquered the vast territories of Europe, this did not help the gasoline shortage. This area had never been self-sufficient in oil before the war. 
In 1938, the area that would become Nazi-occupied produced 575,000 barrels per day. In 1940, total production under German control amounted to only . By the spring of 1941, with German gasoline reserves depleted, Adolf Hitler saw the invasion of Russia to seize the Polish oil fields and the Russian oil in the Caucasus as the solution to the German gasoline shortage. As early as July 1941, following the 22 June start of Operation Barbarossa, certain Luftwaffe squadrons were forced to curtail ground support missions due to shortages of aviation gasoline. On 9 October, the German quartermaster general estimated that army vehicles were short of gasoline requirements. Virtually all of Germany's aviation gasoline came from synthetic oil plants that hydrogenated coals and coal tars. These processes had been developed during the 1930s as an effort to achieve fuel independence. There were two grades of aviation gasoline produced in volume in Germany, the B-4 or blue grade and the C-3 or green grade, which accounted for about two-thirds of all production. B-4 was equivalent to 89-octane and the C-3 was roughly equal to the U.S. 100-octane, though lean mixture was rated around 95-octane and was poorer than the U.S. version. Maximum output achieved in 1943 reached 52,200 barrels a day before the Allies decided to target the synthetic fuel plants. Through captured enemy aircraft and analysis of the gasoline found in them, both the Allies and the Axis powers were aware of the quality of the aviation gasoline being produced, and this prompted an octane race to achieve the advantage in aircraft performance. Later in the war, the C-3 grade was improved to where it was equivalent to the U.S. 150 grade (rich mixture rating). Japan Japan, like Germany, had almost no domestic oil supply and by the late 1930s produced only 7% of its own oil while importing the rest – 80% from the United States. As Japanese aggression grew in China (USS Panay incident) and news reached the American public of Japanese bombing of civilian centers, especially the bombing of Chungking, public opinion began to support a U.S. embargo. A Gallup poll in June 1939 found that 72 percent of the American public supported an embargo on war materials to Japan. This increased tensions between the U.S. and Japan, and it led to the U.S. placing restrictions on exports. In July 1940, the U.S. issued a proclamation that banned the export of 87 octane or higher aviation gasoline to Japan. This ban did not hinder the Japanese, as their aircraft could operate with fuels below 87 octane and, if needed, they could add TEL to increase the octane. As it turned out, Japan bought 550 percent more sub-87 octane aviation gasoline in the five months after the July 1940 ban on higher octane sales. The possibility of a complete ban of gasoline exports from America created friction in the Japanese government over what action to take; Japan sought to secure more supplies from the Dutch East Indies and demanded greater oil exports from the exiled Dutch government after the Battle of the Netherlands. This action prompted the U.S. to move its Pacific fleet from Southern California to Pearl Harbor to help stiffen British resolve to stay in Indochina. The Japanese invasion of French Indochina in September 1940 brought great concerns about a possible Japanese invasion of the Dutch East Indies to secure its oil. The day after the U.S. banned all exports of steel and iron scrap, Japan signed the Tripartite Pact, and this led Washington to fear that a complete U.S. 
oil embargo would prompt the Japanese to invade the Dutch East Indies. On 16 June 1941 Harold Ickes, who was appointed Petroleum Coordinator for National Defense, stopped a shipment of oil from Philadelphia to Japan in light of the oil shortage on the East Coast due to increased exports to the Allies. He also telegraphed all oil suppliers on the East Coast not to ship any oil to Japan without his permission. President Roosevelt countermanded Ickes' orders, telling Ickes that "... I simply have not got enough Navy to go around and every little episode in the Pacific means fewer ships in the Atlantic". On 25 July 1941, the U.S. froze all Japanese financial assets; licenses would be required for each use of the frozen funds, including oil purchases that could produce aviation gasoline. On 28 July 1941, Japan invaded southern Indochina. The debate inside the Japanese government over its oil and gasoline situation was leading toward an invasion of the Dutch East Indies, but this would mean war with the U.S., whose Pacific fleet was a threat to Japan's flank. This situation led to the decision to attack the U.S. fleet at Pearl Harbor before proceeding with the Dutch East Indies invasion. On 7 December 1941, Japan attacked Pearl Harbor, and the next day the Netherlands declared war on Japan, which initiated the Dutch East Indies campaign. But the Japanese missed a golden opportunity at Pearl Harbor. "All of the oil for the fleet was in surface tanks at the time of Pearl Harbor," Admiral Chester Nimitz, who became Commander in Chief of the Pacific Fleet, was later to say. "We had about of oil out there and all of it was vulnerable to .50 caliber bullets. Had the Japanese destroyed the oil," he added, "it would have prolonged the war another two years." United States Early in 1944, William Boyd, president of the American Petroleum Institute and chairman of the Petroleum Industry War Council, said: "The Allies may have floated to victory on a wave of oil in World War I, but in this infinitely greater World War II, we are flying to victory on the wings of petroleum". In December 1941, the United States had 385,000 oil wells producing 1.4 billion barrels of oil a year, and 100-octane aviation gasoline capacity was at 40,000 barrels a day. By 1944, the U.S. was producing over 1.5 billion barrels a year (67 percent of world production), the petroleum industry had built 122 new plants for the production of 100-octane aviation gasoline, and capacity was over 400,000 barrels a day – an increase of more than ten-fold. It was estimated that the U.S. was producing enough 100-octane aviation gasoline to permit the dropping of bombs on the enemy every day of the year. Records of gasoline consumption by the Army prior to June 1943 were uncoordinated, as each supply service of the Army purchased its own petroleum products and no centralized system of control or records existed. On 1 June 1943 the Army created the Fuels and Lubricants Division of the Quartermaster Corps, and from its records it tabulated that the Army (excluding fuels and lubricants for aircraft) purchased over 2.4 billion gallons of gasoline for delivery to overseas theaters from 1 June 1943 through August 1945. That figure does not include gasoline used by the Army inside the United States. Motor fuel production had declined from 701,000,000 barrels in 1941 down to 608,000,000 barrels in 1943. World War II marked the first time in U.S. history that gasoline was rationed, and the government imposed price controls to prevent inflation. 
Gasoline consumption per automobile declined from 755 gallons per year in 1941 down to 540 gallons in 1943, with the goal of preserving rubber for tires, since the Japanese had cut the U.S. off from over 90 percent of its rubber supply, which had come from the Dutch East Indies, and the U.S. synthetic rubber industry was in its infancy. Average gasoline prices went from a record low of $0.1275 per gallon ($0.1841 with taxes) in 1940 to $0.1448 per gallon ($0.2050 with taxes) in 1945. Even with the world's largest aviation gasoline production, the U.S. military still found that more was needed. Throughout the duration of the war, aviation gasoline supply was always behind requirements, and this impacted training and operations. The reason for this shortage developed before the war even began. The free market did not support the expense of producing 100-octane aviation fuel in large volume, especially during the Great Depression. Iso-octane in the early development stage cost $30 a gallon, and even by 1934, when the Army decided to experiment with 100-octane for its combat aircraft, it was still $2 a gallon compared to $0.18 for motor gasoline. Though only 3 percent of U.S. combat aircraft in 1935 could take full advantage of the higher octane due to low compression ratios, the Army saw that the need for increasing performance warranted the expense and purchased 100,000 gallons. By 1937 the Army established 100-octane as the standard fuel for combat aircraft, yet by 1939 production was only 20,000 barrels a day. In effect, the U.S. military was the only market for 100-octane aviation gasoline, and as war broke out in Europe this created a supply problem that persisted throughout the duration of the war. With the war in Europe a reality in 1939, all predictions of 100-octane consumption were outrunning all possible production. Neither the Army nor the Navy could contract more than six months in advance for fuel, and they could not supply the funds for plant expansion. Without a long-term guaranteed market, the petroleum industry would not risk its capital to expand production for a product that only the government would buy. The solution to the expansion of storage, transportation, finances, and production was the creation of the Defense Supplies Corporation on 19 September 1940. The Defense Supplies Corporation would buy, transport and store all aviation gasoline for the Army and Navy at cost plus a carrying fee. When the Allied breakout after D-Day found their armies stretching their supply lines to a dangerous point, the makeshift solution was the Red Ball Express. But even this soon was inadequate. The trucks in the convoys had to drive longer distances as the armies advanced, and they were consuming a greater percentage of the same gasoline they were trying to deliver. In 1944, General George Patton's Third Army finally stalled just short of the German border after running out of gasoline. The general was so upset at the arrival of a truckload of rations instead of gasoline that he was reported to have shouted: "Hell, they send us food, when they know we can fight without food but not without oil." The solution had to wait for the repair of the railroad lines and bridges so that the more efficient trains could replace the gasoline-consuming truck convoys. United States, 1946 to present The development of jet engines burning kerosene-based fuels for aircraft during WW II produced a propulsion system superior to what internal combustion engines could offer, and the U.S. 
military forces gradually replaced their piston combat aircraft with jet-powered planes. This development essentially removed the military need for ever-increasing octane fuels and eliminated government support for the refining industry to pursue the research and production of such exotic and expensive fuels. Commercial aviation was slower to adopt jet propulsion, and until 1958, when the Boeing 707 first entered commercial service, piston-powered airliners still relied on aviation gasoline. But commercial aviation had greater economic concerns than the maximum performance that the military could afford. As octane numbers increased, so did the cost of gasoline, but the incremental gain in efficiency becomes smaller as the compression ratio goes up. This reality set a practical limit on how far compression ratios could increase relative to how expensive the gasoline would become. Last produced in 1955, the Pratt & Whitney R-4360 Wasp Major was using 115/145 aviation gasoline and producing 1 horsepower per cubic inch at a 6.7 compression ratio (turbo-supercharging would increase this), with 1 pound of engine weight producing 1.1 horsepower. This compares to the Wright Brothers' engine, which needed almost 17 pounds of engine weight to produce 1 horsepower. The US automobile industry after WW II could not take advantage of the high-octane fuels then available. Automobile compression ratios increased from an average of 5.3-to-1 in 1931 to just 6.7-to-1 in 1946. The average octane number of regular-grade motor gasoline increased from 58 to 70 during the same time. Military aircraft were using expensive turbo-supercharged engines that cost at least 10 times as much per horsepower as automobile engines and had to be overhauled every 700 to 1,000 hours. The automobile market could not support such expensive engines. It would not be until 1957 that the first US automobile manufacturer could mass-produce an engine that would produce one horsepower per cubic inch, the Chevrolet 283 hp/283 cubic inch V-8 engine option in the Corvette. At $485, this was an expensive option that few consumers could afford, and it appealed only to the performance-oriented consumer market willing to pay for the premium fuel required. This engine had an advertised compression ratio of 10.5-to-1, and the 1958 AMA Specifications stated that the octane requirement was 96-100 RON. At (1959 with aluminum intake), it took of engine weight to make . In the 1950s, oil refineries started to focus on high-octane fuels, and then detergents were added to gasoline to clean the jets in carburetors. The 1970s witnessed greater attention to the environmental consequences of burning gasoline. These considerations led to the phasing out of TEL and its replacement by other antiknock compounds. Subsequently, low-sulfur gasoline was introduced, in part to preserve the catalysts in modern exhaust systems. Chemical analysis and production Commercial gasoline is a mixture of a large number of different hydrocarbons. Gasoline is produced to meet a host of engine performance specifications, and many different compositions are possible. Hence, the exact chemical composition of gasoline is undefined. The performance specification also varies with season, requiring more volatile blends (due to added butane) during winter, in order to be able to start a cold engine. 
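Seasonal volatility adjustment is essentially a blending calculation. As a rough illustration only (not the refinery's actual method), one common empirical approach estimates the Reid vapor pressure (RVP) of a blend from a volume-weighted average of a blending index for each component; the component names and RVP values below are illustrative assumptions, not figures from this article.

```python
# Hedged sketch: estimating blend Reid vapor pressure (RVP) with an empirical
# blending index (RVP^1.25, volume-weighted). Component fractions and RVP
# values are illustrative assumptions, not data from the text.
components = [
    # (volume fraction, RVP in psi)
    (0.90, 7.0),   # hypothetical base summer gasoline
    (0.10, 52.0),  # hypothetical butane added for winter volatility
]

blend_index = sum(frac * rvp ** 1.25 for frac, rvp in components)
blend_rvp = blend_index ** (1 / 1.25)
print(f"Estimated blend RVP: {blend_rvp:.1f} psi")  # ~13 psi with these inputs
```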
At the refinery, the composition varies according to the crude oils from which it is produced, the type of processing units present at the refinery, how those units are operated, and which hydrocarbon streams (blendstocks) the refinery opts to use when blending the final product. Gasoline is produced in oil refineries. Roughly of gasoline is derived from a barrel of crude oil. Material separated from crude oil via distillation, called virgin or straight-run gasoline, does not meet specifications for modern engines (particularly the octane rating; see below), but can be pooled to the gasoline blend. The bulk of a typical gasoline consists of a homogeneous mixture of small, relatively lightweight hydrocarbons with between 4 and 12 carbon atoms per molecule (commonly referred to as C4–C12). It is a mixture of paraffins (alkanes), olefins (alkenes), and cycloalkanes (naphthenes). The usage of the terms paraffin and olefin in place of the standard chemical nomenclature alkane and alkene, respectively, is particular to the oil industry. The actual ratio of molecules in any gasoline depends upon: the oil refinery that makes the gasoline, as not all refineries have the same set of processing units; the crude oil feed used by the refinery; the grade of gasoline (in particular, the octane rating). The various refinery streams blended to make gasoline have different characteristics. Some important streams include the following: Straight-run gasoline, commonly referred to as naphtha, is distilled directly from crude oil. Once the leading source of fuel, its low octane rating required lead additives. It is low in aromatics (depending on the grade of the crude oil stream) and contains some cycloalkanes (naphthenes) and no olefins (alkenes). Between 0 and 20 percent of this stream is pooled into the finished gasoline because the quantity of this fraction in the crude is less than fuel demand and the fraction's Research Octane Number (RON) is too low. The chemical properties (namely RON and Reid vapor pressure (RVP)) of the straight-run gasoline can be improved through reforming and isomerization. However, before feeding those units, the naphtha needs to be split into light and heavy naphtha. Straight-run gasoline can also be used as a feedstock for steam-crackers to produce olefins. Reformate, produced in a catalytic reformer, has a high octane rating with high aromatic content and relatively low olefin content. Most of the benzene, toluene, and xylene (the so-called BTX hydrocarbons) are more valuable as chemical feedstocks and are thus removed to some extent. Catalytic cracked gasoline, or catalytic cracked naphtha, produced with a catalytic cracker, has a moderate octane rating, high olefin content, and moderate aromatic content. Hydrocrackate (heavy, mid, and light), produced with a hydrocracker, has a medium to low octane rating and moderate aromatic levels. Alkylate is produced in an alkylation unit, using isobutane and olefins as feedstocks. Finished alkylate contains no aromatics or olefins and has a high MON (Motor Octane Number). Isomerate is obtained by isomerizing low-octane straight-run gasoline into iso-paraffins (non-chain alkanes, such as isooctane). Isomerate has a medium RON and MON, but no aromatics or olefins. Butane is usually blended in the gasoline pool, although the quantity of this stream is limited by the RVP specification. The terms above are the jargon used in the oil industry, and the terminology varies. 
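To make the role of these blendstocks concrete, here is a minimal sketch of how a blender might estimate the octane number of a finished gasoline as a volume-weighted average of its components. Real refinery blending uses more sophisticated, often nonlinear models; the stream fractions and RON values below are illustrative assumptions, not data from this article.

```python
# Minimal sketch: volume-weighted linear estimate of blend RON.
# Stream fractions and RON values are illustrative assumptions; real blending
# correlations are nonlinear and refinery-specific.
blend = [
    # (stream, volume fraction, RON)
    ("reformate",          0.35, 98.0),
    ("FCC naphtha",        0.30, 92.0),
    ("alkylate",           0.15, 96.0),
    ("isomerate",          0.10, 88.0),
    ("light straight-run", 0.07, 70.0),
    ("butane",             0.03, 94.0),
]

assert abs(sum(frac for _, frac, _ in blend) - 1.0) < 1e-9  # fractions sum to 1
blend_ron = sum(frac * ron for _, frac, ron in blend)
print(f"Estimated blend RON: {blend_ron:.1f}")  # ~92.8 with these inputs
```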
Currently, many countries set limits on gasoline aromatics in general, benzene in particular, and olefin (alkene) content. Such regulations have led to an increasing preference for alkane isomers, such as isomerate or alkylate, as their octane rating is higher than that of n-alkanes. In the European Union, the benzene limit is set at 1% by volume for all grades of automotive gasoline. This is usually achieved by avoiding feeding C6, in particular cyclohexane, to the reformer unit, where it would be converted to benzene. Therefore, only (desulfurized) heavy virgin naphtha (HVN) is fed to the reformer unit. Gasoline can also contain other organic compounds, such as organic ethers (deliberately added), plus small levels of contaminants, in particular organosulfur compounds (which are usually removed at the refinery). Physical properties Density The specific gravity of gasoline ranges from 0.71 to 0.77, with higher densities having a greater volume fraction of aromatics. Finished marketable gasoline is traded (in Europe) with a standard reference of , and its price is escalated or de-escalated according to its actual density. Because of its low density, gasoline floats on water, and therefore water cannot generally be used to extinguish a gasoline fire unless applied in a fine mist. Stability Quality gasoline should be stable for six months if stored properly, but as gasoline is a mixture rather than a single compound, it will break down slowly over time due to the separation of the components. Gasoline stored for a year will most likely be able to be burned in an internal combustion engine without too much trouble. However, the effects of long-term storage will become more noticeable with each passing month until a time comes when the gasoline should be diluted with ever-increasing amounts of freshly made fuel so that the older gasoline may be used up. If left undiluted, improper operation will occur; this may include engine damage from misfiring or from the lack of proper action of the fuel within a fuel injection system and from an onboard computer attempting to compensate (if applicable to the vehicle). Gasoline should ideally be stored in an airtight container (to prevent oxidation or water vapor mixing in with the gas) that can withstand the vapor pressure of the gasoline without venting (to prevent the loss of the more volatile fractions) at a stable cool temperature (to reduce the excess pressure from liquid expansion and to reduce the rate of any decomposition reactions). When gasoline is not stored correctly, gums and solids may result, which can corrode system components and accumulate on wet surfaces, resulting in a condition called "stale fuel". Gasoline containing ethanol is especially subject to absorbing atmospheric moisture, then forming gums, solids, or two phases (a hydrocarbon phase floating on top of a water-alcohol phase). The presence of these degradation products in the fuel tank, fuel lines, carburetor, or fuel injection components makes it harder to start the engine or causes reduced engine performance. On resumption of regular engine use, the buildup may or may not be eventually cleaned out by the flow of fresh gasoline. The addition of a fuel stabilizer to gasoline can extend the life of fuel that is not or cannot be stored properly, though removal of all fuel from a fuel system is the only real solution to the problem of long-term storage of an engine or a machine or vehicle. 
Typical fuel stabilizers are proprietary mixtures containing mineral spirits, isopropyl alcohol, 1,2,4-trimethylbenzene or other additives. Fuel stabilizers are commonly used for small engines, such as lawnmower and tractor engines, especially when their use is sporadic or seasonal (little to no use for one or more seasons of the year). Users have been advised to keep gasoline containers more than half full and properly capped to reduce air exposure, to avoid storage at high temperatures, to run an engine for ten minutes to circulate the stabilizer through all components prior to storage, and to run the engine at intervals to purge stale fuel from the carburetor. Gasoline stability requirements are set by the standard ASTM D4814. This standard describes the various characteristics and requirements of automotive fuels for use over a wide range of operating conditions in ground vehicles equipped with spark-ignition engines. Combustion energy content A gasoline-fueled internal combustion engine obtains energy from the combustion of gasoline's various hydrocarbons with oxygen from the ambient air, yielding carbon dioxide and water as exhaust. The combustion of octane, a representative species, performs the chemical reaction: 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O. By weight, combustion of gasoline releases about or by volume , quoting the lower heating value. Gasoline blends differ, and therefore actual energy content varies according to the season and producer by up to 1.75% more or less than the average. On average, about 74 L (19.5 US gal; 16.3 imp gal) of gasoline are available from a barrel of crude oil (about 46% by volume), varying with the quality of the crude and the grade of the gasoline. The remainder is products ranging from tar to naphtha. A high-octane-rated fuel, such as liquefied petroleum gas (LPG), has an overall lower power output at the typical 10:1 compression ratio of an engine design optimized for gasoline fuel. An engine tuned for LPG fuel via higher compression ratios (typically 12:1) improves the power output. This is because higher-octane fuels allow for a higher compression ratio without knocking, resulting in a higher cylinder temperature, which improves efficiency. Also, increased mechanical efficiency is created by a higher compression ratio through the concomitant higher expansion ratio on the power stroke, which is by far the greater effect. The higher expansion ratio extracts more work from the high-pressure gas created by the combustion process. An Atkinson cycle engine uses the timing of the valve events to produce the benefits of a high expansion ratio without the disadvantages, chiefly detonation, of a high compression ratio. A high expansion ratio is also one of the two key reasons for the efficiency of diesel engines, along with the elimination of pumping losses due to throttling of the intake airflow. The lower energy content of LPG by liquid volume in comparison to gasoline is due mainly to its lower density. This lower density is a property of the lower molecular weight of propane (LPG's chief component) compared to gasoline's blend of various hydrocarbon compounds with heavier molecular weights than propane. Conversely, LPG's energy content by weight is higher than gasoline's due to a higher hydrogen-to-carbon ratio. Molecular weights of the species in the representative octane combustion are C8H18 114, O2 32, CO2 44, H2O 18; therefore 1 kg of fuel reacts with 3.51 kg of oxygen to produce 3.09 kg of carbon dioxide and 1.42 kg of water. 
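The mass figures quoted above follow directly from the balanced octane equation and the molecular weights given in the text; the short check below reproduces them (a verification sketch, not part of the original article).

```python
# Verify the quoted mass ratios for 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O,
# using the molecular weights stated in the text (C8H18 114, O2 32, CO2 44, H2O 18).
mw = {"C8H18": 114.0, "O2": 32.0, "CO2": 44.0, "H2O": 18.0}
coeff = {"C8H18": 2, "O2": 25, "CO2": 16, "H2O": 18}

fuel_mass = coeff["C8H18"] * mw["C8H18"]  # 228 mass units of fuel per reaction

def per_kg_fuel(species: str) -> float:
    """Mass of the given species per kilogram of octane burned."""
    return coeff[species] * mw[species] / fuel_mass

print(f"O2 per kg fuel:  {per_kg_fuel('O2'):.2f} kg")   # ~3.51
print(f"CO2 per kg fuel: {per_kg_fuel('CO2'):.2f} kg")  # ~3.09
print(f"H2O per kg fuel: {per_kg_fuel('H2O'):.2f} kg")  # ~1.42
```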
Octane rating Spark-ignition engines are designed to burn gasoline in a controlled process called deflagration. However, the unburned mixture may autoignite by pressure and heat alone, rather than igniting from the spark plug at exactly the right time, causing a rapid pressure rise that can damage the engine. This is often referred to as engine knocking or end-gas knock. Knocking can be reduced by increasing the gasoline's resistance to autoignition, which is expressed by its octane rating. Octane rating is measured relative to a mixture of 2,2,4-trimethylpentane (an isomer of octane) and n-heptane. There are different conventions for expressing octane ratings, so the same physical fuel may have several different octane ratings based on the measure used. One of the best known is the research octane number (RON). The octane rating of typical commercially available gasoline varies by country. In Finland, Sweden, and Norway, 95 RON is the standard for regular unleaded gasoline and 98 RON is also available as a more expensive option. In the United Kingdom, over 95% of gasoline sold has 95 RON and is marketed as Unleaded or Premium Unleaded. Super Unleaded, with 97/98 RON and branded high-performance fuels (e.g. Shell V-Power, BP Ultimate) with 99 RON make up the balance. Gasoline with 102 RON may rarely be available for racing purposes. In the United States, octane ratings in unleaded fuels vary between 85 and 87 AKI (91–92 RON) for regular, 89–90 AKI (94–95 RON) for mid-grade (equivalent to European regular), up to 90–94 AKI (95–99 RON) for premium (European premium). As South Africa's largest city, Johannesburg, is located on the Highveld at above sea level, the Automobile Association of South Africa recommends 95-octane gasoline at low altitude and 93-octane for use in Johannesburg because "The higher the altitude the lower the air pressure, and the lower the need for a high octane fuel as there is no real performance gain". Octane rating became important as the military sought higher output for aircraft engines in the late 1930s and the 1940s. A higher octane rating allows a higher compression ratio or supercharger boost, and thus higher temperatures and pressures, which translate to higher power output. Some scientists even predicted that a nation with a good supply of high-octane gasoline would have the advantage in air power. In 1943, the Rolls-Royce Merlin aero engine produced using 100 RON fuel from a modest 27-litre displacement. By the time of Operation Overlord, both the RAF and USAAF were conducting some operations in Europe using 150 RON fuel (100/150 avgas), obtained by adding 2.5% aniline to 100-octane avgas. By this time the Rolls-Royce Merlin 66 was developing using this fuel. Additives Antiknock additives Tetraethyllead Gasoline, when used in high-compression internal combustion engines, tends to auto-ignite or "detonate" causing damaging engine knocking (also called "pinging" or "pinking"). To address this problem, tetraethyllead (TEL) was widely adopted as an additive for gasoline in the 1920s. With a growing awareness of the seriousness of the extent of environmental and health damage caused by lead compounds, however, and the incompatibility of lead with catalytic converters, governments began to mandate reductions in gasoline lead. In the United States, the Environmental Protection Agency issued regulations to reduce the lead content of leaded gasoline over a series of annual phases, scheduled to begin in 1973 but delayed by court appeals until 1976. 
By 1995, leaded fuel accounted for only 0.6 percent of total gasoline sales and under of lead per year. From 1 January 1996, the U.S. Clean Air Act banned the sale of leaded fuel for use in on-road vehicles in the U.S. The use of TEL also necessitated other additives, such as dibromoethane. European countries began replacing lead-containing additives by the end of the 1980s, and by the end of the 1990s, leaded gasoline was banned within the entire European Union. The UAE started to switch to unleaded in the early 2000s. Reduction in the average lead content of human blood may be a major cause of falling violent crime rates around the world, including in South Africa. A study found a correlation between leaded gasoline usage and violent crime; other studies found no correlation. In August 2021, the UN Environment Programme announced that leaded petrol had been eradicated worldwide, with Algeria being the last country to deplete its reserves. UN Secretary-General António Guterres called the eradication of leaded petrol an "international success story". He also added: "Ending the use of leaded petrol will prevent more than one million premature deaths each year from heart disease, strokes and cancer, and it will protect children whose IQs are damaged by exposure to lead". Greenpeace called the announcement "the end of one toxic era". However, leaded gasoline continues to be used in aeronautic, auto racing and off-road applications. The use of leaded additives is still permitted worldwide for the formulation of some grades of aviation gasoline such as 100LL, because the required octane rating is difficult to reach without the use of leaded additives. Different additives have replaced lead compounds. The most popular additives include aromatic hydrocarbons, ethers (MTBE and ETBE), and alcohols, most commonly ethanol. Lead replacement petrol Lead replacement petrol (LRP) was developed for vehicles designed to run on leaded fuels and incompatible with unleaded fuels. Rather than tetraethyllead, it contains other metals such as potassium compounds or methylcyclopentadienyl manganese tricarbonyl (MMT); these are purported to buffer soft exhaust valves and seats so that they do not suffer recession due to the use of unleaded fuel. LRP was marketed during and after the phaseout of leaded motor fuels in the United Kingdom, Australia, South Africa, and some other countries. Consumer confusion led to a widespread mistaken preference for LRP rather than unleaded, and LRP was phased out 8 to 10 years after the introduction of unleaded. Leaded gasoline was withdrawn from sale in Britain after 31 December 1999, seven years after EEC regulations signaled the end of production for cars using leaded gasoline in member states. At this stage, a large percentage of cars from the 1980s and early 1990s which ran on leaded gasoline were still in use, along with cars that could run on unleaded fuel. However, the declining number of such cars on British roads saw many gasoline stations withdrawing LRP from sale by 2003. MMT Methylcyclopentadienyl manganese tricarbonyl (MMT) is used in Canada and the US to boost octane rating. Its use in the United States has been restricted by regulations, although it is currently allowed. Its use in the European Union is restricted by Article 8a of the Fuel Quality Directive following its testing under the Protocol for the evaluation of effects of metallic fuel-additives on the emissions performance of vehicles. 
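The octane figures quoted in the earlier section mix two conventions: European pumps post RON, while U.S. pumps post the anti-knock index (AKI), the average of RON and MON. The short sketch below shows that relationship; the sample RON and MON values are illustrative assumptions for a typical regular-grade fuel, not measurements.

```python
# AKI (the octane number posted on U.S. pumps) is defined as (RON + MON) / 2.
# The example values are illustrative assumptions, not measured data.
ron, mon = 92.0, 82.0
aki = (ron + mon) / 2
print(f"RON {ron}, MON {mon} -> AKI {aki:.0f}")  # 87, i.e. U.S. regular grade
```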
Fuel stabilizers (antioxidants and metal deactivators) Gummy, sticky resin deposits result from oxidative degradation of gasoline during long-term storage. These harmful deposits arise from the oxidation of alkenes and other minor components in gasoline (see drying oils). Improvements in refinery techniques have generally reduced the susceptibility of gasolines to these problems. Previously, catalytically or thermally cracked gasolines were most susceptible to oxidation. The formation of gums is accelerated by copper salts, which can be neutralized by additives called metal deactivators. This degradation can be prevented through the addition of 5–100 ppm of antioxidants, such as phenylenediamines and other amines. Hydrocarbons with a bromine number of 10 or above can be protected with the combination of unhindered or partially hindered phenols and oil-soluble strong amine bases, such as hindered phenols. "Stale" gasoline can be detected by a colorimetric enzymatic test for organic peroxides produced by oxidation of the gasoline. Gasolines are also treated with metal deactivators, which are compounds that sequester (deactivate) metal salts that otherwise accelerate the formation of gummy residues. The metal impurities might arise from the engine itself or as contaminants in the fuel. Detergents Gasoline, as delivered at the pump, also contains additives to reduce internal engine carbon buildups, improve combustion and allow easier starting in cold climates. High levels of detergent can be found in Top Tier Detergent Gasolines. The specification for Top Tier Detergent Gasolines was developed by four automakers: GM, Honda, Toyota, and BMW. According to the bulletin, the minimal U.S. EPA requirement is not sufficient to keep engines clean. Typical detergents include alkylamines and alkyl phosphates at a level of 50–100 ppm. Ethanol European Union In the EU, 5% ethanol can be added within the common gasoline spec (EN 228). Discussions are ongoing to allow 10% blending of ethanol (available in Finnish, French and German gas stations). In Finland, most gasoline stations sell 95E10, which is 10% ethanol, and 98E5, which is 5% ethanol. Most gasoline sold in Sweden has 5–15% ethanol added. Three different ethanol blends are sold in the Netherlands—E5, E10 and hE15. The last of these differs from standard ethanol–gasoline blends in that it consists of 15% hydrous ethanol (i.e., the ethanol–water azeotrope) instead of the anhydrous ethanol traditionally used for blending with gasoline. Brazil The Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP) requires gasoline for automobile use to have 27.5% of ethanol added to its composition. Pure hydrated ethanol is also available as a fuel. Australia Legislation requires retailers to label fuels containing ethanol on the dispenser, and limits ethanol use to 10% of gasoline in Australia. Such gasoline is commonly called E10 by major brands, and it is cheaper than regular unleaded gasoline. United States The federal Renewable Fuel Standard (RFS) effectively requires refiners and blenders to blend renewable biofuels (mostly ethanol) with gasoline, sufficient to meet a growing annual target of total gallons blended. Although the mandate does not require a specific percentage of ethanol, annual increases in the target combined with declining gasoline consumption have caused the typical ethanol content in gasoline to approach 10%. 
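Because the RFS mandate is stated in total gallons rather than as a percentage, the effective ethanol share is simply the mandated volume divided by gasoline consumption. The figures below are illustrative assumptions, chosen only to show why a fixed-gallon target combined with flat or declining gasoline demand pushes the typical blend toward 10%.

```python
# Hedged sketch: effective ethanol blend share implied by a volume mandate.
# Both inputs are illustrative assumptions, not official RFS or EIA figures.
mandated_ethanol_gal = 14e9    # hypothetical annual renewable-volume target
gasoline_demand_gal = 140e9    # hypothetical annual gasoline consumption

blend_share = mandated_ethanol_gal / gasoline_demand_gal
print(f"Implied ethanol share: {blend_share:.1%}")  # 10.0% with these inputs
```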
Most fuel pumps display a sticker that states that the fuel may contain up to 10% ethanol, an intentional disparity that reflects the varying actual percentage. Until late 2010, fuel retailers were only authorized to sell
and four 2-ketopentoses, stereoisomers that differ in the spatial position of the hydroxyl groups. These forms occur in pairs of optical isomers, generally labelled "D" or "L" by conventional rules (independently of their optical activity). Aldopentoses The aldopentoses have three chiral centers; therefore, eight (2³) different stereoisomers are possible. Ribose is a constituent of RNA, and the related molecule, deoxyribose, is a constituent of DNA. Phosphorylated pentoses are important products of the pentose phosphate pathway, most importantly ribose 5-phosphate (R5P), which is used in the synthesis of nucleotides and nucleic acids, and erythrose 4-phosphate (E4P), which is used in the synthesis of aromatic amino acids. Ketopentoses The 2-ketopentoses have two chiral centers; therefore, four (2²) different stereoisomers are possible. The 3-ketopentoses are rare. Cyclic form The closed or cyclic form of a pentose is created when the carbonyl group interacts with a hydroxyl on another carbon, turning the carbonyl into a hydroxyl and creating an ether bridge –O– between the two carbons. This intramolecular reaction yields a cyclic molecule, with a ring consisting of one oxygen atom and usually four carbon atoms; the cyclic compounds are then called furanoses, for having the same ring as the cyclic ether tetrahydrofuran. The closure turns the carbonyl carbon into a chiral center, which may have either of two configurations, depending on the position of the new hydroxyl. Therefore, each linear form can produce two distinct closed forms, identified by the prefixes "α" and "β". Deoxypentoses The one deoxypentose has two total stereoisomers. Properties In the cell, pentoses have a higher metabolic stability than hexoses. A polymer composed of pentose sugars is called a pentosan. Tests for pentoses The most important tests for pentoses rely on converting the pentose to furfural, which then reacts with a chromophore. In Tollens' test for pentoses (not to be
carbon atoms. The chemical formula of all pentoses is C5H10O5, and their molecular weight is 150.13 g/mol. Pentoses are very important in biochemistry. Ribose is a constituent of RNA, and the related molecule, deoxyribose, is a constituent of DNA. Phosphorylated pentoses are important products of the pentose phosphate pathway, most importantly ribose 5-phosphate (R5P), which is used in the synthesis of nucleotides and nucleic acids, and erythrose 4-phosphate (E4P), which is used in the synthesis of aromatic amino acids. Like some other monosaccharides, pentoses exist in two forms, open-chain (linear) or closed-chain (cyclic), that easily convert into each other in water solutions. The linear form of a pentose, which usually exists only in solutions, has an open-chain backbone of five carbons. Four of these carbons have one hydroxyl functional group (–OH) each, connected by a single bond, and one has an oxygen atom connected by a double bond (=O), forming a carbonyl group (C=O). The remaining bonds of the carbon atoms are satisfied by six hydrogen atoms. Thus the structure of the linear form is H–(CHOH)x–C(=O)–(CHOH)4−x–H, where x is 0, 1, or 2. The term "pentose" sometimes is assumed to include deoxypentoses, such as deoxyribose: compounds with general formula that can be described as derived from pentoses by replacement of one or more hydroxyl groups with hydrogen atoms. Classification The aldopentoses are a subclass of the pentoses which, in the linear form, have the carbonyl at carbon 1, forming an aldehyde
from the raw gas, to prevent condensation of these volatiles in natural gas pipelines. Additionally, oil refineries produce some propane as a by-product of cracking petroleum into gasoline or heating oil. The supply of propane cannot easily be adjusted to meet increased demand, because of the by-product nature of propane production. About 90% of U.S. propane is domestically produced. The United States imports about 10% of the propane consumed each year, with about 70% of that coming from Canada via pipeline and rail. The remaining 30% of imported propane comes to the United States from other sources via ocean transport. After it is separated from the crude oil, North American propane is stored in huge salt caverns. Examples of these are Fort Saskatchewan, Alberta; Mont Belvieu, Texas; and Conway, Kansas. These salt caverns can store of propane. Properties and reactions Propane is a colorless, odorless gas. At normal pressure it liquefies below its boiling point at −42 °C and solidifies below its melting point at −187.7 °C. Propane crystallizes in the space group P21/n. The low space filling of 58.5% (at 90 K), due to the poor stacking properties of the molecule, is the reason for the particularly low melting point. Propane undergoes combustion reactions in a similar fashion to other alkanes. In the presence of excess oxygen, propane burns to form water and carbon dioxide. C3H8 + 5 O2 -> 3 CO2 + 4 H2O + heat When insufficient oxygen is present for complete combustion, carbon monoxide, soot (carbon), or both, are formed as well: C3H8 + 9/2 O2 -> 2 CO2 + CO + 4 H2O + heat C3H8 + 2 O2 -> 3 C + 4 H2O + heat Complete combustion of propane produces about 50 MJ/kg of heat. Propane combustion is much cleaner than that of coal or unleaded gasoline. Propane's per-BTU production of CO2 is almost as low as that of natural gas. Propane burns hotter than home heating oil or diesel fuel because of the very high hydrogen content. The presence of C–C bonds, plus the multiple bonds of propylene and butylene, produces organic exhausts besides carbon dioxide and water vapor during typical combustion. These bonds also cause propane to burn with a visible flame. Energy content The enthalpy of combustion of propane gas where all products return to standard state, for example where water returns to its liquid state at standard temperature (known as higher heating value), is (2219.2 ± 0.5) kJ/mol, or (50.33 ± 0.01) MJ/kg. The enthalpy of combustion of propane gas where products do not return to standard state, for example where the hot gases including water vapor exit a chimney (known as lower heating value), is −2043.455 kJ/mol. The lower heat value is the amount of heat available from burning the substance where the combustion products are vented to the atmosphere; for example, the heat from a fireplace when the flue is open. Density The density of propane gas at 25 °C (77 °F) is 1.808 kg/m3, about 1.5 times the density of air at the same temperature. The density of liquid propane at 25 °C (77 °F) is 0.493 g/cm3, which is equivalent to 4.11 pounds per U.S. liquid gallon or 493 g/L. Propane expands at 1.5% per 10 °F. Thus, liquid propane has a density of approximately 4.2 pounds per gallon (504 g/L) at 60 °F (15.6 °C). Because the density of propane changes with temperature, this must be taken into account whenever the application involves safety or custody transfer operations. 
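The per-kilogram heating value quoted above is just the molar enthalpy of combustion divided by propane's molar mass, and the pounds-per-gallon figure follows from the liquid density; the short check below reproduces both (a verification sketch using standard conversion factors, not part of the original article).

```python
# Verify two figures from the text: ~50.3 MJ/kg from 2219.2 kJ/mol (HHV),
# and ~4.11 lb per U.S. gallon from a liquid density of 0.493 g/cm^3 at 25 C.
MOLAR_MASS_C3H8 = 44.10      # g/mol
HHV_KJ_PER_MOL = 2219.2      # kJ/mol, higher heating value from the text

hhv_mj_per_kg = HHV_KJ_PER_MOL / MOLAR_MASS_C3H8  # kJ/g is numerically MJ/kg
print(f"HHV: {hhv_mj_per_kg:.2f} MJ/kg")           # ~50.3

density_g_per_cm3 = 0.493
LITERS_PER_US_GAL = 3.78541
GRAMS_PER_LB = 453.592
lb_per_gal = density_g_per_cm3 * 1000 * LITERS_PER_US_GAL / GRAMS_PER_LB
print(f"Liquid density: {lb_per_gal:.2f} lb/US gal")  # ~4.11
```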
Uses Portable stoves Propane is a popular choice for barbecues and portable stoves because the low boiling point of makes it vaporize as soon as it is released from its pressurized container. Therefore, no carburetor or other vaporizing device is required; a simple metering nozzle suffices. Refrigerant Blends of pure, dry "isopropane" (R-290a) (isobutane/propane mixtures) and isobutane (R-600a) can be used as the circulating refrigerant in suitably constructed compressor-based refrigeration. Compared to fluorocarbons, propane has a negligible ozone depletion potential and very low global warming potential (having a value of only 3.3 times the GWP of carbon dioxide) and can serve as a functional replacement for R-12, R-22, R-134a, and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. Because its global warming effect is far less than that of current refrigerants, propane was chosen as one of five replacement refrigerants approved by the EPA in 2015, for use in systems specially designed to handle its flammability. Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion. Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. Propane is also instrumental in providing off-the-grid refrigeration, as the energy source for a gas absorption refrigerator, and is commonly used for camping and recreational vehicles. Domestic and industrial fuel Since it can be transported easily, it is a popular fuel for home heating and backup electrical generation in sparsely populated areas that do not have natural gas pipelines. In rural areas of North America, as well as northern Australia, propane is used to heat livestock facilities, in grain dryers, and other heat-producing appliances. When used for heating or grain drying it is usually stored in a large, permanently placed cylinder which is refilled by a propane-delivery truck. 6.2 million American households use propane as their primary heating fuel. In North America, local delivery trucks with an average cylinder size of , fill up large cylinders that are permanently installed on the property, or other service trucks exchange empty cylinders of propane with filled cylinders. Large tractor-trailer trucks, with an average cylinder size of , transport propane from the pipeline or refinery to the local bulk plant. The bobtail tank truck is not unique to the North American market, though the practice is not as common elsewhere, and the vehicles are generally called tankers. In many countries, propane is delivered to end-users via small or medium-sized individual cylinders, while empty cylinders are removed for refilling at a central location. There are also community propane systems, with a central cylinder feeding individual homes. Motor fuel Propane is increasingly used as a vehicle fuel. In the U.S., over 190,000 on-road vehicles use propane, and over 450,000 forklifts use it for power. It is the third most popular vehicle fuel in the world, behind gasoline and diesel fuel. In other parts of the world, propane used in vehicles is known as autogas. In 2007, approximately 13 million vehicles worldwide used autogas. 
The advantage of propane in cars is its liquid state at a moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and price ranges typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling, and in combustion), results in less engine wear (due to carbon deposits) without diluting engine oil (often extending oil-change intervals), and until recently was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets, and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels. Propane is also used as fuel for small engines, especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, there have been lawn-care products like string trimmers, lawn mowers and leaf blowers intended for outdoor use, but fueled by propane in order to reduce air pollution. Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger, to mix with diesel fuel droplets. Propane droplets' very high hydrogen content helps the diesel fuel to burn hotter and therefore more completely. This provides more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. It is cheaper because propane is much cheaper than diesel fuel. The longer distance a cross-country trucker can travel on a
The advantage of propane in cars is its liquid state at a moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and a price typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling and in combustion), results in less engine wear (due to carbon deposits) without diluting engine oil (often extending oil-change intervals), and until recently was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets, and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels. Propane is also used as fuel for small engines, especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, there have been lawn-care products like string trimmers, lawn mowers and leaf blowers intended for outdoor use, but fueled by propane in order to reduce air pollution. Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger to mix with diesel fuel droplets. The propane droplets' very high hydrogen content helps the diesel fuel to burn hotter and therefore more completely. This provides more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. It also lowers fuel cost, because propane is much cheaper than diesel fuel. The longer distance a cross-country trucker can travel on a full load of combined diesel and propane fuel means they can comply with federal hours-of-work rules with two fewer fuel stops in a cross-country trip. Truckers, tractor pulling competitions, and farmers have been using a propane boost system for over forty years in North America. Shipping fuel Ocean-going ships that transport LPG can reuse the cargo as fuel: as the sun evaporates some of the propane during the voyage, the boil-off gas is captured and fed into the air intake system of the ship's diesel engines. This reduces bunker fuel consumption and the pollution produced by the ships. There is an international agreement to use either propane or CNG as a mandatory additive to the bunker fuel for all ocean traveling ships beginning in 2020. Propane is generally stored and transported in steel cylinders as a liquid with a vapor space above the liquid. The vapor pressure in the cylinder is a function of temperature. When gaseous propane is drawn at a high rate, the latent heat of vaporization required to produce the gas will cause the bottle to cool. (This is why water often condenses on the sides of the bottle and then freezes.) Since the lighter, higher-octane fractions vaporize before the heavier, lower-octane fractions, the ignition properties change as the cylinder empties. For these reasons, the liquid is often withdrawn using a dip tube. Other uses Propane is the primary flammable gas in blowtorches for soldering. Propane is used in oxy-fuel heating and cutting.
Propane does not burn as hot as acetylene in its inner cone, and so it is rarely used for welding. Propane, however, has a very high number of BTUs per cubic foot in its outer cone, and so with the right torch (injector style) it can make a faster and cleaner cut than acetylene, and is much more useful for heating and bending than acetylene. Propane is used as a feedstock for the production of base petrochemicals in steam cracking. Propane is the primary fuel for hot-air balloons. It is used in semiconductor manufacture to deposit silicon carbide. Propane is commonly used in theme parks and in movie production as an inexpensive, high-energy fuel for explosions and other special effects. Propane is used as a propellant, relying on the expansion of the gas, not its ignition, to fire the projectile. The use of a liquefied gas gives more shots per cylinder, compared to a compressed gas. Propane is also used as a cooking fuel. Propane is used as a propellant for many household aerosol sprays, including shaving creams and air fresheners. Propane is a promising feedstock for the production of propylene and acrylic acid. Liquefied propane is used in the extraction of animal fats and vegetable oils. Purity The North American standard grade of automotive-use propane is rated HD-5 (Heavy Duty 5%). HD-5 grade has a maximum of 5 percent butane, but propane sold in Europe has a maximum allowable amount of butane of 30 percent, meaning it is not the same fuel as HD-5. The LPG used as auto fuel and cooking gas in Asia and Australia also has very high butane content. Propylene (also called propene) can be a contaminant of commercial propane. Propane containing too much propene is not suited for most vehicle fuels. HD-5 is a specification that establishes a maximum concentration of 5% propene in propane. Propane and other LP gas specifications are established in ASTM D-1835. All propane fuels include an odorant, almost always ethanethiol, so that the gas can be smelled easily in case of a leak. Propane as HD-5 was originally intended for use as vehicle fuel. HD-5 is currently being used in all propane applications. Typically in the United States and
this age were first studied. The Precambrian accounts for 88% of the Earth's geologic time. The Precambrian is an informal unit of geologic time, subdivided into three eons (Hadean, Archean, Proterozoic) of the geologic time scale. It spans from the formation of Earth about 4.6 billion years ago (Ga) to the beginning of the Cambrian Period, about 541 million years ago (Ma), when hard-shelled creatures first appeared in abundance. Overview Relatively little is known about the Precambrian, despite it making up roughly seven-eighths of the Earth's history, and what is known has largely been discovered from the 1960s onwards. The Precambrian fossil record is poorer than that of the succeeding Phanerozoic, and fossils from the Precambrian (e.g. stromatolites) are of limited biostratigraphic use. This is because many Precambrian rocks have been heavily metamorphosed, obscuring their origins, while others have been destroyed by erosion, or remain deeply buried beneath Phanerozoic strata. It is thought that the Earth coalesced from material in orbit around the Sun at roughly 4,543 Ma, and may have been struck by another planet called Theia shortly after it formed, splitting off material that formed the Moon (see Giant impact hypothesis). A stable crust was apparently in place by 4,433 Ma, since zircon crystals from Western Australia have been dated at 4,404 ± 8 Ma. The term "Precambrian" is used by geologists and paleontologists for general discussions not requiring a more specific eon name. However, both the United States Geological Survey and the International Commission on Stratigraphy regard the term as informal. Because the span of time falling under the Precambrian consists of three eons (the Hadean, the Archean, and the Proterozoic), it is sometimes described as a supereon, but this is also an informal term, not defined by the ICS in its chronostratigraphic guide. An older term derived from a root meaning "earliest" was used as a synonym for pre-Cambrian, or more specifically Archean. Life forms A specific date for the origin of life has not been determined. Carbon found in 3.8 billion-year-old rocks (Archean Eon) from islands off western Greenland may be of organic origin. Well-preserved microscopic fossils of bacteria older than 3.46 billion years have been found in Western Australia. Probable fossils 100 million years older have been found in the same area. However, there is evidence that life could have evolved over 4.280 billion years ago. There is a fairly solid record of bacterial life throughout the remainder (Proterozoic Eon) of the Precambrian. Complex multicellular organisms may have appeared as early as 2100 Ma. However, the interpretation of ancient fossils is problematic, and "... some definitions of multicellularity encompass everything from simple bacterial colonies to badgers." Other possible early complex multicellular organisms include a possible 2450 Ma red alga from the Kola Peninsula, 1650 Ma carbonaceous biosignatures in north China, the 1600 Ma Rafatazmia, and a possible 1047 Ma Bangiomorpha red alga from the Canadian Arctic. The earliest fossils widely accepted as complex multicellular organisms date from the Ediacaran Period. A very diverse collection of soft-bodied forms is found in a variety of locations worldwide and date to between 635 and 542 Ma. These are referred to as Ediacaran or Vendian biota. Hard-shelled creatures appeared toward the end of that time span, marking the beginning of the Phanerozoic Eon.
By the middle of the following Cambrian Period, a very diverse fauna is recorded in the Burgess Shale, including some which may represent stem groups of modern taxa. The increase in diversity of lifeforms during the early Cambrian is called the Cambrian explosion of life. While land seems to have been devoid of plants and animals, cyanobacteria and other microbes formed prokaryotic mats that covered terrestrial areas. Tracks from an animal with leg-like appendages have been found in what was mud 551 million years ago. Planetary environment and the oxygen catastrophe Evidence of the details of plate motions and other tectonic activity in the Precambrian has been poorly preserved. It is generally believed that small proto-continents existed before 4280 Ma, and that most of the Earth's landmasses collected into a single supercontinent around 1130 Ma. The supercontinent, known as Rodinia, broke up around 750 Ma. A number of glacial periods have been identified going as far back as the Huronian epoch, roughly 2400–2100 Ma. One of the best studied is the Sturtian-Varangian glaciation, around 850–635 Ma, which may have brought glacial conditions all the way to the equator, resulting in a "Snowball Earth". The atmosphere of the early Earth is not well understood. Most geologists believe it was composed primarily of nitrogen, carbon dioxide, and other relatively inert gases, and was lacking in free oxygen. There is, however, evidence that an oxygen-rich atmosphere existed since the early Archean. At present, it is still believed that molecular oxygen was not a significant fraction of Earth's atmosphere until after photosynthetic life forms evolved and began to produce it in large quantities as a byproduct of their metabolism. This radical shift from a chemically inert to an oxidizing atmosphere caused an ecological crisis, sometimes called the oxygen catastrophe. At first, oxygen would have quickly combined with other elements in Earth's crust, primarily iron, removing it from the atmosphere. After the supply of oxidizable surfaces ran out, oxygen would have begun to accumulate in the atmosphere, and the modern high-oxygen atmosphere would have developed. Evidence for this lies in older rocks that contain massive banded iron formations that were laid down as iron oxides. Subdivisions A terminology has evolved covering the early years of the Earth's existence, as radiometric dating has allowed absolute dates to be assigned to specific formations and features. The Precambrian is divided into three eons: the Hadean (4600–4000 Ma), Archean (4000–2500 Ma) and Proterozoic (2500–541 Ma). See Timetable of the Precambrian. Proterozoic: this eon refers to the time from the lower Cambrian boundary, 541 Ma, back through 2500 Ma. As originally used, it was a synonym for "Precambrian" and hence included everything prior to the Cambrian boundary. The Proterozoic Eon is divided into three eras: the Neoproterozoic, Mesoproterozoic and Paleoproterozoic. Neoproterozoic: The youngest geologic era of the Proterozoic Eon, from the Cambrian Period lower boundary (541 Ma) back to 1000 Ma. The Neoproterozoic corresponds to Precambrian Z rocks of older North American stratigraphy. Ediacaran: The youngest geologic period within the Neoproterozoic Era. The "2012 Geologic Time Scale" dates it from 635 to 541 Ma. In this period the Ediacaran fauna appeared. Cryogenian: The middle period in the Neoproterozoic Era: 850–635 Ma. Tonian: the earliest period of the Neoproterozoic Era: 1000–850 Ma. Mesoproterozoic: the middle era of the Proterozoic Eon, 1600–1000 Ma.
Corresponds to "Precambrian Y" rocks of older North American stratigraphy. Paleoproterozoic: oldest era of the Proterozoic Eon, - Ma. Corresponds to "Precambrian X" rocks of older North American stratigraphy. Archean Eon: - Ma. Hadean Eon: – Ma. This term was intended originally to cover the time before any preserved rocks were deposited, although some zircon crystals from about 4400 Ma demonstrate the existence of crust in the Hadean Eon. Other records from Hadean time come from the moon and meteorites. It has been proposed that the Precambrian should be divided into eons and eras that reflect stages of planetary evolution, rather than the current scheme based upon numerical ages. Such a system could rely on events in the stratigraphic record and be demarcated by GSSPs. The Precambrian could be divided into five "natural" eons, characterized as follows: Accretion and differentiation: a period of planetary formation until giant Moon-forming impact event. Hadean: dominated by heavy bombardment from about 4.51 Ga (possibly including a Cool Early Earth period) to the end of the Late Heavy Bombardment period. Archean: a period defined by the first crustal formations (the Isua greenstone belt) until the deposition of banded iron formations due to increasing atmospheric oxygen content. Transition: a period of continued iron banded formation until the first continental red beds. Proterozoic: a period of modern plate tectonics until the first animals. Precambrian supercontinents
Diseases such as pertussis (or whooping cough) are caused by the bacterium Bordetella pertussis. This bacterium causes a serious acute respiratory infection that affects various animals and humans and has led to the deaths of many young children. The pertussis toxin is a protein exotoxin that binds to cell receptors by two dimers and reacts with different cell types such as T lymphocytes, which play a role in cell immunity. PCR is an important testing tool that can detect sequences within the gene for the pertussis toxin. Because PCR has a high sensitivity for the toxin and a rapid turnaround time, it is very efficient for diagnosing pertussis when compared to culture. Forensic applications The development of PCR-based genetic (or DNA) fingerprinting protocols has seen widespread application in forensics: In its most discriminating form, genetic fingerprinting can uniquely discriminate any one person from the entire population of the world. Minute samples of DNA can be isolated from a crime scene, and compared to that from suspects, or from a DNA database of earlier evidence or convicts. Simpler versions of these tests are often used to rapidly rule out suspects during a criminal investigation. Evidence from decades-old crimes can be tested, confirming or exonerating the people originally convicted. Forensic DNA typing has been an effective way of identifying or exonerating criminal suspects due to analysis of evidence discovered at a crime scene. The human genome has many repetitive regions that can be found within gene sequences or in non-coding regions of the genome. Specifically, up to 40% of human DNA is repetitive. There are two distinct categories for these repetitive, non-coding regions in the genome. The first category is called variable number tandem repeats (VNTR), which are 10–100 base pairs long, and the second category is called short tandem repeats (STR), which consist of repeated 2–10 base pair sections. PCR is used to amplify several well-known VNTRs and STRs using primers that flank each of the repetitive regions. The sizes of the fragments obtained from any individual for each of the STRs will indicate which alleles are present. By analyzing several STRs for an individual, a set of alleles for each person will be found that statistically is likely to be unique. Researchers have identified the complete sequence of the human genome. This sequence can be easily accessed through the NCBI website and is used in many real-life applications. For example, the FBI has compiled a set of DNA marker sites used for identification, and these are called the Combined DNA Index System (CODIS) DNA database. Using this database enables statistical analysis to be used to determine the probability that a DNA sample will match. PCR is a very powerful and significant analytical tool to use for forensic DNA typing because researchers only need a very small amount of the target DNA to be used for analysis. For example, a single human hair with attached hair follicle has enough DNA to conduct the analysis. Similarly, a few sperm, skin samples from under the fingernails, or a small amount of blood can provide enough DNA for conclusive analysis. Less discriminating forms of DNA fingerprinting can help in DNA paternity testing, where an individual is matched with their close relatives. DNA from unidentified human remains can be tested, and compared with that from possible parents, siblings, or children. Similar testing can be used to confirm the biological parents of an adopted (or kidnapped) child. The actual biological father of a newborn can also be confirmed (or ruled out).
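The "statistically likely to be unique" claim can be made concrete with the product rule used in forensic statistics: the estimated genotype frequencies at independent STR loci are multiplied together to give a random match probability. The following is a minimal sketch only; the locus names are real CODIS markers but the allele frequencies and the example profile are invented for illustration and are not population data.

# Sketch of the "product rule" for a multi-locus STR profile.
# Allele frequencies below are illustrative only, not real population data.
allele_freqs = {
    "D8S1179": {"13": 0.30, "14": 0.20},
    "D21S11":  {"28": 0.16, "30": 0.25},
    "TH01":    {"6": 0.23, "9.3": 0.31},
}

# A genotype is the pair of alleles observed at each locus.
profile = {"D8S1179": ("13", "14"), "D21S11": ("30", "30"), "TH01": ("6", "9.3")}

def genotype_frequency(p, q, homozygous):
    """Hardy-Weinberg expectation: p^2 for homozygotes, 2pq for heterozygotes."""
    return p * p if homozygous else 2 * p * q

match_probability = 1.0
for locus, (a1, a2) in profile.items():
    p = allele_freqs[locus][a1]
    q = allele_freqs[locus][a2]
    match_probability *= genotype_frequency(p, q, homozygous=(a1 == a2))

print(f"Random match probability across {len(profile)} loci: {match_probability:.2e}")
# With the 13-20 loci used in real databases the product becomes vanishingly small,
# which is why a multi-locus STR profile is effectively unique.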
The PCR AMGX/AMGY design facilitates amplification of DNA sequences from a very minuscule amount of genome, and it can also be used for real-time sex determination from forensic bone samples. This provides a powerful and effective way to determine sex in forensic cases and ancient specimens. Research applications PCR has been applied to many areas of research in molecular genetics: PCR allows rapid production of short pieces of DNA, even when not more than the sequence of the two primers is known. This ability of PCR augments many methods, such as generating hybridization probes for Southern or northern blot hybridization. PCR supplies these techniques with large amounts of pure DNA, sometimes as a single strand, enabling analysis even from very small amounts of starting material. The task of DNA sequencing can also be assisted by PCR. Known segments of DNA can easily be produced from a patient with a genetic disease mutation. Modifications to the amplification technique can extract segments from a completely unknown genome, or can generate just a single strand of an area of interest. PCR has numerous applications to the more traditional process of DNA cloning. It can extract segments for insertion into a vector from a larger genome, which may be only available in small quantities. Using a single set of 'vector primers', it can also analyze or extract fragments that have already been inserted into vectors. Some alterations to the PCR protocol can generate mutations (general or site-directed) of an inserted fragment. Sequence-tagged sites is a process where PCR is used as an indicator that a particular segment of a genome is present in a particular clone. The Human Genome Project found this application vital to mapping the cosmid clones they were sequencing, and to coordinating the results from different laboratories. An application of PCR is the phylogenic analysis of DNA from ancient sources, such as that found in the recovered bones of Neanderthals, from frozen tissues of mammoths, or from the brains of Egyptian mummies. In some cases the highly degraded DNA from these sources might be reassembled during the early stages of amplification. A common application of PCR is the study of patterns of gene expression. Tissues (or even individual cells) can be analyzed at different stages to see which genes have become active, or which have been switched off. This application can also use quantitative PCR to quantitate the actual levels of expression. The ability of PCR to simultaneously amplify several loci from individual sperm has greatly enhanced the more traditional task of genetic mapping by studying chromosomal crossovers after meiosis. Rare crossover events between very close loci have been directly observed by analyzing thousands of individual sperm. Similarly, unusual deletions, insertions, translocations, or inversions can be analyzed, all without having to wait (or pay) for the long and laborious processes of fertilization, embryogenesis, etc. Site-directed mutagenesis: PCR can be used to create mutant genes with mutations chosen by scientists at will. These mutations can be chosen in order to understand how proteins accomplish their functions, and to change or improve protein function. Advantages PCR has a number of advantages. It is fairly simple to understand and to use, and produces results rapidly. The technique is highly sensitive with the potential to produce millions to billions of copies of a specific product for sequencing, cloning, and analysis.
qRT-PCR shares the same advantages as PCR, with the added advantage of quantification of the synthesized product. Therefore, it has its uses to analyze alterations of gene expression levels in tumors, microbes, or other disease states. PCR is a very powerful and practical research tool. Many diseases of unknown etiology are being characterized by sequencing with the help of PCR. The technique can help identify the sequence of previously unknown viruses related to those already known and thus give us a better understanding of the disease itself. If the procedure can be further simplified and sensitive non-radiometric detection systems can be developed, the PCR will assume a prominent place in the clinical laboratory for years to come. Limitations One major limitation of PCR is that prior information about the target sequence is necessary in order to generate the primers that will allow its selective amplification. This means that, typically, PCR users must know the precise sequence(s) upstream of the target region on each of the two single-stranded templates in order to ensure that the DNA polymerase properly binds to the primer-template hybrids and subsequently generates the entire target region during DNA synthesis. Like all enzymes, DNA polymerases are also prone to error, which in turn causes mutations in the PCR fragments that are generated. Another limitation of PCR is that even the smallest amount of contaminating DNA can be amplified, resulting in misleading or ambiguous results. To minimize the chance of contamination, investigators should reserve separate rooms for reagent preparation, the PCR, and analysis of product. Reagents should be dispensed into single-use aliquots. Pipettors with disposable plungers and extra-long pipette tips should be routinely used. It is moreover recommended to ensure that the lab set-up follows a unidirectional workflow. No materials or reagents used in the PCR and analysis rooms should ever be taken into the PCR preparation room without thorough decontamination. Environmental samples that contain humic acids may inhibit PCR amplification and lead to inaccurate results. Variations Allele-specific PCR: a diagnostic or cloning technique based on single-nucleotide variations (SNVs, not to be confused with SNPs) (single-base differences in a patient). It requires prior knowledge of a DNA sequence, including differences between alleles, and uses primers whose 3' ends encompass the SNV (a base-pair buffer around the SNV is usually incorporated). PCR amplification under stringent conditions is much less efficient in the presence of a mismatch between template and primer, so successful amplification with an SNP-specific primer signals presence of the specific SNP in a sequence. See SNP genotyping for more information. Assembly PCR or Polymerase Cycling Assembly (PCA): artificial synthesis of long DNA sequences by performing PCR on a pool of long oligonucleotides with short overlapping segments. The oligonucleotides alternate between sense and antisense directions, and the overlapping segments determine the order of the PCR fragments, thereby selectively producing the final long DNA product. Asymmetric PCR: preferentially amplifies one DNA strand in a double-stranded DNA template. It is used in sequencing and hybridization probing where amplification of only one of the two complementary strands is required. PCR is carried out as usual, but with a great excess of the primer for the strand targeted for amplification.
Because of the slow (arithmetic) amplification later in the reaction after the limiting primer has been used up, extra cycles of PCR are required. A recent modification on this process, known as Linear-After-The-Exponential-PCR (LATE-PCR), uses a limiting primer with a higher melting temperature (Tm) than the excess primer to maintain reaction efficiency as the limiting primer concentration decreases mid-reaction. Convective PCR: a pseudo-isothermal way of performing PCR. Instead of repeatedly heating and cooling the PCR mixture, the solution is subjected to a thermal gradient. The resulting thermal instability driven convective flow automatically shuffles the PCR reagents from the hot and cold regions repeatedly enabling PCR. Parameters such as thermal boundary conditions and geometry of the PCR enclosure can be optimized to yield robust and rapid PCR by harnessing the emergence of chaotic flow fields. Such convective flow PCR setup significantly reduces device power requirement and operation time. Dial-out PCR: a highly parallel method for retrieving accurate DNA molecules for gene synthesis. A complex library of DNA molecules is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by PCR. Digital PCR (dPCR): used to measure the quantity of a target DNA sequence in a DNA sample. The DNA sample is highly diluted so that after running many PCRs in parallel, some of them do not receive a single molecule of the target DNA. The target DNA concentration is calculated using the proportion of negative outcomes. Hence the name 'digital PCR'. Helicase-dependent amplification: similar to traditional PCR, but uses a constant temperature rather than cycling through denaturation and annealing/extension cycles. DNA helicase, an enzyme that unwinds DNA, is used in place of thermal denaturation. Hot start PCR: a technique that reduces non-specific amplification during the initial set up stages of the PCR. It may be performed manually by heating the reaction components to the denaturation temperature (e.g., 95 °C) before adding the polymerase. Specialized enzyme systems have been developed that inhibit the polymerase's activity at ambient temperature,
below). Many modern thermal cyclers make use of the Peltier effect, which permits both heating and cooling of the block holding the PCR tubes simply by reversing the electric current. Thin-walled reaction tubes permit favorable thermal conductivity to allow for rapid thermal equilibrium. Most thermal cyclers have heated lids to prevent condensation at the top of the reaction tube. Older thermal cyclers lacking a heated lid require a layer of oil on top of the reaction mixture or a ball of wax inside the tube. Procedure Typically, PCR consists of a series of 20–40 repeated temperature changes, called thermal cycles, with each cycle commonly consisting of two or three discrete temperature steps (see figure below). The cycling is often preceded by a single temperature step at a very high temperature (>90 °C), and followed by one hold at the end for final product extension or brief storage. The temperatures used and the length of time they are applied in each cycle depend on a variety of parameters, including the enzyme used for DNA synthesis, the concentration of divalent ions and dNTPs in the reaction, and the melting temperature (Tm) of the primers. The individual steps common to most PCR methods are as follows: Initialization: This step is only required for DNA polymerases that require heat activation by hot-start PCR. It consists of heating the reaction chamber to a temperature of 94–96 °C, or 98 °C if extremely thermostable polymerases are used, which is then held for 1–10 minutes. Denaturation: This step is the first regular cycling event and consists of heating the reaction chamber to 94–98 °C for 20–30 seconds. This causes DNA melting, or denaturation, of the double-stranded DNA template by breaking the hydrogen bonds between complementary bases, yielding two single-stranded DNA molecules. Annealing: In the next step, the reaction temperature is lowered to 50–65 °C for 20–40 seconds, allowing annealing of the primers to each of the single-stranded DNA templates. Two different primers are typically included in the reaction mixture: one for each of the two single-stranded complements containing the target region. The primers are single-stranded sequences themselves, but are much shorter than the length of the target region, complementing only very short sequences at the 3' end of each strand. It is critical to determine a proper temperature for the annealing step because efficiency and specificity are strongly affected by the annealing temperature. This temperature must be low enough to allow for hybridization of the primer to the strand, but high enough for the hybridization to be specific, i.e., the primer should bind only to a perfectly complementary part of the strand, and nowhere else. If the temperature is too low, the primer may bind imperfectly. If it is too high, the primer may not bind at all. A typical annealing temperature is about 3–5 °C below the Tm of the primers used. Stable hydrogen bonds between complementary bases are formed only when the primer sequence very closely matches the template sequence. During this step, the polymerase binds to the primer-template hybrid and begins DNA formation. Extension/elongation: The temperature at this step depends on the DNA polymerase used; the optimum activity temperature for the thermostable Taq DNA polymerase is approximately 75–80 °C, though a temperature of 72 °C is commonly used with this enzyme.
In this step, the DNA polymerase synthesizes a new DNA strand complementary to the DNA template strand by adding free dNTPs from the reaction mixture that are complementary to the template in the 5'-to-3' direction, condensing the 5'-phosphate group of the dNTPs with the 3'-hydroxyl group at the end of the nascent (elongating) DNA strand. The precise time required for elongation depends both on the DNA polymerase used and on the length of the DNA target region to amplify. As a rule of thumb, at their optimal temperature, most DNA polymerases polymerize a thousand bases per minute. Under optimal conditions (i.e., if there are no limitations due to limiting substrates or reagents), at each extension/elongation step, the number of DNA target sequences is doubled. With each successive cycle, the original template strands plus all newly generated strands become template strands for the next round of elongation, leading to exponential (geometric) amplification of the specific DNA target region. The processes of denaturation, annealing and elongation constitute a single cycle. Multiple cycles are required to amplify the DNA target to millions of copies. The formula used to calculate the number of DNA copies formed after a given number of cycles is 2^n, where n is the number of cycles. Thus, a reaction set for 30 cycles results in 2^30, or 1,073,741,824 (just over one billion), copies of the original double-stranded DNA target region. Final elongation: This single step is optional, but is performed at a temperature of 70–74 °C (the temperature range required for optimal activity of most polymerases used in PCR) for 5–15 minutes after the last PCR cycle to ensure that any remaining single-stranded DNA is fully elongated. Final hold: The final step cools the reaction chamber to 4–15 °C for an indefinite time, and may be employed for short-term storage of the PCR products. To check whether the PCR successfully generated the anticipated DNA target region (also sometimes referred to as the amplimer or amplicon), agarose gel electrophoresis may be employed for size separation of the PCR products. The size of the PCR products is determined by comparison with a DNA ladder, a molecular weight marker which contains DNA fragments of known sizes, run on the gel alongside the PCR products. Stages As with other chemical reactions, the reaction rate and efficiency of PCR are affected by limiting factors. Thus, the entire PCR process can further be divided into three stages based on reaction progress: Exponential amplification: At every cycle, the amount of product is doubled (assuming 100% reaction efficiency). After 30 cycles, a single copy of DNA can be increased up to 1,000,000,000 (one billion) copies. In a sense, then, the replication of a discrete strand of DNA is being manipulated in a tube under controlled conditions. The reaction is very sensitive: only minute quantities of DNA need to be present. Leveling off stage: The reaction slows as the DNA polymerase loses activity and as consumption of reagents, such as dNTPs and primers, causes them to become more limited. Plateau: No more product accumulates due to exhaustion of reagents and enzyme. Optimization In practice, PCR can fail for various reasons, such as sensitivity or contamination. Contamination with extraneous DNA can lead to spurious products and is addressed with lab protocols and procedures that separate pre-PCR mixtures from potential DNA contaminants. For instance, if DNA from a crime scene is analyzed, a single DNA molecule from lab personnel could be amplified and misguide the investigation.
Hence, the PCR setup area is separated from areas used for the analysis or purification of PCR products, disposable plasticware is used, and the work surface is thoroughly cleaned between reaction setups. Specificity can be adjusted by experimental conditions so that no spurious products are generated. Primer-design techniques are important in improving PCR product yield and in avoiding the formation of unspecific products. The usage of alternate buffer components or polymerase enzymes can help with amplification of long or otherwise problematic regions of DNA. For instance, Q5 polymerase is said to be ~280 times less error-prone than Taq polymerase. Adjusting the running parameters (e.g. temperature and duration of cycles), or adding reagents, such as formamide, may increase the specificity and yield of PCR. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design. Applications Selective DNA isolation PCR allows isolation of DNA fragments from genomic DNA by selective amplification of a specific region of DNA. This use of PCR augments many methods, such as generating hybridization probes for Southern or northern hybridization and DNA cloning, which require larger amounts of DNA, representing a specific DNA region. PCR supplies these techniques with high amounts of pure DNA, enabling analysis of DNA samples even from very small amounts of starting material. Other applications of PCR include DNA sequencing to determine unknown PCR-amplified sequences in which one of the amplification primers may be used in Sanger sequencing, and isolation of a DNA sequence to expedite recombinant DNA technologies involving the insertion of a DNA sequence into a plasmid, phage, or cosmid (depending on size) or the genetic material of another organism. Bacterial colonies (such as E. coli) can be rapidly screened by PCR for correct DNA vector constructs. PCR may also be used for genetic fingerprinting; a forensic technique used to identify a person or organism by comparing experimental DNAs through different PCR-based methods. Some PCR fingerprint methods have high discriminative power and can be used to identify genetic relationships between individuals, such as parent-child or between siblings, and are used in paternity testing (Fig. 4). This technique may also be used to determine evolutionary relationships among organisms when certain molecular clocks are used (i.e. the 16S rRNA and recA genes of microorganisms). Amplification and quantification of DNA Because PCR amplifies the regions of DNA that it targets, PCR can be used to analyze extremely small amounts of sample. This is often critical for forensic analysis, when only a trace amount of DNA is available as evidence. PCR may also be used in the analysis of ancient DNA that is tens of thousands of years old. These PCR-based techniques have been successfully used on animals, such as a forty-thousand-year-old mammoth, and also on human DNA, in applications ranging from the analysis of Egyptian mummies to the identification of a Russian tsar and the body of English king Richard III. Quantitative PCR or Real Time PCR (qPCR, not to be confused with RT-PCR) methods allow the estimation of the amount of a given sequence present in a sample—a technique often applied to quantitatively determine levels of gene expression. Quantitative PCR is an established tool for DNA quantification that measures the accumulation of DNA product after each round of PCR amplification.
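A small numerical sketch of the exponential-amplification arithmetic described above: the 2^n doubling formula generalizes to N = N0 * (1 + E)^n when the per-cycle efficiency E is below 100%. The efficiency value and starting copy number used here are illustrative assumptions, not measurements from any particular reaction.

def copies_after_cycles(initial_copies, cycles, efficiency=1.0):
    """Idealized PCR yield: each cycle multiplies the template by (1 + efficiency).

    efficiency = 1.0 corresponds to the perfect doubling (2^n) described in the text;
    real reactions fall below that and eventually plateau as reagents are exhausted.
    """
    return initial_copies * (1 + efficiency) ** cycles

# One starting molecule, 30 cycles of perfect doubling -> 2**30 copies.
print(copies_after_cycles(1, 30))                # 1073741824 (just over a billion)

# A more realistic 90% per-cycle efficiency gives far fewer copies from the same start.
print(f"{copies_after_cycles(1, 30, 0.9):.3g}")  # roughly 2.3e8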
qPCR allows the quantification and detection of a specific DNA sequence in real time since it measures concentration while the synthesis process is taking place. There are two methods for simultaneous detection and quantification. The first method consists of using fluorescent dyes that are retained nonspecifically in between the double strands. The second method involves probes that are specific for particular sequences and are fluorescently labeled. Detection of DNA using these methods can only be seen after the hybridization of probes with their complementary DNA takes place. An interesting technique combination is real-time PCR and reverse transcription. This sophisticated technique, called RT-qPCR, allows for the quantification of a small quantity of RNA. Through this combined technique, mRNA is converted to cDNA, which is further quantified using qPCR. This technique lowers the possibility of error at the end point of PCR, increasing chances for detection of genes associated with genetic diseases such as cancer. Laboratories use RT-qPCR for the purpose of sensitively measuring gene regulation. The mathematical foundations for the reliable quantification of the PCR and RT-qPCR facilitate the implementation of accurate fitting procedures of experimental data in research, medical, diagnostic and infectious disease applications. Medical and diagnostic applications Prospective parents can be tested for being genetic carriers, or their children might be tested for actually being affected by a disease. DNA samples for prenatal testing can be obtained by amniocentesis, chorionic villus sampling, or even by the analysis of rare fetal cells circulating in the mother's bloodstream. PCR analysis is also essential to preimplantation genetic diagnosis, where individual cells of a developing embryo are tested for mutations. PCR can also be used as part of a sensitive test for tissue typing, vital to organ transplantation. There is even a proposal to replace the traditional antibody-based tests for blood type with PCR-based tests. Many forms of cancer involve alterations to oncogenes. By using PCR-based tests to study these mutations, therapy regimens can sometimes be individually customized to a patient. PCR permits early diagnosis of malignant diseases such as leukemia and lymphomas, which is currently the most highly developed application in cancer research and is already being used routinely. PCR assays can be performed directly on genomic DNA samples to detect translocation-specific malignant cells at a sensitivity that is at least 10,000-fold higher than that of other methods. PCR is very useful in the medical field since it allows for the isolation and amplification of tumor suppressors. Quantitative PCR, for example, can be used to quantify and analyze single cells, as well as recognize DNA, mRNA, and protein conformations and combinations. Infectious disease applications PCR allows for rapid and highly specific diagnosis of infectious diseases, including those caused by bacteria or viruses. PCR also permits identification of non-cultivatable or slow-growing microorganisms such as mycobacteria, anaerobic bacteria, or viruses from tissue culture assays and animal models. The basis for PCR diagnostic applications in microbiology is the detection of infectious agents and the discrimination of non-pathogenic from pathogenic strains by virtue of specific genes.
Characterization and detection of infectious disease organisms have been revolutionized by PCR in the following ways: The human immunodeficiency virus (or HIV), is a difficult target to find and eradicate. The earliest tests for infection relied on the presence of antibodies to the virus circulating in the bloodstream. However, antibodies don't appear until many weeks after infection, maternal antibodies mask the infection of a newborn, and therapeutic agents to fight the infection don't affect the antibodies. PCR tests have been developed that can detect as little as one viral genome among the DNA of over 50,000 host cells. Infections can be detected earlier, donated blood can be screened directly for the virus, newborns can be immediately tested for infection, and the effects of antiviral treatments can be quantified. Some disease organisms, such as that for tuberculosis, are difficult to sample from patients and slow to be grown in the laboratory. PCR-based tests have allowed detection of small numbers of disease organisms (both live or dead), in convenient samples. Detailed genetic analysis can also be used to detect antibiotic resistance, allowing immediate and effective therapy. The effects of therapy can also be immediately evaluated. The spread of a disease organism through populations of domestic or wild animals can be monitored by PCR testing. In many cases, the appearance of new virulent sub-types can be detected and monitored. The sub-types of an organism that were responsible for earlier epidemics can also be determined by PCR analysis. Viral DNA can be detected by PCR. The primers used must be specific to the targeted sequences in the DNA of a virus, and PCR can be used for diagnostic analyses or DNA sequencing of the viral genome. The high sensitivity of PCR permits virus detection soon after infection and even before the onset of disease. Such early detection may give physicians a significant lead time in treatment. The amount of virus ("viral load") in a patient can also be quantified by PCR-based DNA quantitation techniques (see below). A variant of PCR (RT-PCR) is used for detecting viral RNA rather than DNA: in this test the enzyme reverse transcriptase is used to generate a DNA sequence which matches the viral RNA; this DNA is then amplified as per the usual PCR method. RT-PCR is widely used to detect the SARS-CoV-2 viral genome.
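The viral-load quantification mentioned above is usually done against a standard curve: the threshold cycle (Ct) of an unknown sample is compared with Ct values measured for a dilution series of known copy number, since Ct falls by about 3.32 cycles per 10-fold increase in starting template at 100% efficiency. The calibration points and the query Ct in the sketch below are invented for illustration and do not come from any real assay.

import math

# Invented calibration points: (known input copies, measured Ct) for a dilution series.
standard_curve = [(1e6, 15.1), (1e5, 18.4), (1e4, 21.8), (1e3, 25.1), (1e2, 28.5)]

# Fit Ct = slope * log10(copies) + intercept by simple least squares.
xs = [math.log10(c) for c, _ in standard_curve]
ys = [ct for _, ct in standard_curve]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def copies_from_ct(ct):
    """Interpolate an unknown sample's starting copy number from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# A slope near -3.32 corresponds to ~100% amplification efficiency.
print(f"slope = {slope:.2f}, efficiency = {10 ** (-1 / slope) - 1:.0%}")
print(f"estimated copies for Ct 23.0: {copies_from_ct(23.0):.3g}")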
polymerase III holoenzyme Family X: Pol β, λ, μ Terminal deoxynucleotidyl transferase (TDT), which lends diversity to antibody heavy chains. Family Y: DNA polymerase IV (DinB) and DNA polymerase V (UmuD'2C) - SOS repair polymerases; Pol η, ι, κ Reverse transcriptase (RT; RNA-directed DNA polymerase; RdDP) Telomerase DNA-directed RNA polymerase (DdRP, RNAP) Multi-subunit (msDdRP): RNA polymerase I, RNA polymerase II, RNA polymerase III Single-subunit (ssDdRP): T7 RNA polymerase, POLRMT Primase, PrimPol RNA replicase (RNA-directed RNA polymerase, RdRP) Viral (single-subunit) Eukaryotic cellular (cRdRP; dual-subunit) Template-less RNA elongation Polyadenylation: PAP, PNPase By structure Polymerases are generally split into two superfamilies, the "right hand" fold () and the "double psi beta barrel" (often simply "double-barrel") fold. The former is seen in almost all DNA polymerases and almost all viral single-subunit polymerases; they are marked by a conserved "palm" domain. The latter is seen in all multi-subunit RNA polymerases, in cRdRP, and in "family D" DNA polymerases found in archaea. The "X" family represented by DNA polymerase beta has only a vague "palm" shape, and is sometimes considered a different superfamily (). Primases generally don't fall into either category. Bacterial primases usually have the Toprim domain,
and are related to topoisomerases and the mitochondrial helicase Twinkle. Archaeal and eukaryotic primases form an unrelated AEP family, possibly related to
Canada Pacific Railway Company, sought the potentially lucrative charter for the project. The problem lay in the fact that Allan, unbeknownst to Macdonald, was secretly in cahoots with American financiers such as George W. McMullen and Jay Cooke, men who were deeply interested in the rival American undertaking, the Northern Pacific Railroad. Scandal Two groups competed for the contract to build the railway, Hugh Allan's Canada Pacific Railway Company and David Lewis Macpherson's Inter-Oceanic Railway Company. On April 2, 1873, Lucius Seth Huntington, a Liberal Member of Parliament, created an uproar in the House of Commons. He announced he had uncovered evidence that Allan and his associates had been granted the Canadian Pacific Railway contract in return for political donations of $360,000. In 1873, it became known that Allan had contributed a large sum of money to the Conservative government's re-election campaign of 1872; some sources quote a sum over $360,000. Allan had promised to keep American capital out of the railway deal, but had lied to Macdonald over this vital point, and Macdonald later discovered the lie. The Liberal party, at this time the opposition party in Parliament, accused the Conservatives of having made a tacit agreement to give the contract to Hugh Allan in exchange for money. In making such allegations, the Liberals and their allies in the press (in particular, George Brown's newspaper The Globe) presumed that most of the money had been used to bribe voters in the 1872 election. The secret ballot, then considered a novelty, had not yet been introduced in Canada. Although it was illegal to offer, solicit or accept bribes in exchange for votes, effective enforcement of this prohibition proved impossible. Despite Macdonald's claims that he was innocent, evidence came to light showing receipts of money from Allan to Macdonald and some of his political colleagues. Perhaps even more damaging to Macdonald was when the Liberals discovered a telegram, through a former employee of Allan, which was thought to have been stolen from the safe of Allan's lawyer, John Abbott. The scandal proved fatal to Macdonald's government. Macdonald's control of Parliament was already tenuous following the 1872 election. In a time when party discipline was not as strong as it is today, once Macdonald's culpability in the scandal became known he could no longer expect to retain the confidence of the House of Commons. Macdonald resigned as prime minister on 5 November 1873. He also offered his resignation as the head of the Conservative party, but it was not accepted and he was convinced to stay. Perhaps as a direct result of this scandal, the Conservative party fell in the eyes of the public and was relegated to being the Official Opposition in the federal election of 1874. This election, in which secret ballots were used for the first time, gave Alexander Mackenzie a firm mandate to succeed Macdonald as the new prime minister of Canada. Despite the short-term defeat, the scandal was not a mortal wound to Macdonald, the Conservative Party, or the Canadian Pacific Railway. An economic depression gripped Canada shortly after Macdonald left office, and although the causes of the depression were largely external to Canada many Canadians nevertheless blamed Mackenzie for the ensuing hard times. Macdonald would return as prime minister in the 1878 election thanks
strand, the template DNA runs in the 5′→3′ direction. Since DNA polymerase cannot add bases in the 3′→5′ direction complementary to the template strand, DNA is synthesized ‘backward’ in short fragments moving away from the replication fork, known as Okazaki fragments. Unlike in the leading strand, this method results in the repeated starting and stopping of DNA synthesis, requiring multiple RNA primers. Along the DNA template, primase intersperses RNA primers that DNA polymerase uses to synthesize DNA from in the 5′→3′ direction. Another example of primers being used to enable DNA synthesis is reverse transcription. Reverse transcriptase is an enzyme that uses a template strand of RNA to synthesize a complementary strand of DNA. The DNA polymerase component of reverse transcriptase requires an existing 3' end to begin synthesis. Primer removal After the insertion of Okazaki fragments, the RNA primers are removed (the mechanism of removal differs between prokaryotes and eukaryotes) and replaced with new deoxyribonucleotides that fill the gaps where the RNA was present. DNA ligase then joins the fragmented strands together, completing the synthesis of the lagging strand. In prokaryotes, DNA polymerase I synthesizes the Okazaki fragment until it reaches the previous RNA primer. Then the enzyme simultaneously acts as a 5′→3′ exonuclease, removing primer ribonucleotides in front and adding deoxyribonucleotides behind until the region has been replaced by DNA, leaving a small gap in the DNA backbone between Okazaki fragments which is sealed by DNA ligase. In eukaryotic primer removal, DNA polymerase δ extends the Okazaki fragment in 5′→3′ direction, and upon encountering the RNA primer from the previous Okazaki fragment, it displaces the 5′ end of the primer into a single-stranded RNA flap, which is removed by nuclease cleavage. Cleavage of the RNA flaps involves either flap structure-specific endonuclease 1 (FEN1) cleavage of short flaps, or coating of long flaps by the single-stranded DNA binding protein replication protein A (RPA) and sequential cleavage by Dna2 nuclease and FEN1. Uses of synthetic primers Synthetic primers are chemically synthesized oligonucleotides, usually of DNA, which can be customized to anneal to a specific site on the template DNA. In solution, the primer spontaneously hybridizes with the template through Watson-Crick base pairing before being extended by DNA polymerase. The ability to create and customize synthetic primers has proven an invaluable tool necessary to a variety of molecular biological approaches involving the analysis of DNA. Both the Sanger chain termination method and the “Next-Gen” method of DNA sequencing require primers to initiate the reaction. PCR primer design The polymerase chain reaction (PCR) uses a pair of custom primers to direct DNA elongation toward each other at opposite ends of the sequence being amplified. These primers are typically between 18 and 24 bases in length and must code for only the specific upstream and downstream sites of the sequence being amplified. A primer that can bind to multiple regions along the DNA will amplify them all, eliminating the purpose of PCR. A few criteria must be brought into consideration when designing a pair of PCR primers. Pairs of primers should have similar melting temperatures since annealing during PCR occurs for both strands simultaneously, and this shared melting temperature must not be either too much higher or lower than the reaction's annealing temperature. A primer with a Tm (melting temperature) too much higher than the reaction's annealing temperature may mishybridize and extend at an incorrect location along the DNA sequence. A Tm significantly lower than the annealing temperature may fail to anneal and extend at all. Additionally, primer sequences need to be chosen to uniquely select for a region of DNA, avoiding the possibility of hybridization to a similar sequence nearby. A commonly used method for selecting a primer site is BLAST search, whereby all the possible regions to which a primer may bind can be seen. Both the nucleotide sequence as well as the primer itself can be BLAST searched. The free NCBI tool Primer-BLAST integrates primer design and BLAST search into one application, as do commercial software products such as ePrime and Beacon Designer. Computer simulations of theoretical PCR results (Electronic PCR) may be performed to assist in primer design by giving melting and annealing temperatures, etc. As of 2014, many online tools are freely available for primer design, some of which focus on specific applications of PCR. Primers with high specificity for a subset of DNA templates in the presence of many similar variants can be designed using DECIPHER. Selecting a specific region of DNA for primer binding requires some additional considerations. Regions high in mononucleotide and dinucleotide repeats should be avoided, as loop formation can occur and contribute to mishybridization.
Primers should not easily anneal with other primers in the mixture; this phenomenon can lead to the production of 'primer dimer' products contaminating the end solution. Primers should also not anneal strongly to themselves, as internal hairpins and loops could hinder the annealing with the template DNA. When designing primers, additional nucleotide bases can be added to the back ends of each primer, resulting in a customized cap sequence on each end of the amplified region. One application for this practice is for use in TA cloning, a special subcloning technique similar to PCR, where efficiency can be increased by adding AG tails to the 5′ and the 3′ ends. Degenerate primers Some situations may call for the use of degenerate primers.
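As a minimal sketch of what a degenerate primer means in practice, the function below expands the standard IUPAC ambiguity codes (R = A/G, Y = C/T, N = A/C/G/T, and so on) into every concrete oligonucleotide that the written primer represents; the example sequence is hypothetical.

from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
    "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer: str) -> list:
    # Enumerate every concrete sequence encoded by a degenerate primer.
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer.upper()))]

# A hypothetical primer with two ambiguous positions (R and Y) -> 4 sequences:
print(expand_degenerate("ATGRCYTGA"))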
strand, the template DNA runs in the 5′→3′ direction. Since DNA polymerase cannot add bases in the 3′→5′ direction complementary to the template strand, DNA is synthesized ‘backward’ in short fragments moving away from the replication fork, known as Okazaki fragments. Unlike in the leading strand, this method results in the repeated starting and stopping of DNA synthesis, requiring multiple RNA primers. Along the DNA template, primase intersperses RNA primers, from which DNA polymerase synthesizes DNA in the 5′→3′ direction. Another example of primers being used to enable DNA synthesis is reverse transcription. Reverse transcriptase is an enzyme that uses a template strand of RNA to synthesize a complementary strand of DNA. The DNA polymerase component of reverse transcriptase requires an existing 3′ end to begin synthesis. Primer removal After the insertion of Okazaki fragments, the RNA primers are removed (the mechanism of removal differs between prokaryotes and eukaryotes) and replaced with new deoxyribonucleotides that fill the gaps where the RNA was present. DNA ligase then joins the fragmented strands together, completing the synthesis of the lagging strand. In prokaryotes, DNA polymerase I synthesizes the Okazaki fragment until it reaches the previous RNA primer. Then the enzyme simultaneously acts as a 5′→3′ exonuclease, removing primer ribonucleotides in front and adding deoxyribonucleotides behind until the region has been replaced by DNA, leaving a small gap in the DNA backbone between Okazaki fragments which is sealed by DNA ligase.
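As a toy illustration of the template-directed, base-pairing logic just described, here applied to reverse transcription of an RNA template into cDNA, a short sketch follows; the sequence is hypothetical, and the actual enzyme of course also needs a primer supplying the free 3′ end mentioned above.

RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def cdna_from_rna(rna_template: str) -> str:
    # Return the cDNA strand, written 5'->3', complementary to an RNA template.
    return "".join(RNA_TO_DNA[base] for base in reversed(rna_template.upper()))

print(cdna_from_rna("AUGGCCUAA"))  # hypothetical template -> 'TTAGGCCAT'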
is water-soluble. Purine also gives its name to the wider class of molecules, purines, which include substituted purines and their tautomers. They are the most widely occurring nitrogen-containing heterocycles in nature. Dietary sources Purines are found in high concentration in meat and meat products, especially internal organs such as liver and kidney. In general, plant-based diets are low in purines. High-purine plants and algae include some legumes (lentils and black-eyed peas) and spirulina. Examples of high-purine sources include: sweetbreads, anchovies, sardines, liver, beef kidneys, brains, meat extracts (e.g., Oxo, Bovril), herring, mackerel, scallops, game meats, yeast (beer, yeast extract, nutritional yeast) and gravy. A moderate amount of purine is also contained in red meat, beef, pork, poultry, fish and seafood, asparagus, cauliflower, spinach, mushrooms, green peas, lentils, dried peas, beans, oatmeal, wheat bran, wheat germ, and haws. Biochemistry Purines and pyrimidines make up the two groups of nitrogenous bases and, in turn, the two groups of nucleotide bases. The purine nucleobases are guanine (G) and adenine (A), which form the corresponding deoxyribonucleosides (deoxyadenosine and deoxyguanosine) and ribonucleosides (adenosine, guanosine); their phosphorylated forms are the nucleotide building blocks of DNA and RNA, respectively. Purine bases also play an essential role in many metabolic and signalling processes, as components of the compounds guanosine monophosphate (GMP) and adenosine monophosphate (AMP). In order to perform these essential cellular processes, both purines and pyrimidines are needed by the cell, and in similar quantities. Both purines and pyrimidines are self-inhibiting and mutually activating. When purines are formed, they inhibit the enzymes required for more purine formation while also activating the enzymes needed for pyrimidine formation. Pyrimidines simultaneously self-inhibit and activate purine formation in a similar manner. Because of this, there is nearly an equal amount of both substances in the cell at all times. Properties Purine is both a very weak acid (pKa 8.93) and an even weaker base (pKa 2.39). If dissolved in pure water, the pH will settle roughly halfway between these two pKa values, i.e. at about (8.93 + 2.39)/2 ≈ 5.7. Notable purines There are many naturally occurring purines. They include the nucleobases adenine (2) and guanine (3). In DNA, these bases form hydrogen bonds with their complementary pyrimidines, thymine and cytosine, respectively. This is called complementary base pairing. In RNA, the complement of adenine is uracil instead of thymine. Other notable purines are hypoxanthine, xanthine, theophylline, theobromine, caffeine, uric acid and isoguanine. Functions Aside from the crucial roles of purines (adenine and guanine) in DNA and RNA, purines are also significant components in a number of other important biomolecules, such as ATP, GTP, cyclic AMP, NADH, and coenzyme A. Purine (1) itself has not been found in nature, but it can be produced by organic synthesis. Purines may also function directly as neurotransmitters, acting upon purinergic receptors; adenosine, for example, activates adenosine receptors. History The word purine (pure urine) was coined by the German chemist Emil Fischer in 1884. He synthesized it for the first time in 1898. The starting material for the reaction sequence was uric acid (8), which had been isolated from kidney stones by Carl Wilhelm Scheele in 1776.
Uric acid (8) was reacted with PCl5 to give 2,6,8-trichloropurine (10), which was converted with HI and PH4I to give 2,6-diiodopurine (11). The product was reduced to purine (1) using zinc dust. Metabolism Many organisms have metabolic pathways to synthesize and break down purines. Purines are biologically synthesized as nucleosides (bases attached to ribose). Accumulation of modified purine nucleotides is detrimental to various cellular processes, especially those involving DNA
and RNA. To be viable, organisms possess a number of deoxypurine phosphohydrolases, which hydrolyze these purine derivatives, removing them from the active NTP and dNTP pools. Deamination of purine bases can result in accumulation of such nucleotides as ITP, dITP, XTP and dXTP. Defects in enzymes that control purine production and breakdown can severely alter a cell's DNA sequences, which may explain why people who carry certain genetic variants of purine metabolic enzymes have a higher risk for some types of cancer. Purine biosynthesis in the three domains of life Organisms in all three domains of life, eukaryotes, bacteria and archaea, are able to carry out de novo biosynthesis of purines. This ability reflects the essentiality of purines for life. The biochemical pathway of synthesis is very similar in eukaryotes and bacterial species, but is more variable among archaeal species. A nearly complete, or complete, set of genes required for purine biosynthesis was determined to be present in 58 of the 65 archaeal species studied. However, also identified were seven archaeal species with entirely, or nearly entirely, absent purine-encoding genes. Apparently the archaeal species unable to synthesize purines are able to acquire exogenous purines for growth, and are thus analogous to purine mutants of eukaryotes, e.g. purine mutants of the Ascomycete fungus Neurospora crassa, that also require exogenous purines for growth. Relationship with gout Higher levels of meat and seafood consumption are associated with an increased risk of gout, whereas a higher level of consumption of dairy products is associated with a decreased risk. Moderate intake of purine-rich vegetables or protein is not associated with an increased risk of gout. Similar results have been found for the risk of hyperuricemia. Laboratory synthesis In addition to in vivo synthesis of purines in purine metabolism, purine can also be created artificially. Purine (1) is obtained in good yield when formamide is heated in an open vessel at 170 °C for 28 hours. This remarkable reaction and others like it have been discussed in the context of the origin of life. Patented Aug. 20, 1968, the currently recognized method of industrial-scale production of adenine is a modified form of the formamide method. This method heats formamide at 120 °C in a sealed flask for 5 hours to form adenine. The yield is greatly increased by using phosphorus oxychloride (phosphoryl chloride) or phosphorus pentachloride as an acid catalyst together with sunlight or ultraviolet light. After the 5 hours have passed and the formamide-phosphorus oxychloride-adenine solution cools down, water is put into the flask containing the formamide and now-formed adenine. The water-formamide-adenine solution is then poured through a filtering column of activated charcoal. The water and formamide molecules, being small molecules, pass through the charcoal and into the waste flask; the large adenine molecules, however, attach or “adsorb” to the charcoal due to the van der Waals forces that act between the adenine and the carbon in the charcoal. Because charcoal has a large surface area, it is able to capture most molecules above a certain size (larger than water and formamide) that pass through it. To extract the adenine from the charcoal, ammonia gas dissolved in water (aqua ammonia) is poured onto the activated charcoal-adenine structure to liberate the adenine
1 and 4 positions) and pyridazine (nitrogen atoms at the 1 and 2 positions). In nucleic acids, three types of nucleobases are pyrimidine derivatives: cytosine (C), thymine (T), and uracil (U). Occurrence and history The pyrimidine ring system has wide occurrence in nature as substituted and ring-fused compounds and derivatives, including the nucleobases cytosine, thymine and uracil, thiamine (vitamin B1) and alloxan. It is also found in many synthetic compounds such as barbiturates and the HIV drug zidovudine. Although pyrimidine derivatives such as alloxan were known in the early 19th century, a laboratory synthesis of a pyrimidine was not carried out until 1879, when Grimaux reported the preparation of barbituric acid from urea and malonic acid in the presence of phosphorus oxychloride. The systematic study of pyrimidines began in 1884 with Pinner, who synthesized derivatives by condensing ethyl acetoacetate with amidines. Pinner first proposed the name “pyrimidin” in 1885. The parent compound was first prepared by Gabriel and Colman in 1900, by conversion of barbituric acid to 2,4,6-trichloropyrimidine followed by reduction using zinc dust in hot water. Nomenclature The nomenclature of pyrimidines is straightforward. However, like other heterocyclics, tautomeric hydroxyl groups yield complications since they exist primarily in the cyclic amide form. For example, 2-hydroxypyrimidine is more properly named 2-pyrimidone. A partial list of trivial names of various pyrimidines exists. Physical properties Physical properties are shown in the data box. A more extensive discussion, including spectra, can be found in Brown et al. Chemical properties Per the classification by Albert, six-membered heterocycles can be described as π-deficient. Substitution by electronegative groups or additional nitrogen atoms in the ring significantly increases the π-deficiency. These effects also decrease the basicity. As in pyridine, the π-electron density in pyrimidine is decreased, and to an even greater extent. Therefore, electrophilic aromatic substitution is more difficult while nucleophilic aromatic substitution is facilitated. An example of the last reaction type is the displacement of the amino group in 2-aminopyrimidine by chlorine and its reverse. Electron lone pair availability (basicity) is decreased compared to pyridine. Compared to pyridine, N-alkylation and N-oxidation are more difficult. The pKa value for protonated pyrimidine is 1.23 compared to 5.30 for pyridine. Protonation and other electrophilic additions will occur at only one nitrogen due to further deactivation by the second nitrogen. The 2-, 4-, and 6-positions on the pyrimidine ring are electron-deficient, analogous to those in pyridine and in nitro- and dinitrobenzene. The 5-position is less electron-deficient and substituents there are quite stable. However, electrophilic substitution is relatively facile at the 5-position, including nitration and halogenation.
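To put the quoted values in perspective, the gap of roughly four pKa units corresponds to about a ten-thousand-fold difference in basicity; this is a back-of-the-envelope comparison, not a figure from the source:

\Delta \mathrm{p}K_a = 5.30 - 1.23 = 4.07, \qquad \frac{K_b(\text{pyridine})}{K_b(\text{pyrimidine})} = 10^{4.07} \approx 1.2 \times 10^{4}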
heterocyclic ring systems, the synthesis of pyrimidine is not that common and is usually performed by removing functional groups from derivatives. Primary syntheses in quantity involving formamide have been reported. As a class, pyrimidines are typically synthesized by the principal synthesis involving cyclization of β-dicarbonyl compounds with N–C–N compounds. Typical reactions are those of the former with amidines to give 2-substituted pyrimidines, with urea to give 2-pyrimidinones, and with guanidines to give 2-aminopyrimidines. Pyrimidines can be prepared via the Biginelli reaction. Many other methods rely on condensation of carbonyls with diamines, for instance the synthesis of 2-thio-6-methyluracil from thiourea and ethyl acetoacetate or the synthesis of 4-methylpyrimidine from 4,4-dimethoxy-2-butanone and formamide. A novel method is by reaction of N-vinyl and N-aryl amides with carbonitriles under electrophilic activation of the amide with 2-chloropyridine and trifluoromethanesulfonic anhydride. Thymine Patented in 2013, the current method used for the manufacture of thymine begins by dissolving methyl methacrylate in methanol. The solution is maintained at a pH of 8.9–9.1 by addition of a base such as sodium hydroxide and at a temperature of 0–10 °C. 30% hydrogen peroxide is then added to act as an oxygen donor and form 2,3-epoxy-2-methyl methacrylate, and the solution is mixed for 2–20 hours under the maintained conditions. After this period, urea is added to the flask and the solution is heated past the boiling point of the methanol and refluxed for 1–3 hours (boiled and fed back into the solution). After this period, the thymine should have formed in the solution. The solution is then concentrated by removing excess methanol: the mixture is held at 65 °C (slightly above the boiling point of methanol), allowing the methanol to vaporize out of the solution instead of refluxing. The concentrated solution is then neutralized by adding hydrochloric acid, forming waste sodium chloride and the desired thymine crystals in the solution. The solution is temporarily warmed to re-dissolve the crystals, then passed through a reverse osmosis filter to remove the sodium chloride and isolate the solution containing the thymine. This solution is then air-dried to yield pure thymine crystals in the form of a white powder. Reactions Because of the decreased basicity compared to pyridine, electrophilic substitution of pyrimidine is less facile. Protonation or alkylation typically takes place at only one of the ring nitrogen atoms. Mono-N-oxidation occurs by reaction with peracids. Electrophilic C-substitution of pyrimidine occurs at the 5-position, the least electron-deficient. Nitration, nitrosation, azo coupling, halogenation, sulfonation, formylation, hydroxymethylation, and aminomethylation have been observed with substituted pyrimidines. Nucleophilic C-substitution should be facilitated at the 2-, 4-, and 6-positions but there are only a few examples. Amination and hydroxylation have been observed for substituted pyrimidines. Reactions with Grignard or alkyllithium reagents yield 4-alkyl- or 4-aryl-pyrimidines after aromatization. Free-radical attack has been observed for pyrimidine and photochemical reactions have been observed for substituted pyrimidines. Pyrimidine can be hydrogenated to give tetrahydropyrimidine.
Derivatives Nucleotides Three nucleobases found in nucleic acids, cytosine (C), thymine (T), and uracil (U), are pyrimidine derivatives. In DNA and RNA, these bases form hydrogen bonds with their complementary purines. Thus, in DNA, the purines adenine (A) and guanine (G) pair up with the pyrimidines thymine (T) and cytosine (C), respectively. In RNA, the complement of adenine (A) is uracil (U) instead of thymine (T), so the pairs that form are adenine:uracil and guanine:cytosine. Very rarely, thymine can appear in RNA, or uracil in DNA. In addition to the three major pyrimidine bases, some minor pyrimidine bases can also occur in nucleic acids. These minor pyrimidines are usually methylated versions of major ones and are
The PBM genre's two preeminent magazines of the period were Flagship and Paper Mayhem. Also in the mid-1980s, general gaming magazines began carrying articles on PBM and ran PBM advertisements, while the Origins Awards began a "Best PBM Game" category. PBM games up to the 1980s came from multiple sources: some were adapted from existing games and others were designed solely for postal play. In 1985, Pete Tamlyn stated that most popular games had already been attempted in postal play, noting that none had succeeded as well as Diplomacy. Tamlyn added that there was significant experimentation in adapting games to postal play at the time and that most games could be played by mail. These adapted games were typically run by a gamemaster using a fanzine to publish turn results. The 1980s were also noteworthy in that PBM games designed and published in this decade were written specifically for the genre versus adapted from other existing games. Thus they tended to be more complicated and gravitated toward requiring computer assistance. The 1990s brought changes to the PBM world. In the early 1990s, email became an option to transmit turn orders and results. These are called play-by-email (PBEM) games. Turn around time ranges for modern PBM games are wide enough that PBM magazine editors now use the term "turn-based games". Flagship stated in 2005 that "play-by-mail games are often called turn-based games now that most of them are played via the internet". In the early 1990s, the PBM industry still maintained some of the player momentum from the 1980s. For example, in 1993, Flagship listed 185 active play-by-mail games. And in 1993, the Journal of the PBM Gamer stated that "For the past several years, PBM gaming has increased in popularity." However, in 1994, David Webber, Paper Mayhem's editor in chief expressed concern about disappointing growth in the PBM community and a reduction in play by established gamers. At the same time, he noted that his analysis indicated that more PBM gamers were playing less, giving the example of an average drop from 5–6 games per player to 2–3 games, suggesting it could be due to financial reasons. In early 1997, David Webber stated that multiple PBM game moderators had noted a drop in players over the previous year. By the end of the 1990s, the number of PBM publications had also declined. Gaming Universal's final publication run ended in 1988. Paper Mayhem ceased publication unexpectedly in 1998 after Webber's death. Flagship also later ceased publication. The Internet affected the PBM world in various ways. Rick Loomis stated in 1999 that, "With the growth of the Internet, [PBM] seems to have shrunk and a lot of companies dropped out of the business in the last 4 or 5 years." Shannon Appelcline agreed, noting in 2014 that, "The advent of the Internet knocked most PBM publishers out of business." The Internet also enabled PBM to globalize between the 1990s and 2000s. Early PBM professional gaming typically occurred within single countries. In the 1990s, the largest PBM games were licensed globally, with "each country having its own licensee". By the 2000s, a few major PBM firms began operating globally, bringing about "The Globalisation of PBM" according to Sam Roads of Harlequin Games. By 2014 the PBM community had shrunk compared to previous decades. A single PBM magazine exists—Suspense and Decision—which began publication in November 2013. The PBM genre has also morphed from its original postal mail format with the onset of the digital age. 
In 2010, Carol Mulholland—the editor of Flagship—stated that "most turn-based games are now available by email and online". The online Suspense & Decision Games Index, as of June 2021, listed 72 active PBM, PBEM, and turn-based games. In a multiple-article examination of various online turn-based games in 2004 titled "Turning Digital", Colin Forbes concluded that "the number and diversity of these games has been enough to convince me that turn-based gaming is far from dead". Advantages and disadvantages of PBM gaming Judith Proctor noted that play-by-mail games have a number of advantages. These include (1) plenty of time—potentially days—to plan a move, (2) never lacking players to face who have "new tactics and ideas", (3) the ability to play an "incredibly complex" game against live opponents, (4) meeting diverse gamers from far-away locations, and (5) relatively low costs. In 2019, Rick McDowell, designer of Alamaze, compared PBM costs favorably with the high cost of board games at Barnes & Noble, with many of the latter going for about $70, and a top-rated game, Nemesis, costing $189. Andrew Greenberg pointed to the high number of players possible in a PBM game, comparing it to his own earlier failed attempt to host a live eleven-player Dungeons and Dragons game. Flagship noted in 2005 that "It's normal to play these ... games with international firms and a global player base. Games have been designed that can involve large numbers of players – much larger than can gather for face-to-face gaming." Finally, some PBM games can be played for years, if desired. Greenberg identified a number of drawbacks for play-by-mail games. He stated that the clearest was the cost, because most games require a setup cost and a fee per turn, and some games can become expensive. Another drawback is the lack of face-to-face interaction inherent in play-by-mail games. Finally, game complexity in some cases and occasional turn processing delays can be negatives in the genre. Description Jim Townsend identifies the two key figures in PBM games as the players and the moderators, the latter of which are companies that charge "turn fees" to players—the cost for each game turn. In 1993, Paper Mayhem—a magazine for play-by-mail gamers—described play-by-mail games as follows: PBM Games vary in the size of the games, turn around time, length of time a game lasts, and prices. An average PBM game has 10–20 players in it, but there are also games that have hundreds of players. Turn around time is the length of time it takes to get your turn back from a company. ... Some games never end. They can go on virtually forever or until you decide to drop. Many games have victory conditions that can be achieved within a year or two. Prices vary for the different PBM games, but the average price per turn is about $5.00. The earliest PBM games were played using the postal services of the respective countries. In 1990, the average turn-around time for a turn was 2–3 weeks. However, in the 1990s, email was introduced to PBM games. This was known as play-by-email (PBEM). Some games used email solely, while others, such as Hyborian War, used email as an option for a portion of turn transmittal, with postal service for the remainder. Other games use digital media or web applications to allow players to make turns at speeds faster than postal mail. Given these changes, the term "turn-based games" is now being used by some commentators. Mechanics After the initial setup of a PBM game, players begin submitting turn orders.
In general, players fill out an order sheet for a game and return it to the gaming company. The company processes the orders and sends back turn results to the players so they can make subsequent moves. R. Danard further separates a typical PBM turn into four parts. First, the company informs players on the results of the last turn. Next, players conduct diplomatic activities, if desired. Then, they send their next turns to the gamemaster (GM). Finally, the turns are processed and the cycle is repeated. This continues until the game or a player is done. Complexity Jim Townsend stated in a 1990 issue of White Wolf Magazine that the complexity of PBM games is, on average, much higher than that of other game types. He noted that PBM games at the extreme high end can have a thousand or more players as well as thousands of units to manage, while turn printouts can range from a simple one-page result to hundreds of pages (with three to seven as the average). According to John Kevin Loth, "Novices should appreciate that some games are best played by veterans." In 1986, he highlighted the complexity of Midguard with its 100-page instruction manual and 255 possible orders. Reviewer Jim Townsend asserted that Empyrean Challenge was "the most complex game system on Earth". Other games, like Galactic Prisoners, began simply and gradually increased in complexity. As of August 2021, Rick Loomis PBM Games had four difficulty levels: easy, moderate, hard, and difficult, with games such as Nuclear Destruction and Heroic Fantasy on the easy end and Battleplan—a military strategy game—rated as difficult. Diplomacy According to Paper Mayhem assistant editor Jim Townsend, "The most important aspect of PBM games is the diplomacy. If you don't communicate with the other players you will be labeled a 'loner', 'mute', or just plain 'dead meat'. You must talk with the others to survive". The editors of Paper Mayhem add that "The interaction with other players is what makes PBM enjoyable." Commentator Rob Chapman in a 1983 Flagship article echoed this advice, recommending that players get to know their opponents. He also recommended asking direct questions of opponents on their future intentions, as their responses, true or false, provide useful information. However, he advises players to be truthful in PBM diplomacy, as a reputation for honesty is useful in the long term. Chapman notes that "everything is negotiable" and advises players to "Keep your plans flexible, your options open – don't commit yourself, or your forces, to any long term strategy". Eric Stehle, owner and operator of Empire Games in 1997, stated that some games cannot be won alone and require diplomacy. He suggested considering the
(since 2005) the Philip K. Dick Trust. Named after science fiction writer Philip K. Dick, it has been awarded since 1983, the year after his death. It is awarded to the best original paperback published each year in the US. The award was founded by Thomas Disch with assistance from David G. Hartwell, Paul S. Williams, and Charles N. Brown. As of 2016, it is administered by Pat LoBrutto, John Silbersack, and Gordon
David G. Hartwell, and David Alexander Smith. Winners and nominees Winners are listed in bold. Authors of special citation entries are listed in italics. The year in the table below indicates the year the book was published; winners are announced the following year.