For an entangled composite system, knowledge of the quantum states of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. Equivalence between formulations There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. Symmetries and conservation laws The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time. This statement generalizes: mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. Examples Free particle The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: $H = \frac{1}{2m}P^2 = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$. The general solution of the Schrödinger equation is given by $\psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat\psi(k,0)\, e^{i(kx - \frac{\hbar k^2}{2m}t)}\, dk$, which is a superposition of all possible plane waves $e^{i(kx - \frac{\hbar k^2}{2m}t)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat\psi(k,0)$, which is the Fourier transform of the initial quantum state $\psi(x,0)$. It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet $\psi(x,0) = \frac{1}{\sqrt[4]{\pi a}}\, e^{-\frac{x^2}{2a}}$, which has Fourier transform, and therefore momentum distribution, $\hat\psi(k,0) = \sqrt[4]{\frac{a}{\pi}}\, e^{-\frac{a k^2}{2}}$. We see that as we make $a$ smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
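This trade-off can be checked numerically. The following is a minimal, illustrative Python/NumPy sketch (units chosen so that $\hbar = 1$; the helper name `spreads` and the grid parameters are arbitrary choices, not from the text) that samples the Gaussian packet for several values of $a$ and estimates both spreads via a discrete Fourier transform:

```python
import numpy as np

def spreads(a, n=4096, x_max=50.0):
    """Estimate position/momentum standard deviations for the
    Gaussian packet psi(x) = (pi*a)**-0.25 * exp(-x**2 / (2*a))."""
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    psi = (np.pi * a) ** -0.25 * np.exp(-x**2 / (2 * a))

    # Spread in position from |psi(x)|^2 (the mean is zero by symmetry).
    prob_x = np.abs(psi) ** 2 * dx
    dx_spread = np.sqrt(np.sum(x**2 * prob_x))

    # The discrete Fourier transform gives the momentum-space amplitude
    # (with hbar = 1, momentum p equals wavenumber k).
    k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    phi = np.fft.fftshift(np.fft.fft(psi))
    prob_k = np.abs(phi) ** 2
    prob_k /= prob_k.sum()
    dk_spread = np.sqrt(np.sum(k**2 * prob_k))
    return dx_spread, dk_spread

for a in (0.25, 1.0, 4.0):
    sx, sk = spreads(a)
    # Expect sx = sqrt(a/2), sk = 1/sqrt(2a), so the product is 1/2 = hbar/2.
    print(f"a={a}: dx={sx:.3f}, dk={sk:.3f}, product={sx*sk:.3f}")
```

For this packet the exact values are $\Delta x = \sqrt{a/2}$ and $\Delta k = 1/\sqrt{2a}$, so the product stays at the minimum value $\hbar/2$ allowed by the uncertainty principle, which the printed numbers reproduce up to discretization error.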
As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. Particle in a box The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written $-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi$. With the differential operator defined by $\hat{p}_x = -i\hbar\frac{d}{dx}$, the previous equation is evocative of the classic kinetic energy analogue $\frac{1}{2m}\hat{p}_x^2 = E$, with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are $\psi(x) = Ae^{ikx} + Be^{-ikx}$, where $E = \frac{\hbar^2 k^2}{2m}$, or, from Euler's formula, $\psi(x) = C\sin(kx) + D\cos(kx)$. The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$, $\psi(0) = 0 = C\sin(0) + D\cos(0) = D$, and $D = 0$. At $x = L$, $\psi(L) = 0 = C\sin(kL)$, in which $C$ cannot be zero, as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$: $k = \frac{n\pi}{L}$, $n = 1, 2, 3, \ldots$ This constraint on $k$ implies a constraint on the energy levels, yielding $E_n = \frac{\hbar^2\pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}$. (For example, an electron of mass $m \approx 9.11\times10^{-31}$ kg confined to a box of width $L = 1$ nm has ground-state energy $E_1 = h^2/8mL^2 \approx 6.0\times10^{-20}$ J, about 0.38 eV.) A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Harmonic oscillator As in the classical case, the potential for the quantum harmonic oscillator is given by $V(x) = \frac{1}{2}m\omega^2 x^2$. This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by $\psi_n(x) = \sqrt{\frac{1}{2^n n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right)$, where $H_n$ are the Hermite polynomials, and the corresponding energy levels are $E_n = \hbar\omega\left(n + \frac{1}{2}\right)$. This is another example illustrating the discretization of energy for bound states. Mach–Zehnder interferometer The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector $\psi \in \mathbb{C}^2$ that is a superposition of the "lower" path $\psi_l = \begin{pmatrix}1\\0\end{pmatrix}$ and the "upper" path $\psi_u = \begin{pmatrix}0\\1\end{pmatrix}$, that is, $\psi = \alpha\psi_l + \beta\psi_u$ for complex $\alpha, \beta$. In order to respect the postulate that $\langle\psi,\psi\rangle = 1$ we require that $|\alpha|^2 + |\beta|^2 = 1$. Both beam splitters are modelled as the unitary matrix $B = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}$, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of $\frac{1}{\sqrt{2}}$, or be reflected to the other path with a probability amplitude of $\frac{i}{\sqrt{2}}$. The phase shifter on the upper arm is modelled as the unitary matrix $P = \begin{pmatrix}1 & 0\\ 0 & e^{i\Delta\Phi}\end{pmatrix}$, which means that if the photon is on the "upper" path it will gain a relative phase of $\Delta\Phi$, and it will stay unchanged if it is on the lower path. A photon that enters the interferometer from the left will then be acted upon with a beam splitter $B$, a phase shifter $P$, and another beam splitter $B$, and so end up in the state $BPB\psi_l = ie^{i\Delta\Phi/2}\begin{pmatrix}-\sin(\Delta\Phi/2)\\ \cos(\Delta\Phi/2)\end{pmatrix}$, and the probabilities that it will be detected at the right or at the top are given respectively by $p(u) = |\langle\psi_u, BPB\psi_l\rangle|^2 = \cos^2\frac{\Delta\Phi}{2}$ and $p(l) = |\langle\psi_l, BPB\psi_l\rangle|^2 = \sin^2\frac{\Delta\Phi}{2}$. One can therefore use the Mach–Zehnder interferometer to estimate the phase shift $\Delta\Phi$ by estimating these probabilities.
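The final state and detection probabilities above follow from a few lines of linear algebra, so they are easy to verify directly. Here is a short, illustrative Python/NumPy sketch (the function name `mzi_probabilities` is an arbitrary choice) that propagates the photon state through $B$, $P$, and $B$:

```python
import numpy as np

def mzi_probabilities(delta_phi):
    """Send the 'lower' input state through beam splitter B, phase
    shifter P, and beam splitter B; return (p_lower, p_upper)."""
    B = np.array([[1, 1j],
                  [1j, 1]]) / np.sqrt(2)         # 50/50 beam splitter
    P = np.array([[1, 0],
                  [0, np.exp(1j * delta_phi)]])  # relative phase on the upper arm
    psi_l = np.array([1, 0], dtype=complex)      # photon enters on the lower path
    out = B @ P @ B @ psi_l                      # rightmost B acts first
    return np.abs(out) ** 2                      # Born rule

for phi in (0.0, np.pi / 2, np.pi):
    p_l, p_u = mzi_probabilities(phi)
    # Expect p_upper = cos^2(phi/2) and p_lower = sin^2(phi/2).
    print(f"phase={phi:.3f}: p_lower={p_l:.3f}, p_upper={p_u:.3f}")
```

Replacing the rightmost $B$ (the first beam splitter the photon meets) with the identity `np.eye(2)` reproduces the no-interference case discussed next: both probabilities become 1/2 regardless of the phase.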
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" path between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases there will no longer be any interference between the paths, and the probabilities are given by $p(u) = p(l) = \frac{1}{2}$, independently of the phase $\Delta\Phi$. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. Applications Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many respects modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductor devices such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Relation to other scientific theories Classical mechanics The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.
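As a standard textbook illustration of this recipe (a generic example, not one spelled out here), canonical quantization of a single particle in a potential promotes position and momentum to operators obeying the canonical commutation relation, carrying the classical Hamiltonian over to an operator acting on wave functions:

\[ H = \frac{p^2}{2m} + V(x) \quad\longrightarrow\quad \hat H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x), \qquad [\hat x, \hat p] = i\hbar. \]

The free particle and the harmonic oscillator treated above are exactly this procedure applied with $V(x) = 0$ and $V(x) = \frac{1}{2}m\omega^2 x^2$, respectively.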
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator. Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems. Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, except perhaps at temperatures approaching absolute zero, at which quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. Special relativity and electrodynamics Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential $V(r) = -\frac{e^2}{4\pi\varepsilon_0 r}$. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory) by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.
Relation to general relativity Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would not only combine the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10⁻³⁵ m, and so lengths shorter than the Planck length are not physically meaningful in LQG. Philosophical implications Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations remain popular in the 21st century.
Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to measure these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we don't observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. History Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.
During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units (the word "atom" deriving from the Greek for "uncuttable"), the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided
into "elements" whose size (E) would be proportional to their frequency (ν): $E = h\nu$, where $h$ is Planck's constant.
large-scale (often chintz) and small-scale (often calico) patterns. Some antique quilts made in North America have worn-out blankets or older quilts as the internal batting layer, quilted between new layers of fabric and thereby extending the usefulness of old material. During American pioneer days, foundation piecing became popular. Paper was cut into shapes and used as a pattern; each individual piece of cut fabric was basted around the paper pattern. Paper was a scarce commodity in the early American West, so women would save letters from home, postcards, newspaper clippings, and catalogs to use as patterns. The paper not only served as a pattern but also as an insulator. The paper found between the old quilts has become a primary source of information about pioneer life. Quilts made without any insulation or batting were referred to as summer quilts. They were not made for warmth, only to keep the chill off during cooler summer evenings. African-American quilts There is a long tradition of African-American quilting, beginning with quilts made by slaves, both for themselves and for the people who enslaved them. The style of these quilts was determined largely by time period and region, rather than race, and the documented slave-made quilts generally resemble those made by white women in their region. After 1865 and the end of slavery in the United States, African-Americans began to develop their own distinctive style of quilting. Harriet Powers, an African American woman born into slavery, made two famous "story quilts" and was one of the many African-American quilters who contributed to the development of quilting in the United States. The first nationwide recognition of African-American quilt-making came when the Gee's Bend quilting community was celebrated in an exhibition that opened in 2002 and traveled to many museums, including the Smithsonian. Gee's Bend is a small, isolated community of African-Americans in southern Alabama with a quilt-making tradition that goes back several generations and is characterized by pattern improvisation, multiple patterning, bright and contrasting colors, visual motion, and a lack of rules. The contributions made by Harriet Powers and other quilters of Gee's Bend, Alabama have been recognized by the US Postal Service with a series of stamps. The communal nature of the quilting process (and how it can bring together women of varied races and backgrounds) was honored in the series of stamps. Beginning with the children's story Sweet Clara and the Freedom Quilt (1989), a legend has developed that enslaved people used quilts as a means to share and transmit secret messages to escape slavery and travel the Underground Railroad. The consensus among historians is that there is no sound basis for this belief, and no documented mention of it appears among the thousands of slave narratives or other contemporary records. Amish quilts Another American group to develop a distinct style of quilting were the Amish. Typically, these quilts use only solid fabrics, are pieced from geometric shapes, and do not contain appliqué; construction is simple (corners are butted rather than mitered, for instance) and done entirely by hand. Amish quilters also tend to use simple patterns: Lancaster County Amish are known for their Diamond-in-a-Square and Bars patterns, while other communities use patterns such as Brick, Streak of Lightning, Chinese Coins, and Log Cabins, and midwestern communities are known for their repeating block patterns. Borders and color choice also vary by community.
For example, Lancaster quilts feature wide borders with lavish quilting, while Midwestern quilts feature narrower borders to balance the fancier piecing. Native American quilts Some Native Americans are thought to have learned quilting through observation of white settlers; others learned it from missionaries who taught quilting to Native American women along with other homemaking skills. Native American women quickly developed their own unique style, the Lone Star design (also called the Star of Bethlehem), a variation on Morning Star designs that had been featured on Native American clothing and other items for centuries. These quilts often featured floral appliqué framing the star design. Star quilts have become an important part of many Plains Indian ceremonies, replacing buffalo robes traditionally given away at births, marriages, tribal elections, and other ceremonies. Pictorial quilts, created with appliqué, were also common. Another distinctive style of Native American quilting is Seminole piecing, created by Seminoles living in the Florida Everglades. The style evolved out of a need for cloth (the closest town was often a week's journey away). Women would make strips by sewing the remnants of fabric rolls together, then sew these strips into larger pieces to make clothing. Eventually the style began to be used not just for clothing but for quilts as well. In 1900, with the introduction of sewing machines and readily available fabric in Seminole communities, the patterns became much more elaborate, and the style continues to be in use today, both by Seminole women and by others who have copied and adapted their designs and techniques. Hawaiian quilting "Hawaiian quilting was well established by the beginning of the twentieth century. Hawaiian women learned to quilt from the wives of missionaries from New England in the 1820s. Though they learned both pieced work and applique, by the 1870s they had adapted applique techniques to create a uniquely Hawaiian mode of expression. The classic Hawaiian quilt design is a large, bold, curvilinear appliqué pattern that covers much of the surface of the quilt, with the symmetrical design cut from only one piece of fabric." South Asian quilting There are two primary forms of quilting that originate in South Asia: Nakshi Kantha and Ralli. Nakshi Kantha quilts originated in India and are typically made of scraps and worn-out fabric stitched together with old sari threads using kantha embroidery stitches. "The layers of cloth were spread on the ground, held in place with weights at the edges, and sewn together with rows of large basting stitches. The cloth was then folded and worked on whenever there was time." The first recorded kantha are more than 500 years old. Ralli quilts are traditionally made in Pakistan, western India, and the surrounding area. They are made by every sector of society, including Hindu and Muslim women, women of different castes, and women from different towns, villages, or tribes, with the colors and designs varying among these groups. The name comes from ralanna, a word meaning to mix or connect. Quilt tops were designed and pieced by one woman using scraps of hand-dyed cotton. This cotton often comes from old dresses or shawls. Once pieced, the quilt top is placed on a reed mat with the other layers and sewn together using thick, colored thread in straight parallel lines by members of the designer's family and community.
East Asian quilting Quilting in Japan, until the 20th century, generally covered local bast fibers with more valuable cotton cloth. The rectangular nature of Japanese cloth articles encouraged rectangle-based patterns. Sashiko stitching has now also developed purely decorative forms. Swedish quilting Quilting originated in Sweden in the fifteenth century with heavily stitched and appliquéd quilts made for the very wealthy. These quilts, created from silk, wool, and felt, were intended to be both decorative and functional and were found in churches and in the homes of nobility. Imported cotton first appeared in Sweden in 1870, and began to appear in Swedish quilts soon after, along with scraps of wool, silk, and linen. As the availability of cotton increased and its price went down, quilting became widespread among all classes of Swedish society. Wealthier quilters used wool batting while others used linen scraps, rags, or paper mixed with animal hair. In general, these quilts were simple and narrow, made by both men and women. The biggest influence on Swedish quilting in this time period is thought to have come from America, as Swedish immigrants to the United States returned to their home country when conditions there improved. Art quilting During the late 20th century, art quilts became popular for their aesthetic and artistic qualities rather than for functionality, as they are displayed on a wall or table rather than being used on a bed. "It is believed that decorative quilting came to Europe and Asia during the Crusades (A.D. 1100–1300), a likely idea because textile arts were more developed in China and India than in the West." American artist Judy Chicago stated in a 1981 interview that if not for sexism in the visual arts, the art world, and broader society, quilting would be regarded as a form of high art. Modern quilting In the early 21st century, modern quilting became a more prominent area of quilting. Modern quilting follows a distinct aesthetic style which draws inspiration from modern style in architecture, art, and design while using traditional quilt-making techniques. Modern quilts are different from art quilts in that they are made to be used. Modern quilts are also influenced by the Quilters of Gee's Bend, Amish quilts, Nancy Crow, Denyse Schmidt, Gwen Marston, Yoshiko Jinzenji, Bill Kerr and Weeks Ringle. The Modern Quilt Guild has attempted to define modern quilting. The characteristics of a modern quilt may include: the use of bold colors and prints, high contrast and graphic areas of solid color, improvisational piecing, minimalism, expansive negative space, and alternate grid work. The Modern Quilt Guild, a non-profit corporation with 14,000 members in more than 200 member guilds in 39 countries, fosters modern quilting via local guilds, workshops, webinars, and QuiltCon, an annual modern quilting conference and convention. The founding Modern Quilt Guild formed in October 2009 in Los Angeles. QuiltCon features a quilt show with 400+ quilts, quilt vendors, lectures, and quilting workshops and classes. The first
The Modern Quilt Guild, a non-profit corporation with 14,000 members in more than 200 member guilds in 39 countries, fosters modern quilting via local guilds, workshops, webinars, and QuiltCon, an annual modern quilting conference and convention. The founding Modern Quilt Guild formed in October 2009 in Los Angeles. QuiltCon features a quilt show with 400+ quilts, quilt vendors, lectures, and quilting workshops and classes. The first QuiltCon was held February 21–24, 2013, in Austin, Texas. QuiltCon 2020 will be held in Austin, Texas, February 20–23, 2020, and will feature 400 juried modern quilts from quilters around the world. Quilting in fashion and design Unusual quilting designs have increasingly become popular as decorative textiles. As industrial sewing technology has become more precise and flexible, quilting using exotic fabrics and embroidery began to appear in home furnishings in the early 21st century. Quilt blocks The quilt block is traditionally a sub-unit composed of several pieces of fabric sewn together. The quilt blocks are repeated, or sometimes alternated with plain blocks, to form the overall design of a quilt. Barbara Brackman has documented over 4,000 different quilt block patterns from the early 1830s to the 1970s in the Encyclopedia of Pieced Quilt Patterns. Some of the simpler designs for quilt blocks include the Nine-Patch, Shoo Fly, Churn Dash, and the Prairie Queen. Most geometric quilt block designs fit into a "grid", which is the number of squares a pattern block is divided into. The five categories into which most square patterns fall are Four-Patch, Nine-Patch, Five-Patch, Seven-Patch, and Eight-Pointed Star. Each block can be subdivided into multiples: a Four-Patch can be constructed of 16 or 64 squares, for example. A simple Nine-Patch is made by sewing five patterned or dark pieces (patches) to four light square pieces in alternating order. These nine sewn squares make one block. The Shoo Fly varies from this Nine-Patch by dividing each of the four corner pieces into a light and a dark triangle. Another variation develops when one square piece is divided into two equal rectangles in the basic Nine-Patch design. The Churn Dash block combines the triangles and rectangles to expand the Nine-Patch. The Prairie Queen block combines two large-scale triangles in each corner section with a middle section using four squares. The center piece is one full-size square. Each of the nine sections has the same overall measurement, so the sections fit together. The number of patterns possible by subdividing Four-, Five-, Seven-, Nine-Patches and Eight-Pointed Stars, and by using triangles instead of squares in the small subdivisions, is almost endless. Quilting techniques Many types of quilting exist today. The two most widely used are hand quilting and machine quilting. Hand quilting is the process of using a needle and thread to sew a running stitch by hand across the entire area to be quilted. This binds the layers together. A quilting frame or hoop is often used to hold the piece being quilted off the quilter's lap. A quilter can make one stitch at a time by first driving the needle through the fabric from the right side, then pushing it back up through the material from the wrong side to complete the stitch; this is called a stab stitch. Another option is called a rocking stitch, where the quilter has one hand, usually with a finger wearing a thimble, on top of the quilt, while the other hand is located beneath the piece to push the needle back up.
A third option is called "loading the needle" and involves doing four or more stitches before pulling the needle through the cloth. Hand quilting is still practiced by the Amish and Mennonites within the United States and Canada, and is enjoying a resurgence worldwide. Machine quilting is the process of using a home sewing machine or a longarm machine to sew the layers together. With the home sewing machine, the layers are tacked together before quilting. This involves laying the top, batting, and backing out on a flat surface and either pinning (using large safety pins) or tacking the layers together. Longarm quilting involves placing the layers to be quilted on a special frame. The frame has bars on which the layers are rolled, keeping these together without the need for tacking or pinning. These frames are used with a professional sewing machine mounted on a platform. The platform rides along tracks so that the machine can be moved across the layers on the frame. A longarm machine is moved across the fabric. In contrast, the fabric is moved through a home sewing machine. Tying is another technique of fastening the three layers together. This is done primarily on quilts that are made to be used and are needed quickly. The process of tying the quilt is done with yarns or multiple strands of thread. Square knots are used to finish off the ties so that the quilt may be washed and used without fear of the knots coming undone. This technique is commonly called "tacking." In the Midwest, tacked bed covers are
Supported platforms Qt works on many different platforms; the following are officially supported: After Nokia opened the Qt source code to the community on Gitorious, various ports appeared. There are also some ports of Qt that may be available but are no longer supported. These platforms are listed in List of platforms supported by Qt, which also covers current community support for other lesser-known platforms, such as SailfishOS. Licensing Qt is available under the following free software licenses: GPL 2.0, GPL 3.0, LGPL 3.0 and LGPL 2.1 (with the Qt special exception). Note that some modules are available only under a GPL license, which means that applications which link to these modules need to comply with that license. In addition, Qt has always been available under a commercial license, such as the Qt Commercial License, which allows developing proprietary applications with no restrictions on licensing. Qt tools Qt comes with its own set of tools to ease cross-platform development, which can otherwise be cumbersome due to differing sets of development tools. Qt Creator is a cross-platform IDE for C++ and QML. Qt Designer's GUI layout/design functionality is integrated into the IDE, although Qt Designer can still be started as a standalone tool. In addition to Qt Creator, Qt provides qmake, a cross-platform build script generation tool that automates the generation of Makefiles for development projects across different platforms. There are other tools available in Qt, including the Qt Designer interface builder and the Qt Assistant help browser (both embedded in Qt Creator), the Qt Linguist translation tool, uic (user interface compiler), and moc (Meta-Object Compiler). History of Qt Early developments In the summer of 1990, Haavard Nord and Eirik Chambe-Eng (the original developers of Qt and the CEO and President, respectively, of Trolltech) were working together on a database application for ultrasound images written in C++ and running on Mac OS, Unix, and Microsoft Windows. They began development of "Qt" in 1991, three years before the company was incorporated as Quasar Technologies; the name was later changed to Troll Tech and then to Trolltech. The toolkit was called Qt because the letter Q looked appealing in Haavard's Emacs typeface, and "t" was inspired by Xt, the X toolkit. The first two versions of Qt had only two flavors: Qt/X11 for Unix and Qt/Windows for Windows. On 20 May 1995 Troll Tech publicly released Qt 0.90 for X11/Linux with the source code under the Qt Free Edition License. This license was viewed as not compliant with the free software definition by the Free Software Foundation because, while the source was available, it did not allow the redistribution of modified versions. Trolltech used this license until version 1.45. Controversy erupted around 1998 when it became clear that the K Desktop Environment was going to become one of the leading desktop environments for Linux. As it was based on Qt, many people in the free software movement worried that an essential piece of one of their major operating systems would be proprietary. The Windows platform was available only under a proprietary license, which meant free/open source applications written in Qt for X11 could not be ported to Windows without purchasing the proprietary edition.
Becoming free software–friendly With the release of version 2.0 of the toolkit in mid-1999, the license was changed to the Q Public License (QPL), a free software license, but one regarded by the Free Software Foundation as incompatible with the GPL. Compromises were sought between KDE and Trolltech whereby Qt would not be able to fall under a more restrictive license than the QPL, even if Trolltech was bought out or went bankrupt. This led to the creation of the KDE Free Qt Foundation, which guarantees that Qt would fall under a BSD-style license should no free/open source version of Qt be released within 12 months. In 2000, Qt/X11 2.2 was released under the GPL v2, ending all controversy regarding GPL compatibility. At the end of 2001, Trolltech released Qt 3.0, which added support for Mac OS X (now known as macOS). The Mac OS X support was available only in the proprietary license until June 2003, when Trolltech released Qt 3.2 with Mac OS X support available under the GPL. In 2002, members of the KDE on Cygwin project began porting the GPL-licensed Qt/X11 code base to Windows. This was in response to Trolltech's refusal to license Qt/Windows under the GPL on the grounds that Windows was not a free/open source software platform. The project achieved reasonable success although it never reached production quality. This was resolved when Trolltech released Qt 4.0 also for Windows under the GPL in June 2005. Qt 4 supported the same set of platforms in the free software/open source editions as in the proprietary edition, so it is possible, with Qt 4.0 and later releases, to create GPL-licensed free/open source applications using Qt on all supported platforms. The GPL v3 with special exception was later added as an additional licensing option. The GPL exception allows the final application to be licensed under various GPL-incompatible free software/open source licenses such as the Mozilla Public License 1.1. Acquisition by Nokia Nokia acquired Trolltech ASA on 17 June 2008 and changed the name first to Qt Software, then to Qt Development Frameworks. Nokia focused on turning Qt into the main development platform for its devices, including a port to the Symbian S60 platform. Version 1.0 of the Nokia Qt SDK was released on 23 June 2010. The source code was made available over Gitorious, a community-oriented git source code repository, with a goal of creating a broader community using and improving Qt. On 14 January 2009, Qt version 4.5 added another option, the LGPL, to make Qt more attractive for both non-GPL open source projects and closed applications. In February 2011, Nokia announced its decision to drop Symbian technologies and base their future smartphones on the Windows Phone platform instead (and since then support for that platform has also been dropped). One month later, Nokia announced the sale of Qt's commercial licensing and professional services to Digia, with the immediate goal of taking Qt support to Android, iOS and Windows 8 platforms, and to continue focusing on desktop and embedded development, although Nokia was to remain the main development force behind the framework at that time. Merging and demerging with Digia In March 2011, Nokia sold the commercial licensing part of Qt to Digia, creating Qt Commercial. In August 2012, Digia announced that it would acquire Qt from Nokia. The Qt team at Digia started their work in September 2012. They released Qt 5.0 within a month and newer versions every six months with new features and additional supported platforms.
In September 2014, Digia transferred the Qt business and copyrights to their wholly owned subsidiary, The Qt Company, which owns 25 brands related to Qt. In May 2016, Digia and Qt demerged completely into two independent companies. The Qt Project and open governance Qt 5 was officially released on 19 December 2012. This new version marked a major change in the platform, with hardware-accelerated graphics, QML and JavaScript playing a major role. The traditional C++-only QWidgets continued to be supported, but did not benefit from the performance improvements available through the new architecture. Qt 5 brings significant improvements to the speed and ease of developing user interfaces. Framework development of Qt 5 moved to open governance at qt-project.org, which made it possible for developers outside Digia to submit patches for review. Qt contributors Aside from The Qt Company, many organizations and individuals using Qt as their development platform participate in the open development of Qt via the Qt Project. One such Qt contributor is Klarälvdalens Datakonsult AB (KDAB), a Swedish Qt consulting company. KDAB is involved in many areas, including maintenance of several components. Together with RIM/BlackBerry, KDAB maintains the QNX and BlackBerry 10 ports of Qt. Another participant is Intel, contributing, for example, Wayland support. AudioCodes maintains IBM ClearCase support in Qt Creator. As a heavy user of Qt, the KDE project submits many patches and features from its developer library KDE Frameworks back to Qt. See also List of widget toolkits Android software development iOS SDK Wt (web toolkit)
scientific, engineering, mathematical, and computing fields. Moonlight Stream, an open-source implementation of Nvidia Shield game streaming MuseScore, an open-source, multiplatform notation software OBS, a libre cross-platform screencast software Orange data mining suite ParaView, an open-source cross-platform application for interactive, scientific visualization qBittorrent, a cross-platform free and open-source BitTorrent client QGIS geographic information system Qtractor, an audio multitrack recorder and editing software QuiteRSS Feed Reader Retroshare F2F communication platform Roblox Studio, a game creation tool used on the Roblox platform Scribus desktop publishing software Sibelius music composition and notation software Source 2 engine tools, for the 3D video game engine developed by Valve Stellarium, a planetarium program Subsurface, software for logging and planning scuba dives initially designed and developed by Linus Torvalds SuperCollider, an environment and programming language for real-time audio synthesis and algorithmic composition TeamViewer, a computer software package for remote control, desktop sharing, online meetings, web conferencing and file transfer between computers Telegram, a messaging client available for Windows, Mac and Linux VirtualBox OS virtualization software VLC media player Wireshark, a packet analyzer WPS Office XaoS, a real-time fractal zoomer XnView MP Organizations using Qt Qt is utilized by a wide range of companies and organizations, such as AMD Blizzard Entertainment BMW Crytek Daimler AG Electronic Arts European Space Agency DreamWorks LG Lucasfilm Microsoft Panasonic Philips Robert Bosch GmbH Samsung Siemens Tesla TomTom Volvo German Air Traffic Control HP Walt Disney Animation Studios Valve Qt software architecture Qt concepts Qt is built on these key concepts: Complete abstraction of the GUI When first released, Qt used its own paint engine and controls, emulating the look of the different platforms it ran on when it drew its widgets. This made porting work easier because very few classes in Qt really depended on the target platform; however, this occasionally led to slight discrepancies where the emulation was imperfect. Recent versions of Qt use the native style APIs of the different platforms, on platforms that have a native widget set, to query metrics and draw most controls, and do not suffer from such issues as often. On some platforms (such as MeeGo and KDE) Qt is the native API. Some other portable graphical toolkits have made different design decisions; for example, wxWidgets uses the toolkits of the target platform for its implementations. Signals and slots Signals and slots are a language construct introduced in Qt for communication between objects which makes it easy to implement the observer pattern while avoiding boilerplate code. The concept is that GUI widgets can send signals containing event information, which can be received by other controls using special functions known as slots. Metaobject compiler The metaobject compiler, termed moc, is a tool that is run on the sources of a Qt program. It interprets certain macros in the C++ code as annotations and uses them to generate additional C++ code with meta-information about the classes used in the program. This meta-information is used by Qt to provide programming features not available natively in C++: signals and slots, introspection, and asynchronous function calls.
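To make the signal/slot and moc description above concrete, here is a minimal, self-contained sketch using the Qt 5 functor-based connect syntax. The Counter class and its valueChanged signal are invented for this illustration; only QObject, QCoreApplication, qDebug and QObject::connect are actual Qt APIs.

#include <QCoreApplication>
#include <QDebug>
#include <QObject>

class Counter : public QObject {
    Q_OBJECT  // moc scans this macro and generates the meta-object code
public:
    void increment() {
        ++m_value;
        emit valueChanged(m_value);  // emitting a signal invokes all connected slots
    }
signals:
    void valueChanged(int newValue);  // declared here; the body is generated by moc
private:
    int m_value = 0;
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    Counter counter;
    // Any callable can serve as the receiving slot with this syntax.
    QObject::connect(&counter, &Counter::valueChanged,
                     [](int v) { qDebug() << "value is now" << v; });
    counter.increment();  // prints: value is now 1
    return 0;
}

#include "main.moc"  // moc output, assuming the class is defined in main.cpp

Built with qmake or CMake's AUTOMOC, moc generates the meta-object code implementing the declared signal, which is what lets the connection be made without hand-written observer-pattern boilerplate.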
Language bindings Qt can be used in several programming languages other than C++, such as Python, JavaScript, C# and Rust, via language bindings; many languages have bindings for Qt 5 and bindings for Qt 4. The Ring programming language includes Qt in the standard library. Qt modules Starting with Qt 4.0 the framework was split into individual modules. With Qt 5.0 the architecture was modularized even further. Qt is now split into essential and add-on modules. Qt essentials Qt add-ons Editions There are four editions of Qt available: Community, Indie Mobile, Professional and Enterprise. The Community version is under the open source licenses, while the Indie Mobile, Professional and Enterprise versions, which contain additional functionality and libraries (e.g. Enterprise Controls), are commercially sold by The Qt Company.
simply not needed by the original game. Most video games at the time had their game logic written in plain C/C++ and compiled into the executable, which is faster. However, this makes it harder for the community to create mods, and it makes the process of porting the game to another platform (such as Linux) more costly. Despite its advantages, the choice of implementing game logic using a custom scripting language and interpreter was dropped from the next-generation Quake II engine in favor of compiled C code, due to the overall inflexibility of QuakeC, the increasingly complex game logic, the performance to be gained by packaging game logic into a native dynamic link library, and the advantage of leveraging an already established programming language's community, tools, educational materials, and documentation. Distributing native code created new security and portability concerns. QuakeC bytecode afforded little opportunity for mischief, while native code has access to the whole machine. QuakeC bytecode also worked on any machine that could run Quake. Compiling to native code added an additional barrier to entry for novice mod developers, because they were being asked to set up a more complicated programming environment. The eventual solution, implemented by the Quake III engine, was to combine the advantages of the original QuakeC with the advantages of compiling C to native code. The lcc C compiler was extended to compile standard C into bytecode, which could be interpreted by a virtual machine in a manner similar to QuakeC. This addressed the security, portability, and tool-chain problems, but lost the performance advantage of native code. That was solved by further compiling the bytecode into native code at run time on supported machines. Modified compilers and language extensions A decompiler and a recompiler were released by Armin Rigo (called DEACC and REACC respectively). These programs were made through the process of reverse engineering, and were most likely published before the release of qcc. id Software released the source of qcc, their QuakeC compiler, along with the original QuakeC code in 1996. Modified versions soon sprang up, including Jonathan Roy's fastqcc and Ryan "FrikaC" Smith's FrikQCC. These added functionality, optimizations, and compiling speed boosts. In 1999, when id Software released the code from Quake's engine under the GNU General Public License (GPL), the workings of the bytecode interpreter were examined and new QuakeC compilers were released, such as J.P. Grossman's qccx and a new version of FrikQCC. These compilers took advantage of newly discovered features in a backwards-compatible way so that the bytecode could still be properly interpreted by unmodified Quake engines. New features included arrays, pointers, integers, for loops and string manipulation. With the Quake engine source code now able to be changed, further features were added to QuakeC in the form of new built-in functions. Features long yearned for by QuakeC coders finally reached realization as QuakeC now had file and string handling
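The virtual-machine approach described above can be illustrated with a toy stack-based bytecode interpreter. This is only a sketch: the opcode set, encoding, and names below are invented for the example and do not match id Software's actual QuakeC or Quake III bytecode formats.

#include <cstdint>
#include <cstdio>
#include <vector>

enum Op : uint8_t { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const std::vector<uint8_t>& code) {
    std::vector<int32_t> stack;
    size_t pc = 0;  // program counter into the bytecode array
    while (pc < code.size()) {
        switch (code[pc++]) {
        case OP_PUSH:  // the next byte is a small immediate operand
            stack.push_back(code[pc++]);
            break;
        case OP_ADD: {
            int32_t b = stack.back(); stack.pop_back();
            stack.back() += b;
            break;
        }
        case OP_MUL: {
            int32_t b = stack.back(); stack.pop_back();
            stack.back() *= b;
            break;
        }
        case OP_PRINT:
            std::printf("%d\n", stack.back());
            break;
        case OP_HALT:
            return;
        }
    }
}

int main() {
    // Computes (2 + 3) * 4 and prints 20 on any host architecture: the same
    // portability property that let QuakeC bytecode run on any machine Quake did.
    run({OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT});
}

Because the program is data interpreted by the loop rather than native machine code, it can only do what the opcodes expose, which is the sandboxing property the passage attributes to QuakeC bytecode.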
Eric Christian Olsen, actor, Bettendorf Daniel David Palmer, chiropractor, Davenport Oran Pape, state patrol officer, Davenport Laurdine Patrick, musician, East Moline Mary Beth Peil, actress and singer, Davenport Nat Pendleton, wrestler and actor, Davenport Roger Perry, actor, Davenport James Philbrook, actor, Davenport Scott Pose, MLB baseball player, Davenport Hiram Price, politician, Davenport Margo Price, country singer, Aledo Linnea Quigley, actress and producer, Davenport Ed Reimers, announcer, Moline Otto Frederick Rohwedder, engineer, inventor of sliced bread, Davenport Seth Rollins, WWE wrestler, Davenport Randy Shilts, journalist, Davenport Jim Skinner, CEO of McDonald's, Davenport Dean Stone, MLB pitcher, Silvis Tim Sylvia, MMA fighter, Bettendorf Jason Tanamor, author, Rock Island Julian Vandervelde, football player, Davenport Hynden Walch, actress, Davenport Henry Cantwell Wallace, U.S. Secretary of Agriculture, Rock Island Friedrich Weyerhäuser, lumber baron, Rock Island Dwight Deere Wiman, Broadway producer, Moline Bryan Woods, filmmaker, Bettendorf Education Colleges and universities Augustana College – A private, four-year liberal arts college in Rock Island. Bible Missionary Institute – A Bible college in Rock Island affiliated with the Bible Missionary Church. Black Hawk College – Community college in Moline, with a satellite campus in Kewanee, Illinois. Eastern Iowa Community College District – Consisting of campuses in Bettendorf, Clinton, and Muscatine. Bettendorf's campus is known as Scott Community College. Palmer Chiropractic College – Located in Davenport; the first chiropractic school in the world. Saint Ambrose University – A university in Davenport. Upper Iowa University – A satellite campus in Bettendorf. Western Illinois University-Quad Cities – The only public, four-year university in the Quad Cities region. The campus is located in Moline along the Mississippi riverfront at the former site of the John Deere Technical Site. Culture Since 1916, the region has supported the Quad City Symphony Orchestra, which presents a year-round schedule of concerts at the Adler Theatre in Davenport and Centennial Hall in Rock Island. The Handel Oratorio Society, dating to 1880, is the second-oldest organization of its kind in the nation and presents annual performances of "Messiah" along with another major work for choir and orchestra. The Augustana Choir, founded at Rock Island's Augustana College in 1934, is one of the nation's leading collegiate choruses. Major outdoor summer music festivals include the Bix Beiderbecke Memorial Jazz Festival, Mississippi Valley Blues Festival, and River Roots Live. The Quad Cities' three traditional community theaters – Playcrafters Barn Theatre (founded in 1920, comedies and dramas) and Quad City Music Guild (1948, musicals) in Moline, and Genesius Guild (1957, outdoor Shakespeare and Greek comedies and tragedies) in Rock Island – were joined in 1976 by Circa '21 Dinner Playhouse, a professional dinner theater in downtown Rock Island's historic Fort Theatre. Ballet is performed at Ballet Quad Cities. ComedySportz provides improv comedy. Bluebox Limited is a Bettendorf-based film production company, and many outside production companies have filmed movies in the Quad Cities in recent years. Historic buildings and sites listed on state registers and the National Register of Historic Places interpret the history of settlement and life in the area. Media The Quad Cities is the 151st-largest radio market in the United States.
It is ranked 97th by Nielsen Media Research for the 2008–09 television season, with 309,600 television households. The area is served by over 13 commercial radio stations, 8 non-commercial radio stations, 3 low-power FM radio stations, 8 TV stations and 3 daily newspapers. In 2012, the Mississippi Valley Fair, held in Davenport, served as the filming location for Rodney Atkins' music video "Just Wanna Rock N' Roll." Also in 2012, the PBS Frontline documentary Poor Kids was filmed in and around the Quad Cities, showing poverty from a child's perspective. Transportation Four interstate highways serve the Quad Cities: Interstate 80, Interstate 280, and Interstate 74 serve both states, while Interstate 88 serves only Illinois. United States highways include U.S. Route 6 and U.S. Route 67, which run through both Iowa and Illinois, while U.S. Route 61 serves just Iowa and U.S. Route 150 serves just Illinois. A total of five bridges accessible by automobiles connect Iowa with Illinois in the Quad Cities across the Mississippi River. The Fred Schwengel Memorial Bridge carries Interstate 80 and connects Le Claire, Iowa, with Rapids City, Illinois. Continuing downstream, the I-74 Bridge connects Bettendorf, Iowa, with Moline, Illinois, and is the busiest bridge, with an average of 70,400 cars a day. The Government Bridge connects downtown Davenport with the Rock Island Arsenal. Three bridges connect Davenport with Rock Island, Illinois: the Rock Island Centennial Bridge, the Crescent Rail Bridge, and the farthest-downstream bridge, the Sergeant John F. Baker, Jr. Bridge, which carries I-280. Several state highways also serve the area. Iowa Highway 22 is on Davenport's southwest side and runs west through the county, while Iowa Highway 130 runs along Northwest Boulevard on Davenport's north edge. Illinois Route 5 (John Deere Road) runs east from Rock Island until it meets Interstate 88. Illinois Route 92 runs along the Mississippi River, while Illinois Route 84 runs along the east side of Rock Island County. Illinois Route 192 connects Highway 92 with Illinois Route 94 near Taylor Ridge. The Chicago–Kansas City Expressway also serves the area along Interstates 74, 80, and 88. There are three transit operators in the Quad Cities, with limited interconnection between them. Rock Island County Metropolitan Mass Transit District (Quad Cities MetroLINK) serves the Illinois cities of Rock Island, Moline, East Moline, Milan, Silvis, Carbon Cliff, Hampton and Colona. It has 12 routes and a fleet of about 52 buses, and operates a river craft during summer months. In Iowa, Davenport Citibus has 10 fixed routes and operates 20 buses seven days a week, while Bettendorf Transit operates three routes, Monday through Saturday, with eight buses. Amtrak currently does not serve the Quad Cities; the closest station is in Galesburg, Illinois. In 2008, United States Senators Tom Harkin, Chuck Grassley, Dick Durbin, and Barack Obama sent a letter to Amtrak asking it to begin plans to bring rail service to the Quad Cities. In October 2010, $230 million in federal funding was announced that would bring Amtrak service to the Quad Cities, with a new line running from Moline to Chicago. Planners hoped to have the line completed in 2015 and to offer two daily round trips to Chicago. In December 2011, the federal government awarded $177 million in funding for the Amtrak connection. Budgetary and logistical issues have delayed the completion of all necessary track improvements, but the project is still in development.
The multi-modal Moline Q Station building was completed in early 2018, with the attached Westin Element hotel opening in February. When the full project is completed, it will establish passenger rail through the Quad Cities for the first time since the 1970s. The Quad Cities is served by the Quad Cities International Airport, Illinois' third-busiest airport, located in Moline. The airport is marketed as a regional alternative to the larger airports in Chicago. The smaller Davenport Municipal Airport is the home of the Quad City Air Show. Sports From 1907 to 1926, Rock Island was home to the NFL's Rock Island Independents. The franchise was a charter member of the National Football League (NFL) in 1920. The first NFL game ever was played by the Independents at Douglas Park in September 1920. Football legend Jim Thorpe was a member of the team in 1924. The Tri-Cities Blackhawks, named in honor of the Sauk war chief Black Hawk, were the next top-level professional sports franchise. The club played in the National Basketball League (NBL) from 1946 until its merger in 1949 with the Basketball Association of America to become the National Basketball Association (NBA). Hall of Famer Red Auerbach coached the Blackhawks during their first NBA season. After the 1950–51 basketball season, the team moved to Milwaukee, where they were named the Hawks. After a second move to St. Louis, the team is now the Atlanta Hawks. Professional basketball returned to the Quad Cities during the 1980s and 1990s with the Quad City Thunder of the Continental Basketball Association. The CBA served as the NBA's premier developmental league and produced many highly regarded NBA stars. From 1987 through the 1992–93 season, the Thunder played at Wharton Field House in Moline. Starting with the 1993–94 season, the team played at The MARK of the Quad Cities (now the TaxSlayer Center). After the CBA folded in 2001, the Thunder franchise ceased operations permanently. The TaxSlayer Center occasionally hosts NCAA Division I college basketball conference tournaments as well as NBA and NHL exhibitions. The Quad Cities has hosted minor league baseball teams since the Davenport Brown Stockings first played in 1878. The Rock Island Islanders and Moline Plowboys each fielded teams for many seasons. The Islanders began play in 1901 and played primarily at Douglas Park. The Plowboys were founded in 1914; their home was Browning Field. The Davenport franchise has been a member of the Midwest League since 1960 and has played at Modern Woodmen Park since 1931. Today, the Quad Cities River Bandits are the High-A affiliate of the Kansas City Royals. The PGA Tour makes an annual stop in the Quad Cities. The golf tournament is currently known as the John Deere Classic. It has drawn dozens of top PGA players over the years, including Tiger Woods, Vijay Singh, and Payne Stewart. The Quad Cities Marathon has been run annually in late September since 1998. Roughly 400–500 participants race through the four cities, beginning and ending in Moline. The race weekend also offers a half marathon and a 5K, as well as races for children. Kenyan Kiplangat Terer holds the men's record of 2:14:04, set in 2013. Ethiopian Hirut Guangul holds the women's record of 2:35:07, from her 2012 win. Sports teams Quad City River Bandits is a Class A Midwest League minor league baseball team in Davenport. Their home games are played at Modern Woodmen Park, formerly John O'Donnell Stadium.
The Davenport team has existed under a variety of names and Major League Baseball affiliations since 1901. The River Bandits are currently affiliated with the Kansas City Royals. The Quad City Mallards were an ice hockey team that played from 2009 to 2018, with home games held at the TaxSlayer Center in Moline. The new Mallards replaced the former Quad City Flames AHL team, which played from 2007 to 2009. The original Mallards played in the United Hockey League from 1995 to 2007. The Quad City Storm was launched for the 2018–19 season in the Southern Professional Hockey League. The Quad City Steamwheelers were an AF2 arena football franchise that also played at the TaxSlayer Center. The Steamwheelers won the league's title game, the ArenaCup, in 2000 and 2001. After the AF2 league folded following its 2009 season, the Steamwheelers also ceased operations. A new Quad City Steamwheelers organization launched for the 2018 season in Champions Indoor Football and then moved to the Indoor Football League for 2019. The Quad City Silverbacks were a professional mixed martial arts team competing in the now-defunct International Fight League. Home matches took place at the iWireless Center. Pat Miletich formed and based a mixed martial arts gym and fight team, Miletich Fighting Systems, in the Quad Cities. Miletich Fighting Systems is among MMA's first 'super-camps', and housed many of the consensus greatest fighters of the early 2000s, such as Jens Pulver, Matt Hughes, Robbie Lawler, Tim Sylvia, and Jeremy Horn, among others. The Quad City Riverhawks were a Premier Basketball League (PBL) team. They played home games at Wharton Field House in Moline during the 2008 season, and the franchise folded after that season. Previously, the Quad City Thunder were a CBA team playing from the late 1980s through 2000, first at Wharton and then at The Mark. The Quad City Raiders are a semi-professional minor league football team formed in 2011 to serve the Quad City area. The Raiders play in the MidStates Football League and have reached the semi-finals of the league playoffs each season. See also Mississippi Athletic Conference for Iowa high
As it grew, Davenport annexed the communities of Rockingham, Nahant, Probstei, East Davenport, Oakdale, Cawiezeel, Blackhawk, Mt. Joy, Green Tree, and others. Bettendorf annexed portions of Pleasant Valley in the 1970s. In 1987, Rock Island, Moline, East Moline, Milan, Carbon Cliff, Hampton, Coal Valley and Silvis considered a super-city merger, which would have seen the Illinois cities become the second-largest city in the state, but the proposal ultimately failed. Moline and East Moline considered a merger in 1997. That same year, Green Rock and Colona did merge. Geography The Quad Cities are located at the confluence of the Rock and Mississippi rivers, west of Chicago, and form the largest metropolitan area along the Mississippi River between Minneapolis–Saint Paul and the St. Louis metropolitan area. Interstate 80 crosses the Mississippi River here. The Quad Cities area is distinctive because the Mississippi River flows from east to west as it passes through the heart of the area; the Iowa cities of Davenport and Bettendorf are located due north of Rock Island and Moline, respectively. The Quad Cities area is also one in which the telephone companies cooperate on regional calls. Iowa and Illinois have different area codes (563 and 309, respectively), yet most calls originating and terminating within the core urban area are placed without long-distance charges by dialing just a 7-digit number. This helps the bi-state area promote itself as a single community, "joined by a river." The Quad Cities Metropolitan Area consists of three counties: Scott County in Iowa, and Rock Island County and Henry County in Illinois. The Quad City metro population is 382,268. The Quad Cities Metropolitan Area is also considered part of the Great Lakes Megalopolis. Demographics According to the 2010 United States Census, the metropolitan area grew to 471,551. As of the 2000 census, a total of 96,495 households and 60,535 families resided in the area. Race and ethnicity The racial makeup of the area is 90.6% White (410,861), 3.7% Black or African American (27,757), 0.6% American Indian and Alaska Native (1,255), 1.0% Asian (6,624), 0.03% Pacific Islander (156), and 2.0% from two or more races (11,929). 7.1% of the population is Hispanic or Latino of any race (37,070). The predominant ethnicities in the Quad Cities are of northern European descent, including German, Irish, and English, as well as Scandinavian (mostly Swedish and Norwegian) and Dutch. The largest minority group in the area is African Americans; Davenport's Black community, the third-largest in the state of Iowa, dates back to the 1830s, when Iowa was a free territory. Many of the city's African-American residents have roots in the Southern and border states of the U.S., including Mississippi, Arkansas, Alabama, and Missouri. The most significant Asian-American populations are South Asian and Vietnamese American. Religion Christianity is the most widely practiced religion in the area, though the distribution of Christian groups differs between the two states. In Davenport and Bettendorf, Catholics make up an 18.5% plurality, but Mainline Protestants (15.1%) and Evangelical Protestants (11.6%) form large minorities as well; Black Protestant churches account for 1.2% on the Iowa side. On the Illinois side, among Rock Island, Moline, and East Moline, Catholicism is less prevalent at 12.4%, with Evangelicals at 12.5% and Mainline Protestants at 11.0%.
The Jewish population is about 500–600, down from about 1,800–2,000 in the 1950s and 1960s. Landmarks The business Antique Archaeology, featured on the History Channel show American Pickers, is located in LeClaire. Brady Street Stadium is a major high-school sports venue along Davenport's Brady Street (U.S. 61). The Col Ballroom is a small arena for music concerts in Davenport. The Davenport Skybridge. The Figge Art Museum in Davenport, formerly the Davenport Museum of Art, was designed by British architect David Chipperfield and opened in 2005. Its holdings include extensive collections of Haitian, colonial Mexican and Midwestern art, particularly pieces by Thomas Hart Benton, Marvin Cone and Grant Wood, and personal effects from Wood's estate. The Fred Schwengel Memorial Bridge is a four-lane steel-girder bridge on Interstate 80, crossing the Mississippi River to connect LeClaire and Rapids City; it opened in 1966. The Government Bridge is a double-decked bridge adjacent to Lock and Dam 15, carrying motor and rail traffic between Arsenal Island and Davenport. The 1896 truss bridge, about 1,950 feet long, includes a 360-degree swing span over the twin locks, and connects to the Illinois side of the river via the Rock Island Viaduct. The Iowa 80 Truck Stop, the world's largest truck stop, is along Interstate 80 near Walcott, Iowa, west of Davenport. The Interstate 74 Bridge, formerly known as the "Iowa-Illinois Memorial Bridge", connects Bettendorf and Moline. The twin suspension spans across the Mississippi River were built in 1935 and 1959 and adapted to carry Interstate 74 in the early 1970s. The twinned towers are a symbol of the two-state Quad Cities community. The bridge is set to be replaced by a new eight-lane span. The John Deere Pavilion is a small museum and showcase for John Deere equipment, built adjacent to the John Deere Commons in the 1990s in downtown Moline. The John Deere World Headquarters, designed by Eero Saarinen, was completed in 1963 in Moline. The John Looney Mansion, designed and built in 1897 for the attorney, publisher and gangster John Looney, still stands off 20th Street and 17th Avenue in Rock Island. Lock and Dam No. 15 is a 1,200-foot roller dam with twin locks across the Mississippi River between Arsenal Island and Davenport. The roller dam, billed as the longest of its type, maintains a pool upstream that allows river traffic to pass through the once notorious Rock Island Rapids. The Mississippi Valley Fairgrounds is a fair and exposition venue in Davenport. Modern Woodmen Park, formerly John O'Donnell Stadium, is the home of the Kansas City Royals' High-A affiliate, the Quad Cities River Bandits, on the Davenport riverfront. With the lights of Rock Island across the Mississippi and the Centennial Bridge looming just beyond the right-field fence, the park was named by USA Today as one of 10 great places for a baseball pilgrimage. The ballpark added a 110-foot Ferris wheel before the start of the 2014 season. Old Main, completed in 1888, is the oldest building on the campus of Augustana College. Located on a bluff overlooking the Mississippi River, its iconic and newly renovated dome has been lighted since October 2011. The Putnam Museum is in Davenport, and the Quad City Botanical Center is in Rock Island. The Quad Cities Waterfront Convention Center is located in Bettendorf. The RiverCenter/Adler Theatre is a convention and performing-arts complex in Davenport. The 2,400-seat Adler is the former RKO Orpheum Theater, which opened in 1931, designed by A.S.
Graven of Chicago, whose projects included the Drake Hotel in Chicago and the Paramount Theater in New York City. The theater was extensively renovated and expanded in 1984–86 and 2005. The River Music Experience is a performance, education and music-history venue in the Redstone Building, the former Petersen Harned Von Maur department store. The Rock Island Arsenal has manufactured military equipment and ordnance since the 1880s and is now the largest government-owned weapons manufacturing arsenal in the United States. The arsenal is located on Arsenal Island (formerly known as Rock Island) in the Mississippi River between Davenport, Iowa, and Rock Island, Illinois. Fort Armstrong was built there in 1816. During the Civil War, the island held a Union prison camp for Confederate soldiers. The Federal-style home of Colonel George Davenport, built in 1833–34 and the oldest extant building in the Quad Cities, is on the north bank of the island. The Rock Island Centennial Bridge over the Mississippi River between downtown Davenport and Rock Island was completed in 1940 to commemorate Rock Island's 100th anniversary. The five arches of the 3,853-foot through-arch bridge are often used as a symbol of the Quad Cities. The Rock Island County Fairgrounds in East Moline is also the site of the Quad City Speedway. The Rock Island Auction Company was featured on the Discovery Channel show Ready, Aim, Sold! The TaxSlayer Center is an 11,000-seat arena in Moline (formerly The Mark of the Quad Cities and the iWireless Center). Vander Veer Botanical Park is a 33-acre (130,000 m2) botanical garden in the Vander Veer Park Historic District of Davenport, Iowa. It is believed to be one of the first botanical parks west of the Mississippi River. The Quarter, a site in East Moline alongside the Mississippi River featuring shops, restaurants, condominiums, boat docks, sports and interpretive centers, and a working lighthouse, is currently under development. The Chicago, Milwaukee, St. Paul and Pacific Freight House, referred to locally as the "Freight House", is an entertainment venue. The TBK Bank Sports Complex, also known as the BettPlex, is a sports and entertainment complex containing eight full-size volleyball and basketball courts, four indoor and five outdoor sand volleyball courts, and 10 lighted outdoor baseball and softball fields. The $45 million facility was created to host weekend sporting tournaments in the Quad Cities. Noteworthy companies Arconic Cobham plc Deere & Company (also known as/branded: John Deere) Genesis Health System Group O Guardian Industries Happy Joe's KONE, Inc (formerly Montgomery Elevator) Lee Enterprises Lewis Machine and Tool Company Modern Woodmen of America Nestlé Purina PetCare QCR Holdings Sears Seating (also known as Sears Manufacturing) Von Maur Whitey's Ice Cream Top employers According to the Quad Cities website, the top employers in the Quad Cities area are: Notable people Eddie Albert, actor, Rock Island Ken Anderson, football player and coach, Rock Island Pat Angerer, football player, Bettendorf Matthew Ashford, actor, Davenport Tavian Banks, football player, Bettendorf Bonnie Bartlett, actress, Moline Scott Beck, filmmaker, Bettendorf Bix Beiderbecke, jazz musician, Davenport Louis Bellson, drummer, Moline Vincent Hugo Bendix, inventor and industrialist, Moline Ken Berry, actor, Moline Joseph W. Bettendorf, industrialist, Bettendorf (Gilbert) William P.
Bettendorf, industrialist, Bettendorf (Gilbert) Chief Black Hawk, band leader and warrior of the Sauk Native American tribe Isabel Bloom, artist, Davenport Lisa Bluder, basketball coach, Marion Suzy Bogguss, country singer, Aledo Ken Bowman, football player, Milan Lara Flynn Boyle, actress, Davenport Ambrose Burke, priest and college president, Davenport Mike Butcher, pitcher and coach, Davenport Branden Campbell, bassist for the Neon Trees, Davenport Louise Carver, actress, Davenport Samuel Franklin Cody, aviator, Davenport William F. "Buffalo Bill" Cody, pioneer, LeClaire Danielle Colby, reality star on American Pickers, Davenport/LeClaire Jude Cole, musician, Carbon Cliff Martin Cone, priest and college president, Davenport Ed Conroy, basketball coach, Davenport George Cram Cook, author, Davenport Roger Craig, football player, Davenport Doris Davenport, actress, Moline Colonel George Davenport, pioneer, US Army officer Dana Davis, actress, Davenport Ricky Davis, basketball player, Davenport John Deere, inventor, Moline Frederick Denkmann, lumber baron, Rock Island Justin Diercks, racecar driver, Davenport Acie Earl, basketball player, Moline Eugene Burton Ely, aviation pioneer, Davenport Embassy, music producer, Moline Bill Fitch, NBA basketball player and coach, Davenport John Flannagan, priest and college president, Davenport Jack Fleck, golfer, 1955 U.S. Open champion, Bettendorf Joe Frisco, vaudeville performer, Davenport John Getz, actor, Davenport Susan Glaspell, writer, Davenport Ethan Happ, Big Ten basketball player, Milan Warren Hearnes, governor of Missouri, Moline Anne Marie Howard, actress, Davenport Austin Howard, football player, Davenport Jim Jensen, NFL running back, Davenport Jesse Johnson (musician), The Time, Rock Island Mark Johnson, Olympic wrestler, Rock Island James Jones, football player, Davenport Gail Karp, cantor of the Reform Jewish synagogue, Davenport Hazel Keener, actress, Bettendorf and Davenport Madison Keys, tennis player, Rock Island Josh Kroeger, athlete, Davenport Steve Kuberski, basketball player, Moline Perry Lafferty, producer, Davenport Elmer Layden, athlete and coach, Davenport Jim Leach, politician, Davenport Johnny Lujack, quarterback, 1947 Heisman Trophy winner, Bettendorf Sue Lyon, actress, Davenport Helen Mack, actress, Rock Island Cletus Madsen, priest and college president, Davenport Stuart Margolin, actor and director, Davenport Elisabeth Maurus, musician, Rock Island Carl Meinberg, priest and college president, Davenport Sebastian Menke, priest and college president, Davenport Julia Michaels, musician, Davenport Pat Miletich, MMA fighter, Bettendorf Marvin Mottet, priest, Davenport Don Nelson, NBA basketball player and coach, Rock Island Michael Nunn, boxer, Davenport Spike O'Dell, radio personality, East Moline Gerald Francis O'Keefe, priest, Davenport Gene Oliver, MLB player, Rock Island
scanning probe microscopy. Quantum chemistry studies the ground state of individual atoms and molecules, together with the excited states and the transition states that occur during chemical reactions. In their calculations, quantum chemical studies also use semi-empirical and other methods based on quantum mechanical principles, and deal with time-dependent problems. Many quantum chemical studies assume the nuclei are at rest (the Born–Oppenheimer approximation). Many calculations involve iterative methods, such as self-consistent field methods. Major goals of quantum chemistry include increasing the accuracy of the results for small molecular systems, and increasing the size of large molecules that can be processed, which is limited by scaling considerations: the computation time increases as a power of the number of atoms. History Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Fock, to cite a few. The history of quantum chemistry also runs through the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy-radiating atomic system can theoretically be divided into a number of discrete energy elements ε, each proportional to the frequency ν with which it radiates energy, the proportionality factor being a numerical value called Planck's constant. Then, in 1905, to explain the photoelectric effect (first reported in 1839), i.e., that shining light on certain materials can function to eject electrons
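For reference, Planck's 1900 hypothesis described above is usually written compactly as

    \varepsilon = h\nu ,

where each discrete energy element \varepsilon is proportional to the radiated frequency \nu, with the proportionality constant h (Planck's constant, approximately 6.626 × 10^-34 J·s). The notation here is the standard one and is supplied for illustration rather than quoted from this article.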
hydrogen molecular ion have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved exactly, and so approximate solutions must be sought. Valence bond Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927. Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the valence-bond (VB) [or Heitler–London–Slater–Pauling (HLSP)] method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and the method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance. Molecular orbital An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach, or molecular orbital (MO) method, is less intuitive to chemists, but has turned out to be more capable of predicting spectroscopic properties than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods. Density functional theory The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of the electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern-day DFT uses the Kohn–Sham method, where the density functional is split into four terms: the Kohn–Sham kinetic energy, an external potential, and the exchange and correlation energies.
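As a sketch of the decomposition just described (one common convention, not the only one: here the classical Coulomb repulsion of the density, the Hartree energy E_H, is written explicitly and exchange and correlation are grouped into E_xc), the Kohn–Sham total-energy functional reads

$$ E[\rho] \;=\; T_s[\rho] \;+\; \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}^3 r \;+\; E_{\mathrm{H}}[\rho] \;+\; E_{\mathrm{xc}}[\rho], $$

where T_s is the kinetic energy of the auxiliary non-interacting Kohn–Sham system and v_ext is the external potential due to the nuclei; in practice the exchange-correlation term E_xc must be approximated.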
Greek The stress accents, indicated in red, are produced by pressing that key (or shifted key) followed by an appropriate vowel. Use of the "AltGr" key may produce the characters shown in blue. German Germany, Austria, Switzerland, Liechtenstein, and Luxembourg use QWERTZ layouts, where the letter Z is to the right of T. Icelandic The Icelandic keyboard layout is different from the standard QWERTY keyboard because the Icelandic alphabet has some special letters, most of which it shares with the other Nordic countries: Þ/þ, Ð/ð, Æ/æ, and Ö/ö. (Æ/æ also occurs in Norwegian, Danish and Faroese, Ð/ð in Faroese, and Ö/ö in Swedish, Finnish and Estonian. In Norwegian, Ö/ö can be substituted for Ø/ø, which is the same sound/letter and is widely understood.) The letters Á/á, Ý/ý, Ú/ú, Í/í, and É/é are produced by first pressing the dead key and then the corresponding letter. The Nordic letters Å/å and Ä/ä can be produced by first pressing , located below the key, and (for ¨), which also works for the non-Nordic ÿ, Ü/ü, Ï/ï, and Ë/ë. These letters are not used natively in Icelandic, but may have been implemented for ease of communication in other Nordic languages. Additional diacritics may be found behind the key: for ˋ (grave accent) and for ˆ (circumflex). Irish Microsoft Windows includes an Irish layout which supports acute accents with for the Irish language and grave accents with the dead key for Scottish Gaelic. The other Insular Celtic languages have their own layouts. The UK or UK-Extended layout is also frequently used. Italian Braces (right above square brackets and shown in purple) are given with both AltGr and Shift pressed. The tilde (~) and backquote (`) characters are not present on the Italian keyboard layout (with Linux, they are available by pressing ++, and ++; Windows might not recognise these keybindings). When using Microsoft Windows, the standard Italian keyboard layout does not allow one to write 100% correct Italian, since it lacks the capital accented vowels, in particular the È key. The common workaround is writing E' (E followed by an apostrophe) instead, or relying on the auto-correction feature of several word processors when available. It is possible to obtain the È symbol in MS Windows by typing + . Mac users, however, can write the correct accented character by pressing + + or, in the usual Mac way, by pressing the correct key for the accent (in this case + ) and subsequently pressing the wanted letter (in this case + ). Linux users can also write it by pressing the key with enabled. There is an alternate layout, which differs only in the disposition of characters accessible through , and includes the tilde and the curly brackets. It is commonly used on IBM keyboards. Italian typewriters often have the QZERTY layout instead. The Italian-speaking part of Switzerland uses the QWERTZ keyboard. Latvian Although rarely used, a keyboard layout specifically designed for the Latvian language, called ŪGJRMV, exists. The Latvian QWERTY keyboard layout is most commonly used; its layout is the same as the Latin one, but with a dead key that allows entering special characters (āčēģīķļņõŗšūž). The most common dead key is the apostrophe ('), followed by AltGr (the Windows default for the Latvian layout). Some prefer using the tick (`). Lithuanian Where standard QWERTY has the number row, Lithuanian QWERTY has Ą, Č, Ę, Ė, Į, Š, Ų, Ū, and Ž in place of 1, 2, 3, 4, 5, 6, 7, 8, and =. 
The numbers of the number row can still be typed in combination with the key. Aside from these changes, the keyboard is standard QWERTY. Besides QWERTY, the ĄŽERTY layout, without the adjustment of the number row, is also used. Maltese The Maltese language uses Unicode (UTF-8) to display the Maltese diacritics: ċ Ċ; ġ Ġ; ħ Ħ; ż Ż (together with à À; è È; ì Ì; ò Ò; ù Ù). There are two standard keyboard layouts for Maltese, according to "MSA 100:2002 Maltese Keyboard Standard": one of 47 keys and one of 48 keys. The 48-key layout is the most popular. Norwegian The Norwegian languages use the same letters as Danish, but the Norwegian keyboard differs from the Danish layout regarding the placement of the , and (backslash) keys. On the Danish keyboard, the and are swapped. The Swedish keyboard is also similar to the Norwegian layout, but and are replaced with and . On some systems, the Norwegian keyboard may allow typing Ö/ö and Ä/ä by holding the or key while striking and , respectively. There is also an alternative keyboard layout called Norwegian with Sámi, which allows for easier input of the characters required to write various Sámi languages. All the Sámi characters are accessed through the key. On Macintosh computers, the Norwegian and Norwegian extended keyboard layouts have a slightly different placement for some of the symbols obtained with the help of the or keys. Notably, the $ sign is accessed with and ¢ with . Furthermore, the frequently used @ is placed between and . Polish Most typewriters use a QWERTZ keyboard with Polish letters (with diacritical marks) accessed directly (officially approved as the "Typist's keyboard", , Polish Standard PN-87), which is mainly ignored in Poland as impractical (custom-made keyboards, e.g., those in the public sector, as well as some Apple computers, present an exception to this paradigm); the "Polish programmer's" () layout has become the de facto standard, used on virtually all computers sold on the Polish market. Most computer keyboards in Poland are laid out according to the standard US visual and functional layout. Polish diacritics are accessed by using the AltGr key with the corresponding similar letter from the base Latin alphabet. Normal capitalization rules apply with respect to the Shift and Caps Lock keys. For example, to enter "Ź", one can type Shift+AltGr+X with Caps Lock off, or turn on Caps Lock and type AltGr+X. Both ANSI and ISO mechanical layouts are common sights, and even some non-standard mechanical layouts are in use. ANSI is often preferred, as the additional key provides no additional function, at least in Microsoft Windows where it duplicates the backslash key, while taking space from the Shift key. Many keyboards do not label AltGr as such, leaving the Alt marking as in the US layout; the right Alt key nevertheless functions as AltGr in this layout, causing possible confusion when keyboard shortcuts with the Alt key are required (these usually work only with the left Alt) and causing the key to be commonly referred to as right Alt (). However, keyboards with the AltGr marking are available, and it is also officially used by Microsoft when depicting the layout. Also, on MS Windows, the tilde character "~" (Shift+`) acts as a dead key to type Polish letters (with diacritical marks); thus, to obtain an "Ł", one may press Shift+` followed by L. The tilde character itself is obtained with Shift+` followed by space. 
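As a rough illustration of how a dead key of this kind works (a minimal sketch, not any operating system's actual implementation; the composition table below is hypothetical and covers only a few letters, following the Shift+` then L example above):

```python
# Minimal sketch of dead-key composition: the dead key emits nothing by
# itself, and the next key press is looked up in a composition table.
# The table below is illustrative only, not a real OS keymap.
DEAD_TILDE = {
    "a": "ą", "e": "ę", "l": "ł", "n": "ń", "o": "ó",
    "A": "Ą", "E": "Ę", "L": "Ł", "N": "Ń", "O": "Ó",
    " ": "~",  # dead key followed by space yields the bare mark
}

def compose(pending_dead_key: bool, key: str) -> str:
    """Return the character produced by a key press, honouring a pending
    dead key; falls back to the plain key if no combination is defined."""
    if pending_dead_key:
        return DEAD_TILDE.get(key, key)
    return key

print(compose(True, "L"))   # 'Ł', as in the Shift+` then L example
print(compose(True, " "))   # '~'
```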
In Linux-based systems, the euro symbol is typically mapped to Alt+5 instead of Alt+U, the tilde acts as a normal key, and several accented letters from other European languages are accessible through combinations with the left Alt. Polish letters are also accessible by using the compose key. Software keyboards on touchscreen devices usually make the Polish diacritics available as one of the alternatives which show up after long-pressing the corresponding Latin letter. However, modern predictive text and autocorrection algorithms largely mitigate the need to type them directly on such devices. Since 2021 there has also been an unofficial, expanded Polish keyboard layout, based on the layout of the Polish 1980s Mazovia computers and greatly expanded to cover all Latin diacritical signs, Greek letters, mathematical signs, IPA symbols, typographical signs and symbols, and the "zł" sign for the Polish currency, similar to the current German expanded layout E1; it is available in two versions, QWERTZ and QWERTY. Portuguese Brazil The Brazilian computer keyboard layout is specified in the ABNT NBR 10346 variant 2 (alphanumeric portion) and 10347 (numeric portion) standards. Essentially, the Brazilian keyboard contains dead keys for five variants of diacritics in use in the language; the letter Ç, the only application of the cedilla in Portuguese, has its own key. In some keyboard layouts the + combination produces the ₢ character (Unicode U+20A2), the symbol for the old currency cruzeiro, a symbol that is not used in practice (the common abbreviation in the eighties and nineties used to be Cr$). The cent sign ¢ is accessible via +, but is not commonly used for the centavo, the subunit of previous currencies as well as the current real, which itself is represented by R$. The euro sign € is not standardized in this layout. The masculine and feminine ordinals ª and º are accessible via combinations. The section sign § (Unicode U+00A7), in Portuguese called parágrafo, is nowadays practically only used to denote sections of laws. Variant 2 of the Brazilian keyboard, the only one which gained general acceptance (MS Windows treats both variants as the same layout), has a unique mechanical layout, combining some features of the ISO 9995-3 and the JIS keyboards in order to fit 12 keys between the left and right Shift (compared to the American standard of 10 and the international standard of 11). Its modern, IBM PS/2-based variations are thus known as 107-key keyboards; the original PS/2 variation had 104 keys. Variant 1, never widely adopted, was based on ISO 9995-2 keyboards. To make this layout usable with keyboards having only 11 keys in the last row, the rightmost key (/?°) has its functions replicated across the +, +, and + combinations. Portugal Essentially, the Portuguese keyboard contains dead keys for five variants of diacritics; the letter Ç, the only application of the cedilla in Portuguese, has its own key, and there are also a dedicated key for the ordinal indicators and a dedicated key for quotation marks. The + combination for producing the euro sign € (Unicode U+20AC) has become standard. On some QWERTY keyboards the key labels are translated, but the majority are labelled in English. During the 20th century, a different keyboard layout, HCESAR, was in widespread use in Portugal. Romanian (in Romania and Moldova) The current Romanian National Standard SR 13392:2004 establishes two layouts for Romanian keyboards: a "primary" one and a "secondary" one. 
The "primary" layout is intended for traditional users who have learned how to type with older, Microsoft-style implementations of the Romanian keyboard. The "secondary" layout is mainly used by programmers as it does not contradict the physical arrangement of keys on a US-style keyboard. The "secondary" arrangement is used as the default Romanian layout by Linux distributions, as defined in the "X Keyboard Configuration Database". There are four Romanian-specific characters that are incorrectly implemented in versions of Microsoft Windows until Vista came out: Ș (U+0218, S with comma), incorrectly implemented as Ş (U+015E, S with cedilla) ș (U+0219, s with comma), incorrectly implemented as ş (U+015F, s with cedilla) Ț (U+021A, T with comma), incorrectly implemented as Ţ (U+0162, T with cedilla) ț (U+021B, t with comma), incorrectly implemented as ţ (U+0163, t with cedilla) The cedilla-versions of the characters do not exist in the Romanian language (they came to be used due to a historic bug). The UCS now says that encoding this was a mistake because it messed up Romanian data and the letters with cedilla and the letters with comma are the same letter with a different style. Since Romanian hardware keyboards are not widely available, Cristian Secară has created a driver that allows Romanian characters to be generated with a US-style keyboard in all versions of Windows prior to Vista through the use of the AltGr key modifier. Windows Vista and newer versions include the correct diacritical signs in the default Romanian Keyboard layout. This layout has the Z and Y keys mapped like in English layouts and also includes characters like the 'at' (@) and dollar ($) signs, among others. The older cedilla-version layout is still included albeit as the 'Legacy' layout. Slovak In Slovakia, similarly to the Czech Republic, both QWERTZ and QWERTY keyboard layouts are used. QWERTZ is the default keyboard layout for Slovak in Microsoft Windows. Spanish Spain The Spanish keyboard layout is used to write in Spanish and in other languages of Spain such as Catalan, Basque, Galician, Aragonese, Asturian and Occitan. It includes Ñ for Spanish, Asturian and Galician, the acute accent, the diaeresis, the inverted question and exclamation marks (¿, ¡), the superscripted o and a (º, ª) for writing abbreviated ordinal numbers in masculine and feminine in Spanish and Galician, and finally, some characters required only for typing Catalan and Occitan, namely Ç, the grave accent and the interpunct ( / , used in l·l, n·h, s·h; located at Shift-3). It can also be used to write other international characters, such as those using a circumflex accent (used in French and Portuguese among others) or a tilde (used in both Spanish and Portuguese), which are available as dead keys. However, it lacks two characters used in Asturian: Ḥ and Ḷ (historically, general support for these two has been poor – they aren't present in the ISO 8859-1 character encoding standard, or any other ISO/IEC 8859 standard). Several alternative distributions, based on this one or created from scratch, have been created to address this issue (see the Other original layouts and layout design software section for more information). On most keyboards, € is marked as Alt Gr + E and not Alt Gr + 5 as shown in the image. However, in some keyboards, € is found marked twice. An alternative version exists, supporting all of ISO 8859-1. 
Spanish keyboards are usually labelled in Spanish instead of English, with the key names abbreviated in Spanish. On some keyboards, the c-cedilla key (Ç) is located one or two rows above, rather than to the right of, the acute accent key (´). In some cases it is placed to the right of the plus sign key (+), while on other keyboards it is situated to the right of the inverted exclamation mark key (¡). Latin America The Latin American Spanish keyboard layout, officially known as Spanish Latinamerican sort, is used throughout Mexico and Central and South America. Before its design, Latin American vendors had been selling the Spanish (Spain) layout as the default. Its most obvious difference from the Spanish (Spain) layout is the lack of a Ç key; on Microsoft Windows it also lacks a tilde (~) dead key, whereas on Linux systems the dead tilde can be optionally enabled. This is not a problem when typing in Spanish, but it is rather problematic when typing in Portuguese, which can be an issue in countries with large commercial ties to Brazil (Argentina, Uruguay and Paraguay). Normally "Bloq Mayús" is used instead of "Caps Lock", and "Intro" instead of "Enter". Swedish The central characteristics of the Swedish keyboard are the three additional letters Å/å, Ä/ä, and Ö/ö. The same visual layout is also in use in Finland and Estonia, as the letters Ä/ä and Ö/ö are shared with the Swedish language, and even Å/å is needed by Swedish-speaking Finns. However, the Finnish multilingual keyboard adds new letters and punctuation to the functional layout. The Norwegian keyboard largely resembles the Swedish layout, but the and are replaced with and . The Danish keyboard is also similar, but it has the and swapped. On some systems, the Swedish or Finnish keyboard may allow typing Ø/ø and Æ/æ by holding the or key while striking and , respectively. The Swedish with Sámi keyboard allows typing not only Ø/ø and Æ/æ, but even the letters required to write various Sámi languages. This keyboard has the same function for all the keys engraved on the regular Swedish keyboard, and the additional letters are available through the key. On Macintosh computers, the Swedish and Swedish Pro keyboards differ somewhat from the image shown above, especially as regards the characters available using the or keys. (on the upper row) produces the ° sign, and produces the € sign. The digit keys produce ©@£$∞§|[]≈ with and ¡"¥¢‰¶\{}≠ with . On Linux systems, the Swedish keyboard may also give access to additional characters as follows:
first row: ¶¡@£$€¥{[]}\± and ¾¹²³¼¢⅝÷«»°¿¬
second row: @ł€®þ←↓→œþ"~ and ΩŁ¢®Þ¥↑ıŒÞ°ˇ
third row: ªßðđŋħjĸłøæ´ and º§ÐªŊĦJ&ŁØÆ×
fourth row: |«»©""nµ¸·̣ and ¦<>©‘’Nº˛˙˙
Several of these characters function as dead keys. Turkish Today the majority of Turkish keyboards are based on QWERTY (the so-called Q-keyboard layout), although there is also the older F-keyboard layout specifically designed for the language. Vietnamese The Vietnamese keyboard layout is an extended Latin QWERTY layout. The letters Ă, Â, Ê, and Ô are found on what would be the number keys – on the US English keyboard, with – producing the tonal marks (grave accent, hook, tilde, acute accent and dot below, in that order), producing Đ, producing the đồng sign (₫) when not shifted, and the bracket keys () producing Ư and Ơ. Multilingual variants Multilingual keyboard layouts, unlike the default layouts supplied for one language and market, try to make it possible for the user to type in any of several languages using the same number of keys. 
Mostly this is done by adding a further virtual layer in addition to the -key by means of (or 'right ' reused as such), which contains a further repertoire of symbols and diacritics used by the desired languages. This section also tries to arrange the layouts in ascending order by the number of languages supported, rather than alphabetically as usual. United Kingdom (Extended) Layout Windows From Windows XP SP2 onwards, Microsoft has included a variant of the British QWERTY keyboard (the "United Kingdom Extended" keyboard layout) that can additionally generate several diacritical marks. This supports input on a standard physical UK keyboard for many languages without changing the positions of frequently used keys, which is useful when working with text in Welsh, Scottish Gaelic and Irish, languages native to parts of the UK (Wales, parts of Scotland and Northern Ireland, respectively). In this layout, the grave accent key () becomes, as it also does in the US International layout, a dead key modifying the character generated by the next key pressed. The apostrophe, double-quote, tilde and circumflex (caret) keys are not changed, becoming dead keys only when 'shifted' with . Additional precomposed characters are also obtained by shifting the 'normal' key using the key. The extended keyboard is software installed from the Windows control panel, and the extended characters are not normally engraved on keyboards. The UK Extended keyboard mostly uses the AltGr key to add diacritics to the letters a, e, i, n, o, u, w and y (the last two being used in Welsh) as appropriate for each character, as well as to their capitals. Pressing the key and then a character that does not take the specific diacritic produces the behaviour of a standard keyboard. The key presses followed by the spacebar generate a stand-alone mark: Grave accents (e.g. à, è, etc.) needed for Scots Gaelic are generated by pressing the grave accent (or 'backtick') key , which is a dead key, then the letter. Thus produces à. Acute accents (e.g. á) needed for Irish are generated by pressing the key together with the letter (or acting as a dead key combination followed by the letter). Thus produces á; produces Á. (Some programs use the combination of and a letter for other functions, in which case the method must be used to generate acute accents.) The circumflex diacritic needed for Welsh may be added by , acting as a dead key combination, followed by the letter. Thus then produces â, then produces the letter ŵ. Some other languages commonly studied in the UK and Ireland are also supported to some extent: Diaeresis or umlaut (e.g. ä, ë, ö, etc.) is generated by the dead key combination , then the letter. Thus produces ä. Tilde (e.g. ã, ñ, õ, etc., as used in Spanish and Portuguese) is generated by the dead key combination , then the letter. Thus produces ã. Cedilla (e.g. ç) under c is generated by , and the capital letter (Ç) is produced by . The and letter method used for acutes and cedillas does not work for applications which assign shortcut menu functions to these key combinations. These combinations are intended to be mnemonic and designed to be easy to remember: the circumflex accent (e.g. â) is similar to the free-standing circumflex (caret) (^), printed above
in most electronic keyboards. Some keyboards, such as the Kinesis or TypeMatrix, retain the QWERTY layout but arrange the keys in vertical columns, to reduce unnecessary lateral finger motion. Computer keyboards The first computer terminals, such as the Teletype, were typewriters that could produce and be controlled by various computer codes. These used the QWERTY layout and added keys, such as escape (ESC), which had special meanings to computers. Later keyboards added function keys and arrow keys. Since the standardization of PC-compatible computers and Windows after the 1980s, most full-sized computer keyboards have followed this standard (see drawing at right). This layout has a separate numeric keypad for data entry at the right, 12 function keys across the top, and a cursor section to the right and center with keys for Insert, Delete, Home, End, Page Up, and Page Down, with cursor arrows in an inverted-T shape. Diacritical marks QWERTY was designed for English, a language with accents ('diacritics') appearing only in a few words of foreign origin. The standard US keyboard has no provision for these at all; the need was later met by the so-called "US-International" keyboard mapping, which uses "dead keys" to type accents without having to add more physical keys. (The same principle is used in the standard US keyboard layout for macOS, but in a different way.) Most European (including UK) keyboards for PCs have an AltGr key (an 'Alternative Graphics' key, replacing the right Alt key) that enables easy access to the most common diacritics used in the territory where sold. For example, the default keyboard mapping for the UK/Ireland keyboard has the diacritics used in Irish, but these are rarely printed on the keys; to type the accents used in Welsh and Scots Gaelic requires the use of a "UK Extended" keyboard mapping and the dead key or compose key method. This arrangement applies to Windows, ChromeOS and Linux; macOS computers have different techniques. The US International and UK Extended mappings provide many of the diacritics needed for students of other European languages. Other keys and characters Specific language variants Minor changes to the arrangement are made for other languages. There are a large number of different keyboard layouts used for different languages written in Latin script. They can be divided into three main families according to where the , , , , and keys are placed on the keyboard. These are usually named after the first six letters, for example the QWERTY layout and the AZERTY layout. This section also includes keyboard layouts with some additional symbols from other languages; these differ from layouts designed from the outset to be usable for multiple languages (see Multilingual variants). The following sections give general descriptions of QWERTY keyboard variants along with details specific to certain operating systems, with an emphasis on Microsoft Windows. English Canada English-speaking Canadians have traditionally used the same keyboard layout as in the United States, unless they are in a position where they have to write French on a regular basis. French-speaking Canadians, for their part, have favoured the Canadian French keyboard layout (see French (Canada), below). The CSA keyboard is the official multilingual keyboard layout of Canada. United Kingdom The United Kingdom and Ireland use a keyboard layout based on the 48-key version defined in the (now withdrawn) British Standard BS 4822. 
It is very similar to that of the United States, but has an AltGr key and a larger Enter key, includes £ and € signs and some rarely used EBCDIC symbols (¬, ¦), and uses different positions for the characters @, ", #, ~, \, and |. The BS 4822:1994 standard did not make any use of the AltGr key and lacked support for any non-ASCII characters other than ¬ and £. It also assigned a key to the non-ASCII character broken bar (¦) but lacked one for the far more commonly used ASCII character vertical bar (|), lacked support for various diacritics used in the Welsh and Scottish Gaelic alphabets, and was missing the letter yogh (ȝ), used very rarely in the Scots language. Therefore, various manufacturers have modified or extended the BS 4822 standard: The B00 key (left of Z), shifted, results in the vertical bar (|) on some systems (e.g. the Windows UK/Ireland keyboard layout and the Linux/X11 UK/Ireland keyboard layout), rather than the broken bar (¦) assigned by BS 4822 and provided in some systems (e.g. the IBM OS/2 UK166 keyboard layout). The E00 key (left of 1) with AltGr provides either the vertical bar (|) (OS/2's UK166 keyboard layout, Linux/X11 UK keyboard layout) or the broken bar (¦) (Microsoft Windows UK/Ireland keyboard layout). Support for the diacritics needed for Scots Gaelic and Welsh was added to Windows and ChromeOS using a "UK-extended" setting (see below); Linux and X Window systems have an explicit or redesignated compose key for this purpose. UK Apple keyboard The British version of the Apple Keyboard does not use the standard UK layout. Instead, some older versions have the US layout (see below) with a few differences: the sign is reached by and the sign by , the opposite of the US layout. The is also present and is typed with . Umlauts are reached by typing and then the vowel, and ß is reached by typing . Newer Apple "British" keyboards use a layout that is relatively unlike either the US or traditional UK keyboard. It uses an elongated Return key, a shortened left with and in the newly created position, and in the upper left of the keyboard are and instead of the traditional EBCDIC codes. The middle-row key that fits inside the key has and . United States The arrangement of the character input keys and the Shift keys contained in this layout is specified in the US national standard ANSI-INCITS 154-1988 (R1999) (formerly ANSI X3.154-1988 (R1999)), where this layout is called the "ASCII keyboard". The complete US keyboard layout, as it is usually found, also contains the usual function keys in accordance with the international standard ISO/IEC 9995-2, although this is not explicitly required by the US national standard. US keyboards are used not only in the United States but also in many other English-speaking places (except the UK and Ireland), including India, Australia, Anglophone Canada, Hong Kong, New Zealand, South Africa, Malaysia, Singapore, the Philippines, and Indonesia, which use the same 26-letter alphabet as English. In many other English-speaking jurisdictions (e.g., Canada, Australia, the Caribbean nations, Hong Kong, Malaysia, India, Pakistan, Bangladesh, Singapore, New Zealand, and South Africa), local spelling sometimes conforms more closely to British English usage, although these nations decided to use the US English keyboard layout. Until Windows 8, when Microsoft separated the settings, this had the undesirable side effect of also setting the language to US English, rather than the local orthography. 
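To make the differing positions of @, ", £ and # mentioned above concrete, here is a small comparison (a hand-written sketch from general knowledge of the Windows US and UK layouts, not taken from the BS 4822 standard):

```python
# A few shifted positions that differ between the US and UK layouts.
# Hand-written illustration; not generated from any official keymap.
US = {"Shift+2": "@", "Shift+3": "#", "Shift+'": '"'}
UK = {"Shift+2": '"', "Shift+3": "£", "Shift+'": "@"}

for key in US:
    print(f"{key}: US -> {US[key]!r}, UK -> {UK[key]!r}")
```

Note how @ and " simply swap places between the two layouts.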
The US keyboard layout has a second Alt key instead of the AltGr key and does not use any dead keys; this makes it inefficient for all but a handful of languages. On the other hand, the US keyboard layout (or the similar UK layout) is occasionally used by programmers in countries where the keys for []{} are located in less convenient positions on the locally customary layout. On some keyboards the Enter key is bigger than usual and also takes up part of the row above, roughly the area of the traditional location of the backslash key (\). In these cases the backslash is located in alternative places. It can be situated one row above the default location, to the right of the equals sign key (=). Sometimes it is placed one row below its traditional position, to the right of the apostrophe key (') (in these cases the Enter key is narrower than usual on the row of its default location). It may also be two rows below its default position, to the right of a narrower-than-usual right Shift key. A variant of this layout is used in Arabic-speaking countries. This variant has the | \ key to the left of Z, the ~ ` key where the | \ key is in the usual layout, and the > < key where the ~ ` key is in the usual layout. Czech The typewriter came to the Czech-speaking area in the late 19th century, when it was part of Austria-Hungary, where German was the dominant language of administration. Therefore, Czech typewriters have the QWERTZ layout. However, with the introduction of imported computers, especially since the 1990s, the QWERTY layout is frequently used for computer keyboards. The Czech QWERTY layout differs from QWERTZ in that the characters (e.g. @$& and others) missing from the Czech keyboard are accessible with AltGr on the same keys where they are located on an American keyboard. On Czech QWERTZ keyboards, the positions of these AltGr-accessed characters differ. Danish Both the Danish and Norwegian keyboards include dedicated keys for the letters Å/å, Æ/æ and Ø/ø, but the placement is a little different, as the and keys are swapped on the Norwegian layout. (The Finnish–Swedish keyboard is also largely similar to the Norwegian layout, but the and are replaced with and . On some systems, the Danish keyboard may allow typing Ö/ö and Ä/ä by holding the or key while striking and , respectively.) Computers with Windows are commonly sold with ÖØÆ and ÄÆØ printed on the two keys, allowing the same hardware to be sold in Denmark, Finland, Norway and Sweden with different operating system settings. Dutch (Netherlands) Though it is seldom used (most Dutch keyboards use the US International layout), the Dutch layout uses QWERTY but has additions for the € sign, the diaeresis (¨), and the braces ({ }), as well as different locations for other symbols. An older version contained a single-stroke key for the Dutch character IJ/ij, which is usually typed by the combination of and . In the 1990s, there was a version with the now-obsolete florin sign (Dutch: guldenteken) for IBM PCs. In Flanders (the Dutch-speaking part of Belgium), "AZERTY" keyboards are used instead, due to influence from the French-speaking part of Belgium. See also the section US-International in the Netherlands below. Estonian The keyboard layout used in Estonia is virtually the same as the Swedish layout. The main difference is that the and keys (to the right of ) are replaced with and respectively (the latter letter being the most distinguishing feature of the Estonian alphabet). 
Some special symbols and dead keys are also moved around. Faroese The same as the Danish layout, with (Eth) added, since the Faroe Islands are a self-governed part of the Kingdom of Denmark. French (Canada) This keyboard layout is commonly used in Canada by French-speaking Canadians. It is the most common layout for laptops and stand-alone keyboards aimed at the Francophone market. Unlike the AZERTY layout used in France and Belgium, it is a QWERTY layout, and as such is also relatively commonly used by English speakers in the US and Canada (accustomed to using US-standard QWERTY keyboards) for easy access to the accented letters found in some French loanwords. It can be used to type all accented French characters, as well as some from other languages, and serves all English functions as well. It is popular mainly because of its close similarity to the basic US keyboard commonly used by English-speaking Canadians and Americans, the historical use of US-made typewriters by French-Canadians, and its status as the standard for keyboards in Quebec. It can also easily 'map' to or from a standard US QWERTY keyboard, with the sole loss being the guillemet/degree sign key. Its significant difference from the US standard is that the right Alt key is reconfigured as an AltGr key that gives easy access to a further range of characters (marked in blue and red on the keyboard image; blue indicates an alternative character that will display as typed, and red indicates a dead key whose diacritic will be applied to the next vowel typed). In some variants, the key names are translated to French: is or (short for Fixer/Verrouiller Majuscule, meaning Lock Uppercase). is . is . 
The single-player mode is played against computer-controlled bots. It features music composed by Sonic Mayhem and Front Line Assembly founder Bill Leeb. Notable features of Quake III Arena include its minimalist design, lacking rarely used items and features; the extensive customizability of player settings, such as field of view, texture detail and enemy model; and advanced movement techniques such as strafe-jumping and rocket-jumping. The game was praised by reviewers who, for the most part, described the gameplay as fun and engaging. Many liked the crisp graphics and the focus on multiplayer. Quake III Arena has also been used extensively in professional electronic sports tournaments such as QuakeCon, the Cyberathlete Professional League, DreamHack, and the Electronic Sports World Cup. Gameplay Unlike its predecessors, Quake III Arena does not have a plot-based single-player campaign. Instead, it simulates the multiplayer experience with computer-controlled players known as bots. The game's story is brief: "the greatest warriors of all time fight for the amusement of a race called the Vadrigar in the Arena Eternal." The introduction video shows the abduction of one such warrior, Sarge, while making a last stand. Continuity with prior games in the Quake series and even Doom is maintained by the inclusion of player models and biographical information. A familiar mixture of gothic and technological map architecture as well as specific equipment is included, such as the Quad Damage power-up, the rocket launcher, and the BFG. In Quake III Arena, the player progresses through tiers of maps, combating different bot characters that increase in difficulty, from Crash (at Tier 0) to Xaero (at Tier 7). As the game progresses, the fights take place in more complex arenas and against tougher opponents. While deathmatch maps are designed for up to 16 players, tournament maps are designed for duels between two players, which in the single-player game could be considered 'boss battles'. The weapons are balanced by role, with each weapon having advantages in certain situations, such as the railgun at long range and the lightning gun at close quarters. The BFG super-weapon is an exception to this; compared to other similarly named weapons in the Doom/Quake series, Quake III Arena's incarnation of this weapon is basically a fast-firing rocket launcher, and it is found in hard-to-reach locations. Weapons appear as level items, spawning at regular intervals in set locations on the map. If a player dies, all of their weapons are lost and they receive the spawn weapons for the current map, usually the gauntlet and machine gun. Players also drop the weapon they were using when killed, which other players can then pick up. Quake III Arena comes with several gameplay modes: Free for All (FFA), a classic deathmatch where each player competes against the rest for the highest score; Team Deathmatch (TDM), where usually two teams of four compete for the highest team frag (kill) total; Tournament (1v1), a deathmatch between two players, usually ending after a set time; and Capture the Flag, which is played on symmetrical maps where teams have to recover the enemy flag from the opponents' base while retaining their own. Quake III Arena was specifically designed for multiplayer. The game allows players whose computers are connected by a network or to the internet to play against each other in real time, and it incorporates a handicap system. It employs a client–server model, requiring all players' clients to connect to a server. 
Quake III Arena's focus on multiplayer gameplay spawned a lively community, similar to QuakeWorld, that is still active as of 2021. Characters Quake III Arena features several characters from previous entries in the Quake series, including "Bitterman" from Quake II and the "Ranger" character from Quake, as well as Doomguy from id Software's sister franchise Doom. Development In early March 1999, ATI leaked the internal hardware vendor (IHV) build of the game, which had been unveiled to the public by Steve Jobs, then CEO of Apple Inc., at the Macworld Conference & Expo at the Moscone Center in January and at Makuhari Messe in February. This was a functional version of the engine with a textured level and working guns. The IHV build contained most of the weapons (except the Gauntlet) that would make it into the final game, although most were not fully modeled; a chainsaw and grappling hook were also in the IHV build but did not make it into the final release. Many of the sounds that would make it into the final release were also included. After the IHV leak, id Software released a beta of the game called Quake III Arena Test on April 24, 1999, initially only for Mac OS before expanding to Windows at a later date. The Q3Test started with version 1.05 and included three levels that would be included in the final release: dm7, dm17, and q3tourney2. id Software continued to update Q3Test up until version 1.09. id co-founder and former technical director John Carmack has stated that Quake III Arena is his favorite game he has worked on. Quake III Arena was shipped to retailers on December 2, 1999; the official street date for the game was December 5, although id Software chief executive officer Todd Hollenshead expected the game to be available as early as December 3 from retailers like Babbage's and EB Games. The game supported Aureal Semiconductor's A3D 2.0 HRTF technology out of the box. Game engine id Tech 3 is the name given to the engine developed for Quake III Arena. Unlike most other games released at the time, Quake III Arena requires an OpenGL-compliant graphics accelerator to run; the game does not include a software or Direct3D renderer. The graphics technology of the game is based tightly around a "shader" system, where the appearance of many surfaces can be defined in text files referred to as "shader scripts". Quake 3 also introduced spline-based curved surfaces in addition to planar volumes, which are responsible for many of the surfaces present within the game. Quake 3 also provided support for models animated using vertex animation with attachment tags (known as the .md3 format), allowing models to maintain separate torso and leg animations and hold weapons. Quake 3 is one of the first games where the third-person model is able to look up, down and around, as the head, torso and legs are separate. Other visual features include volumetric fog, mirrors, portals, decals, and wave-form vertex distortion. For networking, id Tech 3 uses a "snapshot" system to relay information about game "frames" to the client over UDP. The server attempts to omit as much information as possible about each frame, relaying only differences from the last frame the client confirmed as received (delta encoding). id Tech 3 uses a virtual machine to control object behavior on the server, effects and prediction on the client, and the user interface. 
This presents many advantages: mod authors do not need to worry about crashing the entire game with bad code, clients can show more advanced effects and game menus than was possible in Quake II, and the user interface for mods is entirely customizable. Unless operations which require a specific endianness are used, a QVM file will run the same on any platform supported by Quake III Arena. The engine contains bytecode compilers for the x86 and PowerPC architectures, and otherwise executes QVM instructions via an interpreter. Quake III Arena features an advanced AI with five difficulty levels which can accommodate both beginner and advanced players, though the bots usually do not pose a challenge to high-tier or competitive players. Each bot has its own, often humorous, 'personality', expressed as scripted lines that are triggered to simulate real player chat. If the player types certain phrases, the bots may respond: for example, typing "You bore me" might cause a bot to reply "You should have been here 3 hours ago!". Each bot has a number of alternative lines to reduce the repetition of bot chatter. The Gladiator bots from Quake II were ported to Quake III Arena and incorporated into the game by their creator, Jan Paul van Waveren, aka Mr. Elusive. Bot chat lines were written by R. A. Salvatore, Seven Swords and Steve Winter. Xaero, the hardest opponent in the game, was based on the Gladiator bot Zero. The bot Hunter appears on magazine covers in the later id game Doom 3. On August 19, 2005, id Software released the complete source code for Quake III Arena under the GNU General Public License v2.0 or later, as it has for most of its prior engines. As before, the engine, but not content such as textures and models, was released, so anyone who wishes to build the game from source will still need an original copy of the game to play it as intended.
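As a rough illustration of the snapshot and delta-encoding idea described above (a minimal sketch in the spirit of the design, not id Software's actual code; real snapshots are binary-packed entity states, not dictionaries):

```python
# Sketch of delta-encoded snapshots: the server sends only the fields
# that changed since the last snapshot the client acknowledged.
def delta_encode(last_acked: dict, current: dict) -> dict:
    """Return only the entries of `current` that differ from the
    client's last acknowledged snapshot."""
    return {k: v for k, v in current.items() if last_acked.get(k) != v}

def delta_apply(last_acked: dict, delta: dict) -> dict:
    """Reconstruct the full snapshot on the client side."""
    snapshot = dict(last_acked)
    snapshot.update(delta)
    return snapshot

# Example: only position and health changed between frames.
acked = {"pos": (0, 0, 0), "health": 100, "weapon": "railgun"}
now = {"pos": (10, 0, 5), "health": 85, "weapon": "railgun"}
delta = delta_encode(acked, now)   # {'pos': (10, 0, 5), 'health': 85}
assert delta_apply(acked, delta) == now
```

If a client acknowledgement is lost, the server can simply compute the delta against an older acknowledged snapshot, which keeps the scheme robust over unreliable UDP.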
"Bitterman" from Quake II, the "Ranger" character from Quake as well as Doomguy from id Software's sister franchise Doom. Development During early March 1999, ATI leaked the internal hardware vendor (IHV) copy of the game, which unveiled to the public in Macworld Conference & Expo at Moscone Center in January and Makuhari Messe in February by Steve Jobs (CEO of Apple Inc. at the time when it unveiled). This was a functional version of the engine with a textured level and working guns. The IHV contained most of the weapons (excepting the Gauntlet) that would make it into the final game although most were not fully modeled; a chainsaw and grappling hook were also in the IHV but did not make it into the final release. Many of the sounds that would make it into the final release were also included. After the IHV leak, id Software released a beta of the game called Quake III Arena Test on April 24, 1999, initially only for Mac OS before expanding to Windows at a later date. The Q3Test started with version 1.05 and included three levels that would be included in the final release: dm7, dm17, and q3tourney2. Id Software continued to update Q3Test up until version 1.09. id co-founder and former technical director John Carmack has stated that Quake III Arena is his favorite game he has worked on.Quake III Arena was shipped to retailers on December 2, 1999; the official street date for the game was December 5, although id Software chief executive officer Todd Hollenshead expected the game to be available as early as December 3 from retailers like Babbage's and EB Games. the game was support for the A3D 2.0 HRTF technology by Aureal Semiconductor out of the box. Game engine The id Tech 3 engine is the name given to the engine that was developed for Quake III Arena. Unlike most other games released at the time, Quake III Arena requires an OpenGL-compliant graphics accelerator to run. making The game does not include a software or Direct3D renderer. The graphic technology of the game is based tightly around a "shader" system where the appearance of many surfaces can be defined in text files referred to as "shader scripts". Quake 3 also introduced spline-based curved surfaces in addition to planar volumes, which are responsible for many of the surfaces present within the game. Quake 3 also provided support for models animated using vertex animation with attachment tags (known as the .md3 format), allowing models to maintain separate torso and leg animations and hold weapons. Quake 3 is one of the first games where the third-person model is able to look up and down and around as the head, torso and legs are separate. Other visual features include volumetric fog, mirrors, portals, decals, and wave-form vertex distortion. For networking, id Tech 3 uses a "snapshot" system to relay information about game "frames" to the client over UDP. The server attempts to omit as much information as possible about each frame, relaying only differences from the last frame the client confirmed as received (Delta encoding). id Tech 3 uses a virtual machine to control object behavior on the server, effects and prediction on the client and the user interface. This presents many advantages as mod authors do not need to worry about crashing the entire game with bad code, clients could show more advanced effects and game menus than was possible in Quake II and the user interface for mods was entirely customizable. 
Mods Like its predecessors, Quake and Quake II, Quake III Arena can be heavily modified, allowing the engine to be used for many different games. Mods range from small gameplay adjustments like Rocket Arena 3 and Orange Smoothie Productions to total conversions such as Smokin' Guns, DeFRaG, and Loki's Revenge. The source code's release has allowed total conversion mods such as Tremulous, World of Padman, OpenArena, and Urban Terror to evolve into free standalone games. Other mods like Weapons Factory Arena have moved to more modern commercial engines. Challenge ProMode Arena became the primary competitive mod for Quake III Arena since the Cyberathlete Professional League announced CPMA as its
Part of the first mission of the N64 port is used as a prologue. Some enemy types were removed and two new enemies were added: the Arachnid, a human-spider cyborg with twin railgun arms, and the Guardian, a bipedal boss enemy. Saving the game is only possible between levels and at mid-level checkpoints where the game loads, whereas in the PC version the game could be saved and loaded at any time. The game supports the PlayStation Mouse peripheral to provide greater parity with the PC version's gameplay. The music used in this port is a combination of the original Quake II score and tracks from the PC version's mission packs, while the opening and closing cut-scenes are taken from the Ground Zero expansion pack. The PlayStation version uses a new engine developed by HammerHead for their future PlayStation projects and runs at a 512×240 resolution at 30 frames per second. The developer was keen to retain visual parity with the PC version and avoid tricks such as the use of environmental fog. Colored lights for levels and enemies, and yellow highlights for gunfire and explosions, are carried across from the PC version, with the addition of lens flare effects located around the light sources on the original lightmaps. There is no skybox; instead, a flat Gouraud-textured purple "sky" is drawn across the ceiling. The game uses particles to render blood, debris, and railgun beams analogously to the PC version. There is also a split-screen multiplayer mode for two to four players (a four-player game is possible using the PlayStation's Multi-tap). The only available player avatar is a modified version of the male player avatar from the PC version, the most noticeable difference being the addition of a helmet. Players can only customize the color of their avatar's armor and change their name. The twelve multiplayer levels featured are unique to the PlayStation version, with none of the PC multiplayer maps being carried over. The Nintendo 64 version has completely different single-player levels and multiplayer maps, and features multiplayer support for up to four players. This version also has new lighting effects, mostly seen in gunfire, and uses the Expansion Pak for extra graphical detail. This port also features an entirely new soundtrack, consisting mostly of dark ambient pieces, composed by Aubrey Hodges. A port of Quake II was included with Quake 4 for the Xbox 360 on a bonus disc. This is a direct port of the original game, with some graphical improvements. The port allows for System Link play for up to sixteen players, split-screen for four players, and cooperative play in single-player for up to sixteen players, or four players with split-screen alone. Unofficial In December 2018, Polish programmer Krzysztof Kondrak released the original Quake II v3.21 source code with Vulkan support added. The port, called vkQuake2, is available under the GPLv2. Mods As with the original Quake, Quake II was designed to allow players to easily create custom content. A large number of mods, maps, graphics such as player models and skins, and sound effects were created and distributed free of charge via the Internet. Popular websites such as PlanetQuake and Telefragged allowed players to gain access to custom content. Another improvement over Quake was that it was easier to select custom player models, skins, and sound effects, because they could be selected from an in-game menu. Mods for the game include Action Quake from 1999.
PC Gaming World's Simon Quirk wrote of the game, "The Action Quake team fancied a multiplayer-only total conversion of Quake II where strategy, accuracy, and cool-looking fights would dominate." Release Quake II was released on December 9, 1997, in the United States (one day short of four years after the release of Doom) and on December 12 in Europe. Despite the title, Quake II is a sequel to the original Quake in name only: the scenario, enemies, and theme are entirely separate and do not fall into the same continuity as Quake. id initially wanted to release it under a different name, but for legal reasons (most of their suggested names were already taken) they decided to keep the working title. According to Jennell Jaquays, Quake II was also adopted as a name to leverage the popularity of Quake. Quake II has been released on Steam, but this version does not include the soundtrack. The game was released on a bonus disc included with Quake 4 Special Edition for the PC, along with both expansion packs. This version also lacks the soundtrack. Quake II is also available on a bonus disc with the Xbox 360 version of Quake 4. This version is a direct port featuring the original soundtrack and multiplayer maps. In 2015, Quake II: Quad Damage, a bundle containing the original game and the mission packs, was released on GOG.com; unlike the previous releases, it contains a new customizable launcher and the official soundtrack in OGG format, which can be played in-game, making it the only digital release to include music. The game has also been included in the following official compilations: Quake II: Quad Damage – contains Quake II and all three official expansion packs. Quake II: Colossus – a compilation for Linux that contains Quake II and both mission packs. Ultimate Quake – a compilation including the original Quake trilogy. Quake II RTX A remastered version of the game, titled Quake II RTX, was announced by Nvidia in March 2019 and was released on June 6 for Windows and Linux on Steam. This remastered version requires either an Nvidia RTX or an AMD Radeon RX 6000 series GPU to utilize these cards' hardware ray-tracing functionality, but a software fallback is available for graphics cards that are fast enough. The game, provided free of charge, includes the three levels present in the original Quake II demo, but can be used to play the full game if its data files are available. Unlike in most games, ray tracing is used extensively here for lighting, reflections, and other effects. This is only possible because of the otherwise low hardware demands of Quake II. Expansions Quake II Mission Pack: The Reckoning Quake II Mission Pack: The Reckoning is the first official expansion pack, released on May 30, 1998. It was developed by Xatrix Entertainment. First announced in January 1998, it features eighteen new single-player levels, six new deathmatch levels, three new weapons (the Ion Ripper, Phalanx Particle Cannon, and Trap), a new power-up, two new enemies, seven modified versions of existing enemies, and five new music tracks. The storyline follows Joker, a member of an elite squad of marines on a mission to infiltrate a Strogg base on one of Stroggos' moons and destroy the Strogg fleet, which is preparing to attack. Joker crash-lands in the swamps outside of the compound where his squad is waiting. He travels through the swamps, bypasses the compound's outer defenses, and enters through the main gate, finding his squad just in time to watch them get executed by Strogg forces.
Next, Joker escapes on his own to the fuel refinery, where he helps the Air Force destroy all fuel production, then infiltrates the Strogg spaceport, boards a cargo ship, and reaches the Moon Base, destroying it and the Strogg fleet. Notably, the section of the game that takes place on the Moon Base has low gravity, something that was previously used in one secret level of the original Quake. The Reckoning received mixed reviews. It holds a 69.50% rating on GameRankings, while GameSpot gave it a score of 7.4/10. Quake II Mission Pack: Ground Zero Quake II Mission Pack: Ground Zero is the second official expansion pack, released on September 11, 1998. It was developed by Rogue Entertainment. It comes with fourteen new single-player levels, ten new multiplayer maps, five additional music tracks, five new enemies, seven new power-ups, and five new weapons. In the expansion's story, the Gravity Well has trapped the Earth Fleet in orbit above the planet Stroggos. One of the marines who managed to land, Stepchild, must now make his way to the Gravity Well to destroy it, freeing the fleet above and disabling the planet's entire defenses. Ground Zero received average to mixed reviews. It holds a 65.40% rating on GameRankings. Patrick Baggatta of IGN gave the expansion 7.5/10, describing it as similar to the original, but noting occasionally confusing map design. Elliott Chin of GameSpot gave the game 7.9/10, citing it as decent for an expansion and praising the monsters and enhanced AI. Johnny B. of Game Revolution rated the expansion D+, citing bad level design and few additions to the original game, and noted the multiplayer power-up gameplay as the only fun feature. Quake II Netpack I: Extremities Quake II Netpack I: Extremities contains, among other features, 11 game mods and 12 deathmatch maps. Reception Sales Quake II entered PC Data's monthly computer game sales rankings at #2 for December 1997, behind Riven. The game's sales in the United States alone reached 240,913 copies by the end of 1997, after its release on December 9. According to PC Data, it was the country's 22nd-best-selling computer game of 1997. The following year, Quake II secured fifth place on PC Data's charts for January and February 1998, then dropped to #8 in March and #9 in April. It remained in PC Data's top 20 for another two months, before exiting in July 1998. Quake II surpassed 850,000 units shipped to retailers by May 1998, and 900,000 by June. According to PC Data, Quake II was the United States' 14th-best-selling computer game during the January–November 1998 period. It ultimately secured 15th place for the full year, with sales of 279,536 copies and revenues of $12.6 million. GameDaily reported in January 1999 that Quake II's sales in the United States had reached 550,000 units; this number rose to 610,000 units by December of that year. Worldwide, Quake II sold over 1 million copies by 2002. Critical reviews Quake II received generally positive reviews across all platforms. Next Generation reviewed the PC version of the game, rating it four stars out of five, and stated that "all in all, id should be commended for the advancement of its technology and improvement in its single-player level design, but it's going to be up to mod designers to provide the necessary additions to the
attempt to prevent an alien invasion of Earth by launching a pre-emptive attack against the home planet of the hostile Strogg civilization. Most of the other soldiers are captured or killed as soon as they approach the planned landing zone. Bitterman survives because another Marine's personal capsule collided with his upon launch, causing him to crash far short of the landing zone. Bitterman fights his way through the Strogg city, destroying strategic objectives along the way, and finally kills the Strogg leader, the Makron, in his orbital asteroid base. Development Originally, Quake II was supposed to be an entirely new game and IP; titles like "Strogg", "Lock and Load", and even just "Load" were toyed with in the early days of development. But after numerous failed attempts, the team at id decided to stick with "Quake II" and forego the gothic Lovecraftian horror theme of the original in favor of a more sci-fi aesthetic. The game was developed by a 13-person team. Artist and co-owner Adrian Carmack has said that Quake II is his favorite game in the series because "it was different and a cohesive project". This is also the last id Software game to feature American McGee, who was fired shortly after its release. Unlike Quake, where hardware-accelerated graphics controllers were supported only with later patches, Quake II came with OpenGL support out of the box. Later downloads from id Software added support for AMD's 3DNow! instruction set for improved performance on their K6-2 processors, and Rendition released a native renderer for their V1000 graphics chip. The latest version is 3.21. This update includes numerous bug fixes and new levels designed for multiplayer deathmatch. Version 3.21, available as source code on id Software's FTP server, has no improved functionality over version 3.20 and is simply a slight modification to make compiling for Linux easier. Quake II uses an improved version of the client–server model introduced in Quake. The game code of Quake II, which defines all the functionality for weapons, entities, and game mechanics, can be changed in any way because id Software published the source code of their own implementation that shipped with the game. Quake II uses the shared library functionality of the operating system to load the game library at run time—this is how mod authors are able to alter the game and provide different gameplay mechanics, new weapons, and much more. The full source code to Quake II version 3.19 was released under the terms of the GNU GPL-2.0-or-later on December 22, 2001. Version 3.21 followed later. An LCC-friendly version was released on January 1, 2002, by a modder going by the name of Major Bitch. Since the release of the Quake II source code, several third-party updates to the game engine have been created; the most prominent of these are projects focused on graphical enhancements to the game, most notably "Yamagi Quake II", Quake2maX, EGL, Quake II Evolved, and KMQuake II. The source release also revealed numerous security flaws which can result in remote compromise of both the Quake II client and server. As id Software no longer maintains Quake II, most third-party engines include fixes for these bugs. The unofficial patch 3.24, which fixes bugs and adds only meager tweaks, is recommended for Quake II purists, as it is not intended to add new features or be an engine mod in its own right.
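The shared-library mechanism can be sketched in C roughly as follows. Quake II's released source exchanges engine and game function tables through a single exported entry point called GetGameAPI; the miniature version below is a simplified illustration with invented structure contents and an assumed library filename, not the actual Quake II headers.

```c
/* Minimal sketch of how an engine might load a game module at run time,
 * in the style of Quake II's GetGameAPI exchange. The structures and the
 * filename are illustrative placeholders. */
#include <dlfcn.h>
#include <stdio.h>

typedef struct { void (*dprintf)(const char *fmt, ...); } game_import_t;
typedef struct { void (*Init)(void); void (*RunFrame)(void); } game_export_t;

typedef game_export_t *(*GetGameAPI_t)(game_import_t *imports);

int main(void) {
    /* The engine opens the mod's shared library... */
    void *handle = dlopen("./gamex86.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* ...resolves the single entry point... */
    GetGameAPI_t GetGameAPI = (GetGameAPI_t)dlsym(handle, "GetGameAPI");
    if (!GetGameAPI) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* ...and trades engine callbacks for the mod's game functions. */
    game_import_t imports = {0};
    game_export_t *game = GetGameAPI(&imports);
    game->Init();
    game->RunFrame();

    dlclose(handle);
    return 0;
}
```

Because the engine only ever calls through this table, a mod can replace any weapon, entity, or rule without the engine binary changing at all.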
The most popular server-side engine modification for multiplayer, R1Q2, is generally recommended as a replacement for the 3.20 release for both clients and servers. In July 2003, Vertigo Software released a port of Quake II for the Microsoft .NET platform, written in Managed C++ and called Quake II .NET. It became a poster application for the language, showcasing the interoperability between .NET and standard C++ code, and remains one of the top downloads on the Visual C++ website. In May 2004, Bytonic Software released a port of Quake II (called Jake2) written in Java using JOGL. In 2010 Google ported Jake2 to HTML5, running in Safari and Chrome. Quake II's game engine was a popular license and formed the basis for several commercial and free games, such as CodeRED: Alien Arena, War§ow, SiN, Anachronox, Heretic II, Daikatana, Soldier of Fortune, Kingpin: Life of Crime, and UFO: Alien Invasion. Valve's 1998 video game Half-Life used the Quake II engine during early development stages. However, the final version runs on a heavily modified version of the Quake engine, GoldSrc, with a small amount of the Quake II code. Ports Ports of Quake II were released in 1999 on the Nintendo 64 (ported by Raster Productions) and PlayStation (ported by HammerHead) video game consoles. In both cases, the core gameplay was largely identical; however, changes were made to the game sequence, and split-screen multiplayer replaced network or Internet play. A Macintosh port was developed by Logicware and released in July 1999. Quake II: Colossus (Quake II with both official add-ons) was ported to Linux by id Software and published by Macmillan Digital Publishing in 1999. Be Inc. officially ported Quake II: Colossus to the BeOS to test their OpenGL acceleration in 1999, and provided the game files for free download at a later date—a Windows, Macintosh, or Linux install CD was required to install the game, with the official add-ons being optional. The PlayStation version contains abridged versions of Units 1, 3, 6, 7, 8, and 10 of the PC version, redesigned to meet the console's technical limitations. For example, many short airlock-like corridors were added to maps to provide loading pauses inside what were contiguous areas in the PC version.
Although the concept has been important within many Chinese philosophies, over the centuries the descriptions of qi have varied and have sometimes been in conflict. Until China came into contact with Western scientific and philosophical ideas, the Chinese had not categorized all things in terms of matter and energy. Qi and li (理, "pattern") were 'fundamental' categories similar to matter and energy. Fairly early on, some Chinese thinkers began to believe that there were different fractions of qi—the coarsest and heaviest fractions formed solids, lighter fractions formed liquids, and the most ethereal fractions were the "lifebreath" that animated living beings. Yuanqi is a notion of innate or prenatal qi, which is distinguished from acquired qi that a person may develop over their lifetime. Philosophical roots The earliest texts that speak of qi give some indications of how the concept developed. In the Analects of Confucius, qi could mean "breath". Combining it with the Chinese word for blood (making 血氣, xue–qi, blood and breath), the concept could be used to account for motivational characteristics. The philosopher Mozi used the word qi to refer to noxious vapors that would eventually arise from a corpse were it not buried at a sufficient depth. He reported that early civilized humans learned how to live in houses to protect their qi from the moisture that troubled them when they lived in caves. He also associated maintaining one's qi with providing oneself with adequate nutrition. In regard to another kind of qi, he recorded how some people performed a kind of prognostication by observing qi (clouds) in the sky. Mencius described a kind of qi that might be characterized as an individual's vital energies. This qi was necessary to activity, and it could be controlled by a well-integrated willpower. When properly nurtured, this qi was said to be capable of extending beyond the human body to reach throughout the universe. It could also be augmented by means of careful exercise of one's moral capacities. On the other hand, the qi of an individual could be degraded by adverse external forces that succeed in operating on that individual. Living things were not the only things believed to have qi. Zhuangzi indicated that wind is the qi of the Earth. Moreover, cosmic yin and yang "are the greatest of qi". He described qi as "issuing forth" and creating profound effects. He also said, "Human beings are born [because of] the accumulation of qi. When it accumulates there is life. When it dissipates there is death... There is one qi that connects and pervades everything in the world." The Guanzi essay Neiye (Inward Training) is the oldest received writing on the subject of the cultivation of vapor [qi] and meditation techniques. The essay was probably composed at the Jixia Academy in Qi in the late fourth century B.C. Xun Zi, another Confucian scholar of the Jixia Academy, followed in later years. At 9:69/127, Xun Zi says, "Fire and water have qi but do not have life. Grasses and trees have life but do not have perceptivity. Fowl and beasts have perceptivity but do not have yi (sense of right and wrong, duty, justice). Men have qi, life, perceptivity, and yi." Chinese people at such an early time had no concept of radiant energy, but they were aware that one can be heated by a campfire from a distance away from the fire. They accounted for this phenomenon by claiming that "qi" radiated from fire. At 18:62/122, he also uses "qi" to refer to the vital forces of the body that decline with advanced age. Among the animals, the gibbon and the crane were considered experts at inhaling the qi.
The Confucian scholar Dong Zhongshu (ca. 150 BC) wrote in Luxuriant Dew of the Spring and Autumn Annals: "The gibbon resembles a macaque, but he is larger, and his color is black. His forearms being long, he lives eight hundred years, because he is expert in controlling his breathing." Later, the syncretic text assembled under the direction of Liu An, the Huai Nan Zi, or "Masters of Huainan", has a passage that presages most of what is given greater detail by the Neo-Confucians. Role in traditional Chinese medicine The Huangdi Neijing ("The Yellow Emperor's Classic of Medicine", circa 2nd century BCE) is historically credited with first establishing the pathways, called meridians, through which qi allegedly circulates in the human body. In traditional Chinese medicine, symptoms of various illnesses are believed to be either the product of disrupted, blocked, or unbalanced qi movement through meridians, or of deficiencies and imbalances of qi in the Zang-Fu organs. Traditional Chinese medicine often seeks to relieve these imbalances by adjusting the circulation of qi using a variety of techniques including herbology, food therapy, physical training regimens (qigong, t'ai chi ch'uan, and other martial arts training), moxibustion, tui na, and acupuncture. The cultivation of Heavenly and Earthly qi is held to allow for the maintenance of psychological functions. The nomenclature of qi in the human body differs depending on its sources, roles, and locations. As for sources, there is a difference between so-called "Primordial Qi" (acquired at birth from one's parents) and qi acquired throughout one's life. Chinese medicine also differentiates between qi acquired from the air we breathe (so-called "Clean Air") and qi acquired from food and drinks (so-called "Grain Qi"). Looking at roles, qi is divided into "Defensive Qi" and "Nutritive Qi". Defensive Qi's role is to defend the body against invasions, while Nutritive Qi's role is to provide sustenance for the body. To protect against said invasions, medicines have four types of qi: cold, hot, warm, and cool. Cold qi medicines are used to treat invasions hot in nature, while hot qi medicines are used to treat invasions cold in nature. Looking at locations, qi is also named after the Zang-Fu organ or the meridian in which it resides: "Liver Qi", "Spleen Qi", etc. Lastly, prolonged exposure to the three evil qi (wind, cold, and wetness) can result in the penetration of evil qi through surface body parts, eventually reaching the Zang-Fu organs. A qi field (chu-chong) refers to the cultivation of an energy field by a group, typically for healing or other benevolent purposes. A qi field is believed to be produced by visualization and affirmation. Qi fields are an important component of Wisdom Healing Qigong (Zhineng Qigong), founded by Ming Pang. Scientific view The existence of qi has not been proven scientifically. A 1997 consensus statement on acupuncture by the United States National Institutes of Health noted that concepts such as qi "are difficult to reconcile with contemporary biomedical information". Practices involving qi Feng shui The traditional Chinese art of geomancy, the placement and arrangement of space called feng shui, is based on calculating the balance of qi, interactions between the five elements, yin and yang, and other factors. The retention or dissipation of qi is believed to affect the health, wealth, energy level, luck, and many other aspects of the occupants.
Attributes of each item in a space affect the flow of qi by slowing it down, redirecting it or accelerating it. This is said to influence the energy level of the occupants. Positive qi flows in curved lines, whereas negative qi travels in straight lines. In order for qi to be nourishing and positive, it must continue to flow
/*kʰɯds/ (Zhengzhang Shangfang) and /*C.qʰəp-s/ (William H. Baxter and Laurent Sagart). The etymology of qì interconnects with Kharia kʰis "anger", Sora kissa "move with great effort", Khmer kʰɛs "strive after; endeavor", and Gyalrongic kʰɐs "anger". Characters In the East Asian languages, qì has three logographs: 氣 is the traditional Chinese character, Korean hanja, and Japanese kyūjitai ("old character form") kanji; 気 is the Japanese shinjitai ("new character form") kanji; and 气 is the simplified Chinese character. In addition, qì 炁 is an uncommon character especially used in writing Daoist talismans. Historically, the word qì was generally written as 气 until the Han dynasty (206 BCE–220 CE), when it was replaced by the graph 氣, clarified with mǐ 米 "rice", indicating "steam (rising from rice as it cooks)". This primary logograph 气, the earliest written character for qì, consisted of three wavy horizontal lines, seen in Shang dynasty (c. 1600–1046 BCE) oracle bone script, Zhou dynasty (1046–256 BCE) bronzeware script and large seal script, and Qin dynasty (221–206 BCE) small seal script. These oracle, bronze, and seal script logographs were used in ancient times as a phonetic loan character to write qǐ 乞 "plead for; beg; ask", which did not have an early character. The vast majority of Chinese characters are classified as radical-phonetic characters. Such characters combine a semantically suggestive radical with a phonetic element approximating the ancient pronunciation. For example, the widely known word dào 道 "the Dao; the way" graphically combines the "walk" radical 辶 with a shǒu 首 "head" phonetic. Although the modern dào and shǒu pronunciations are dissimilar, the Old Chinese *lˤuʔ-s and *l̥uʔ-s were alike. The regular script character qì 氣 is unusual because qì 气 is both the "air radical" and the phonetic, with mǐ 米 "rice" semantically indicating "steam; vapor". This qì 气 "air/gas radical" was only used in a few native Chinese characters like yīnyūn "thick mist/smoke", but was also used to create new scientific characters for gaseous chemical elements. Some examples are based on pronunciations in European languages: fú 氟 (with a fú 弗 phonetic) "fluorine" and nǎi 氖 (with a nǎi 乃 phonetic) "neon". Others are based on semantics: qīng 氫 (with a jīng phonetic, abbreviating qīng 輕 "light-weight") "hydrogen (the lightest element)" and lǜ 氯 (with a lù phonetic, abbreviating lǜ 綠 "green") "(greenish-yellow) chlorine". Qì is the phonetic element in a few characters, such as kài "hate" with the "heart-mind radical" 心 or 忄, xì "set fire to weeds" with the "fire radical" 火, and xì "to present food" with the "food radical" 食. The first Chinese dictionary of characters, the Shuowen Jiezi (121 CE), notes that the primary qì 气 is a pictographic character depicting "cloudy vapors", and that the full 氣 combines mǐ 米 "rice" with the phonetic qì 气, meaning "present provisions to guests" (later disambiguated as xì 餼). Meanings Qi is a polysemous word. The unabridged Chinese-Chinese character dictionary Hanyu Da Cidian defines it as "present food or provisions" for the xì pronunciation, but lists 23 meanings for the qì pronunciation. The modern ABC Chinese-English Comprehensive Dictionary enters xì 餼 "grain; animal feed; make a present of food", and has a qì entry with seven translation equivalents for the noun, two for bound morphemes, and three for the verb: n. ① air; gas ② smell ③ spirit; vigor; morale ④ vital/material energy (in Ch[inese] metaphysics) ⑤ tone; atmosphere; attitude ⑥ anger ⑦ breath; respiration b.f.
① weather tiānqì ② [linguistics] aspiration sòngqì v. ① anger ② get angry ③ bully; insult. English borrowing Qi was an early Chinese loanword in English. It was romanized as k'i in Church Romanization in the early 19th century, as ch'i in Wade–Giles in the mid-19th century (sometimes misspelled chi, omitting the apostrophe), and as qi in Pinyin in the mid-20th century. The Oxford English Dictionary entry for qi gives the pronunciation as /tʃiː/, the etymology from Chinese qì "air; breath", and a definition of "The physical life-force postulated by certain Chinese philosophers; the material principle." It also gives eight usage examples, with the first recorded example of k'í in 1850 (The Chinese Repository), of ch'i in 1917 (The Encyclopaedia Sinica), and of qi in 1971 (Felix Mann's Acupuncture). Concept References to concepts analogous to qi are found in many Asian belief systems. Philosophical conceptions of qi from the earliest records of Chinese philosophy (5th century BCE) correspond to Western notions of humours and to the ancient Hindu yogic concept of prana. An early form of qi comes from the writings of the Chinese philosopher Mencius (4th century BCE). The ancient Chinese described qi as "life force". They believed it permeated everything and linked their surroundings together. Qi was also linked to the flow of energy around and through the body, forming a cohesive functioning unit. By understanding the rhythm and flow of qi, they believed they could guide exercises and treatments to provide stability and longevity.
by AU Optronics
QUANTA, a user group for the Sinclair QL computer
Quanta Services, a US-based speciality contractor for the electric, gas, and telecommunications industries
Quanta Technology, a utility infrastructure consulting company
Technology
Quanta, an algorithm for random number generation for smart contracts
Quanta Plus, a web development tool
Music
Quanta, a 1997 album by Gilberto Gil
Quanta Live, a Grammy Award-winning 1998 album by Gilberto Gil
Science
Quanta (journal), an open-access academic journal
Quanta Magazine, a magazine covering
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem, based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial-time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret-key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size). Quantum cryptography could potentially fulfill some of the functions of public-key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking. Search problems The most well-known example of a problem admitting a polynomial quantum speedup is unstructured search: finding a marked item out of a list of N items in a database. This can be solved by Grover's algorithm using O(√N) queries to the database, quadratically fewer than the Ω(N) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Problems that can be efficiently addressed with Grover's algorithm have the following properties:
There is no searchable structure in the collection of possible answers,
The number of possible answers to check is the same as the number of inputs to the algorithm, and
There exists a boolean function that evaluates each input and determines whether it is the correct answer.
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies. Simulation of quantum systems Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions, such as the reactions inside a collider.
Quantum simulations might be used to predict future paths of particles and protons under superposition in the double-slit experiment. About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry, while naturally occurring organisms also produce ammonia. Quantum simulations might be used to understand this process and increase production. Quantum annealing and adiabatic optimization Quantum annealing, or adiabatic quantum computation, relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state of a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times through the process. Machine learning Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks. For example, the quantum algorithm for linear systems of equations, or "HHL algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. Computational biology In the field of computational biology, quantum computing is expected to play a big role in solving many biological problems. One of the well-known examples is computational genomics, where computing has drastically reduced the time needed to sequence a human genome. Given how computational biology already relies on generic data modeling and storage, quantum applications to computational biology are expected to arise as well. Computer-aided drug design and generative chemistry Deep generative chemistry models are emerging as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good at solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models, including quantum GANs, may eventually be developed into ultimate generative chemistry algorithms. Hybrid architectures combining quantum computers with deep classical networks, such as quantum variational autoencoders, can already be trained on commercially available annealers and used to generate novel drug-like molecular structures. Developing physical quantum computers Challenges There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer:
Physically scalable to increase the number of qubits
Qubits that can be initialized to arbitrary values
Quantum gates that are faster than the decoherence time
A universal gate set
Qubits that can be read easily
Sourcing parts for quantum computers is also very difficult. Many quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers which enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. Quantum decoherence One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds. As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter, and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence any operation must be completed much more quickly than the decoherence time. As described by the quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time, if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing. Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 qubits without error correction. With error correction, the figure would rise to about 10^7 qubits. Computation time is about L^2, or about 10^7 steps; at 1 MHz, this is about 10 seconds. A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates.
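Restating those resource estimates as a quick worked calculation (the figures are the rough, order-of-magnitude numbers quoted above, not precise requirements):

```latex
% Worked restatement of the estimates above for a 1000-bit number.
% L = 10^3 digits; roughly 10^4 qubits are needed without error
% correction, and error correction multiplies this by another factor of L:
\[
  Q_{\mathrm{corrected}} \approx L \cdot Q_{\mathrm{bare}}
    \approx 10^{3} \cdot 10^{4} = 10^{7}\ \text{qubits}.
\]
% Runtime: about 10^7 steps executed at a 1 MHz gate rate gives
\[
  T \approx \frac{10^{7}\ \text{steps}}{10^{6}\ \text{steps/s}} = 10\ \text{s}.
\]
```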
Quantum supremacy Quantum supremacy is a term coined by John Preskill referring to the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark. In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy, by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to, or the closing of, the gap between Sycamore and classical supercomputers. In December 2020, a group at USTC implemented a type of boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds. On November 16, 2021, at the quantum computing summit, IBM presented a 127-qubit microprocessor named IBM Eagle. Skepticism Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales. Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows: "So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10^300... Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never." Candidates for physical realizations For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):
Superconducting quantum computing (qubit implemented by the state of small superconducting circuits [Josephson junctions])
Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
Neutral atoms in optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
Quantum dot computer, spin-based (e.g.
the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons)
Quantum dot computer, spatial-based (qubit given by electron position in a double quantum dot)
Quantum computing using engineered quantum wells, which could in principle enable the construction of quantum computers that operate at room temperature
Coupled quantum wire (qubit implemented by a pair of quantum wires coupled by a quantum point contact)
Nuclear magnetic resonance quantum computer (NMRQC) implemented with the nuclear magnetic resonance of molecules in solution, where qubits are provided by nuclear spins within the dissolved molecule and probed with radio waves
Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
Vibrational quantum computer (qubits realized by vibrational superpositions in cold molecules)
Electrons-on-helium quantum computers (qubit is the electron spin)
Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities)
Molecular magnet (qubit given by spin states)
Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes)
Nonlinear optical quantum computer (qubits realized by processing states of different modes of light through both linear and nonlinear elements)
Linear optical quantum computer (qubits realized by processing states of different modes of light through linear elements, e.g. mirrors, beam splitters and phase shifters)
Diamond-based quantum computer (qubit realized by the electronic or nuclear spin of nitrogen-vacancy centers in diamond)
Bose–Einstein condensate-based quantum computer
Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers)
Metallic-like carbon nanospheres-based quantum computers
The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy. Models of computation for quantum computing There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed. For practical implementations, the four relevant models of computation are:
Quantum gate array – Computation decomposed into a sequence
replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Quantum algorithms Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms. Quantum algorithms that offer more than a polynomial speedup over the best known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and which doesn't necessarily translate to speedups for practical problems. Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations, have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely. Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedups, they are widely applicable and thus give speedups for a wide range of problems. Many examples of provable quantum speedups for query problems are related to Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, which uses Grover's algorithm, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees, which is a variant of the search problem. Potential applications Cryptography A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public-key cryptographic systems, is believed to be computationally infeasible on an ordinary computer for large integers that are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find the factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial-time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public-key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic-curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
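To make Grover's quadratic speedup concrete, here is a small classical simulation of the algorithm in C. It is an illustrative sketch only: simulating the state vector classically costs O(2^n) memory, but the code shows the oracle and inversion-about-the-mean steps, and that roughly (π/4)·√N iterations suffice, versus about N/2 classical checks on average. The search-space size and marked item are arbitrary example values.

```c
/* Classical simulation of Grover's algorithm on an n-qubit search space.
 * Shows the oracle + diffusion structure and that ~ (pi/4) * sqrt(N)
 * iterations concentrate amplitude on the marked item. */
#include <math.h>
#include <stdio.h>

#define N 256     /* N = 2^n search space, n = 8 qubits (example value) */
#define TARGET 42 /* the "marked" item the oracle recognizes (example) */

int main(void) {
    static double amp[N];
    for (int i = 0; i < N; i++)
        amp[i] = 1.0 / sqrt((double)N); /* uniform superposition */

    int iters = (int)floor(M_PI / 4.0 * sqrt((double)N));
    for (int t = 0; t < iters; t++) {
        amp[TARGET] = -amp[TARGET]; /* oracle: flip the marked amplitude */

        double mean = 0.0; /* diffusion: invert every amplitude about the mean */
        for (int i = 0; i < N; i++) mean += amp[i];
        mean /= N;
        for (int i = 0; i < N; i++) amp[i] = 2.0 * mean - amp[i];
    }

    printf("after %d iterations (vs about %d classical checks on average),\n",
           iters, N / 2);
    printf("P(measure target) = %.4f\n", amp[TARGET] * amp[TARGET]);
    return 0;
}
```

Run on a quantum device, each iteration acts on all N amplitudes at once, which is where the √N query count comes from; the classical loop here pays for that parallelism with O(N) work per step.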
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers which enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. Quantum decoherence One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (in NMR and MRI technology also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds. As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence any operation must be completed much more quickly than the decoherence time.
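As a rough worked example of that error-rate ratio, with illustrative (not measured) numbers:

```python
# Back-of-the-envelope sketch: error per gate ~ t_gate / T2.
# Both values below are assumptions chosen for illustration only.
t_gate = 50e-9       # 50 ns gate time
T2 = 100e-6          # 100 microsecond dephasing time

error_per_gate = t_gate / T2        # 5e-4
max_depth = 1 / error_per_gate      # ~2000 gates before errors accumulate to order one
print(error_per_gate, int(max_depth))
```

This is why gate times must be several orders of magnitude shorter than the decoherence time before long computations, or error correction, become feasible.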
Science and technology
Qt (software), a cross-platform application framework
QuickTime, a multimedia technology from Apple Inc.
Quiet Trader, cargo versions of the BAe 146 jet
Quart (qt), a unit of volume equal to a quarter of a gallon
Other uses
Quality time, time spent with loved ones which is in some way special
QT (New York City Subway service), a former New York City Subway
pop singer
QT: QueerTelevision, an LGBT newsmagazine which aired on Canada's CityTV in the 1990s
Quentin Tarantino (born 1963), American filmmaker
Question Time (TV programme), a topical debate BBC television programme in the UK
Organizations
QuikTrip, a US convenience store chain
QT Inc., manufacturer of the Q-Ray ionized bracelet
QT Hotels & Resorts
The Qt Company
Science and technology
Medicine
Long QT syndrome, a rare hereditary and medication induced cardiac condition
Short QT syndrome, a genetic disease of the electrical system of the heart
QT interval, a measurement
elements. See small Latin squares and quasigroups. Infinite quasigroups For a countably infinite quasigroup Q, it is possible to imagine an infinite array in which every row and every column corresponds to some element q of Q, and where the element a ∗ b is in the row corresponding to a and the column corresponding to b. In this situation too, the Latin square property says that each row and each column of the infinite array will contain every possible value precisely once. For an uncountably infinite quasigroup, such as the group of non-zero real numbers under multiplication, the Latin square property still holds, although the name is somewhat unsatisfactory, as it is not possible to produce the array of combinations to which the above idea of an infinite array extends, since the real numbers cannot all be written in a sequence. (This is somewhat misleading however, as the reals can be written in a sequence of length $\mathfrak{c}$, the cardinality of the continuum, assuming the well-ordering theorem.) Inverse properties The binary operation of a quasigroup is invertible in the sense that both $L_x \colon y \mapsto x ∗ y$ and $R_x \colon y \mapsto y ∗ x$, the left and right multiplication operators, are bijective, and hence invertible. Every loop element has a unique left and right inverse given by $x^\lambda = e / x$ and $x^\rho = x \backslash e$. A loop is said to have (two-sided) inverses if $x^\lambda = x^\rho$ for all x. In this case the inverse element is usually denoted by $x^{-1}$. There are some stronger notions of inverses in loops which are often useful: A loop has the left inverse property if $x^\lambda (x y) = y$ for all $x$ and $y$. Equivalently, $L_x^{-1} = L_{x^\lambda}$ or $x \backslash y = x^\lambda y$. A loop has the right inverse property if $(y x) x^\rho = y$ for all $x$ and $y$. Equivalently, $R_x^{-1} = R_{x^\rho}$ or $y / x = y x^\rho$. A loop has the antiautomorphic inverse property if $(x y)^\lambda = y^\lambda x^\lambda$ or, equivalently, if $(x y)^\rho = y^\rho x^\rho$. A loop has the weak inverse property when $(x y) z = e$ if and only if $x (y z) = e$. This may be stated in terms of inverses via $(x y)^\lambda x = y^\lambda$ or equivalently $x (y x)^\rho = y^\rho$. A loop has the inverse property if it has both the left and right inverse properties. Inverse property loops also have the antiautomorphic and weak inverse properties. In fact, any loop which satisfies any two of the above four identities has the inverse property and therefore satisfies all four. Any loop which satisfies the left, right, or antiautomorphic inverse properties automatically has two-sided inverses. Morphisms A quasigroup or loop homomorphism is a map $f \colon Q \to P$ between two quasigroups such that $f(x ∗ y) = f(x) ∗ f(y)$. Quasigroup homomorphisms necessarily preserve left and right division, as well as identity elements (if they exist). Homotopy and isotopy Let Q and P be quasigroups. A quasigroup homotopy from Q to P is a triple $(\alpha, \beta, \gamma)$ of maps from Q to P such that $\alpha(x) \beta(y) = \gamma(x ∗ y)$ for all x, y in Q. A quasigroup homomorphism is just a homotopy for which the three maps are equal. An isotopy is a homotopy for which each of the three maps is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy is given by a permutation of rows α, a permutation of columns β, and a permutation on the underlying element set γ. An autotopy is an isotopy from a quasigroup to itself. The set of all autotopies of a quasigroup form a group with the automorphism group as a subgroup. Every quasigroup is isotopic to a loop. If a loop is isotopic to a group, then it is isomorphic to that group and thus is itself a group. However, a quasigroup which is isotopic to a group need not be a group. For example, the quasigroup on $\mathbb{R}$ with multiplication given by $x ∗ y = (x + y)/2$ is isotopic to the additive group $(\mathbb{R}, +)$, but is not itself a group. Every medial quasigroup is isotopic to an abelian group by the Bruck–Toyoda theorem.
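The non-group quasigroup just mentioned is easy to experiment with. Here is a minimal plain-Python sketch (variable names illustrative) verifying that $x ∗ y = (x + y)/2$ has unique divisions but fails associativity:

```python
# The averaging operation x * y = (x + y) / 2 on the reals: a quasigroup
# isotopic to (R, +) that is not itself a group.
def op(x, y):
    return (x + y) / 2

# Unique divisions: a \ b and b / a both equal 2b - a (op is commutative).
a, b = 1.0, 5.0
z = 2 * b - a
assert op(a, z) == b and op(z, a) == b

# Associativity fails, so there is no group structure:
print(op(op(0.0, 0.0), 4.0), op(0.0, op(0.0, 4.0)))   # 2.0 vs 1.0
```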
Conjugation (parastrophe) Left and right division are examples of forming a quasigroup by permuting the variables in the defining equation. From the original operation ∗ (i.e., $x ∗ y = z$) we can form five new operations: $x \circ y := y ∗ x$ (the opposite operation), / and \, and their opposites. That makes a total of six quasigroup operations, which are called the conjugates or parastrophes of ∗. Any two of these operations are said to be "conjugate" or "parastrophic" to each other (and to themselves). Isostrophe (paratopy) If the set Q has two quasigroup operations, ∗ and ·, and one of them is isotopic to a conjugate of the other, the operations are said to be isostrophic to each other. There are also many other names for this relation of "isostrophe", e.g., paratopy. Generalizations Polyadic or multiary quasigroups An n-ary quasigroup is a set with an n-ary operation, $(Q, f)$ with $f \colon Q^n \to Q$, such that the equation $f(x_1, \ldots, x_n) = y$ has a unique solution for any one variable if all the other n variables are specified arbitrarily. Polyadic or multiary means n-ary for some nonnegative integer n. A 0-ary, or nullary, quasigroup is just a constant element of Q. A 1-ary, or unary, quasigroup is a bijection of Q to itself.
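The six parastrophes are mechanical to compute for a finite quasigroup. The sketch below (plain Python; the sample operation on Z5 is chosen only for illustration) derives the opposite operation and the two division tables from a Cayley table; the opposites of those divisions complete the six conjugates.

```python
n = 5
mul = [[(x + 2 * y) % n for y in range(n)] for x in range(n)]   # a quasigroup on Z5

def conj(select):
    """Fill the table of a conjugate operation. For each cell of the original
    table, select() says which of (row, column, value) become the new
    (row, column, value)."""
    out = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            x, y, z = select(r, c, mul[r][c])
            out[x][y] = z
    return out

opposite = conj(lambda r, c, v: (c, r, v))   # x o y = y * x
ldiv     = conj(lambda r, c, v: (r, v, c))   # ldiv[x][z] = y, where x * y = z
rdiv     = conj(lambda r, c, v: (v, c, r))   # rdiv[z][y] = x, where x * y = z
```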
One defines a quasigroup as a set with one binary operation, and the other, from universal algebra, defines a quasigroup as having three primitive operations. The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup. We begin with the first definition. Algebra A quasigroup is a non-empty set Q with a binary operation ∗ (that is, a magma, so that a quasigroup automatically satisfies the closure property), obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that both a ∗ x = b and y ∗ a = b hold. (In other words: Each element of the set occurs exactly once in each row and exactly once in each column of the quasigroup's multiplication table, or Cayley table. This property ensures that the Cayley table of a finite quasigroup, and, in particular, finite group, is a Latin square.) The uniqueness requirement can be replaced by the requirement that the magma be cancellative. The unique solutions to these equations are written $x = a \backslash b$ and $y = b / a$. The operations '\' and '/' are called, respectively, left division and right division. The empty set equipped with the empty binary operation satisfies this definition of a quasigroup. Some authors accept the empty quasigroup but others explicitly exclude it. Universal algebra Given some algebraic structure, an identity is an equation in which all variables are tacitly universally quantified, and in which all operations are among the primitive operations proper to the structure. Algebraic structures axiomatized solely by identities are called varieties. Many standard results in universal algebra hold only for varieties. Quasigroups are varieties if left and right division are taken as primitive. A quasigroup is a type (2,2,2) algebra (i.e., equipped with three binary operations) satisfying the identities: y = x ∗ (x \ y), y = x \ (x ∗ y), y = (y / x) ∗ x, y = (y ∗ x) / x. In other words: Multiplication and division in either order, one after the other, on the same side by the same element, have no net effect. Hence if $(Q, ∗)$ is a quasigroup according to the first definition, then $(Q, ∗, \backslash, /)$ is the same quasigroup in the sense of universal algebra. And vice versa: if $(Q, ∗, \backslash, /)$ is a quasigroup according to the sense of universal algebra, then $(Q, ∗)$ is a quasigroup according to the first definition. Loops A loop is a quasigroup with an identity element; that is, an element, e, such that x ∗ e = x and e ∗ x = x for all x in Q. It follows that the identity element, e, is unique, and that every element of Q has unique left and right inverses (which need not be the same). A quasigroup with an idempotent element is called a pique ("pointed idempotent quasigroup"); this is a weaker notion than a loop but common nonetheless because, for example, given an abelian group, $(A, +)$, taking its subtraction operation as quasigroup multiplication yields a pique $(A, -)$ with the group identity (zero) turned into a "pointed idempotent". (That is, there is a principal isotopy $(x, y, z) \mapsto (x, -y, z)$.) A loop that is associative is a group. A group can have a non-associative pique isotope, but it cannot have a non-associative loop isotope. There are weaker associativity properties that have been given special names. For instance, a Bol loop is a loop that satisfies either: x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q (a left Bol loop), or else ((z ∗ x) ∗ y) ∗ x = z ∗ ((x ∗ y) ∗ x) for each x, y and z in Q (a right Bol loop). A loop that is both a left and right Bol loop is a Moufang loop.
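To make the Latin square property concrete, here is a minimal Python sketch (illustrative, not from any particular library) that checks it for a finite Cayley table and reads off a left division; the example operation, subtraction mod 4, is the pique construction mentioned above.

```python
# Check the Latin square property for a finite Cayley table.
def is_quasigroup(table):
    n = len(table)
    elems = set(range(n))
    rows = all(set(row) == elems for row in table)                         # a * x = b solvable
    cols = all({table[r][c] for r in range(n)} == elems for c in range(n)) # y * a = b solvable
    return rows and cols

# Z4 under subtraction mod 4: a quasigroup (and a pique), but not a group.
sub4 = [[(x - y) % 4 for y in range(4)] for x in range(4)]
print(is_quasigroup(sub4))      # True

# Left division a \ b is the unique x with a * x = b:
a, b = 1, 3
x = sub4[a].index(b)
assert sub4[a][x] == b          # here x = 2, since (1 - 2) % 4 == 3
```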
This is equivalent to any one of the following single Moufang identities holding for all x, y, z: x ∗ (y ∗ (x ∗ z)) = ((x ∗ y) ∗ x) ∗ z, z ∗ (x ∗ (y ∗ x)) = ((z ∗ x) ∗ y) ∗ x, (x ∗ y) ∗ (z ∗ x) = x ∗ ((y ∗ z) ∗ x), or (x ∗ y) ∗ (z ∗ x) = (x ∗ (y ∗ z)) ∗ x. Symmetries Smith (2007) names the following important properties and subclasses: Semisymmetry A quasigroup is semisymmetric if the following equivalent identities hold: x ∗ y = y / x, y ∗ x = x \ y, x = (y ∗ x) ∗ y, x = y ∗ (x ∗ y). Although this class may seem special, every quasigroup Q induces a semisymmetric quasigroup QΔ on the direct product cube Q3 via the following operation: $(x_1, x_2, x_3) \cdot (y_1, y_2, y_3) = (y_3 / x_2,\, y_1 \backslash x_3,\, x_1 y_2)$, where "//" and "\\" are the conjugate division operations given by $y \mathbin{/\!/} x = x / y$ and $y \mathbin{\backslash\!\backslash} x = x \backslash y$. Triality Total symmetry A narrower class is a totally symmetric quasigroup (sometimes abbreviated TS-quasigroup) in which all conjugates coincide as one operation: $x ∗ y = x / y = x \backslash y$. Another way to define (the same notion of) totally symmetric quasigroup is as a semisymmetric quasigroup which also is commutative, i.e. $x ∗ y = y ∗ x$. Idempotent totally symmetric quasigroups are precisely (i.e. in a bijection with) Steiner triples, so such a quasigroup is also called a Steiner quasigroup, and sometimes the latter is even abbreviated as squag. The term sloop refers to an analogue for loops, namely, totally symmetric loops that satisfy $x ∗ x = 1$ instead of $x ∗ x = x$. Without idempotency, totally symmetric quasigroups correspond to the geometric notion of extended
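Total symmetry is easy to check by brute force on a small example. The operation below, $x ∗ y = (-x - y) \bmod 7$, is a standard totally symmetric quasigroup, used here purely for illustration:

```python
# A totally symmetric quasigroup: x * y = (-x - y) mod 7.
# All conjugate operations coincide with * itself.
n = 7
op = lambda x, y: (-x - y) % n

for x in range(n):
    for y in range(n):
        assert op(x, y) == op(y, x)        # commutativity
        assert op(y, op(x, y)) == x        # semisymmetry: x = y * (x * y)
        assert op(x, op(x, y)) == y        # hence x \ y = x * y (division = multiplication)
print("totally symmetric on Z7")
```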
the early republic, there were two quaestors, and their duties were maintaining the public treasury, both taking in funds and deciding whom to pay them to. This continued until 421 BCE, when the number of quaestors was doubled to four. While two continued with the same duties as those who had come before, the other two had additional responsibilities, each being in service to one of the consuls. When consuls went to war, each was assigned a quaestor. The quaestor's main responsibilities involved the distribution of war spoils between the aerarium, or public treasury, and the army. The key responsibility of the quaestor was the administration of public funds to higher-ranking officials in order to pursue their goals, whether those involved military conquests requiring funding for armies or public works projects. The office of quaestor was a position bound to a superior, whether that be a consul, governor, or other magistrate, and its duties would often reflect those of the superior. For example, Gaius Gracchus was quaestor under the consul Orestes in Sardinia, and many of his responsibilities involved leading military forces. While not in direct command of the army, the quaestor would be in charge of organizational and lesser duties that were a necessary part of the war machine. Roman Empire During the reign of the Emperor Constantine I, the office of quaestor was reorganized into a judicial position known as the quaestor sacri palatii. The office functioned as the primary legal adviser to the emperor, and was charged with the creation of laws as well as answering petitions to the emperor. From 440 onward, the office of the quaestor worked in conjunction with the praetorian prefect of the East to oversee the supreme tribunal, or supreme court, at Constantinople. There they heard appeals from the various subordinate courts and governors. Byzantine Empire Under the Emperor Justinian I, an additional office named quaestor was created to control police and judicial matters in Constantinople. In this new position, a quaestor was responsible for wills, for the supervision of complaints by tenants regarding their landlords, and finally for oversight of the homeless. Notable quaestors See also Category: Roman Quaestors. Gaius Gracchus Following the death of his brother Tiberius Gracchus, Gaius Gracchus stayed out of the political spotlight for a period until he was forced to defend a good friend of his named Vettius in court. Having heard his vocal abilities, the Senate began to fear that Gaius would arouse the people in the same manner as his brother and appointed him quaestor to Gnaeus Aufidius Orestes in Sardinia to prevent him from becoming a tribune. Gaius used his position as quaestor to defeat his enemies as well as to gain a large amount of loyalty among his troops. Following an incident in which Gaius won the support of a local village to provide for his troops, the Senate attempted to keep Gaius in Sardinia indefinitely by reappointing Orestes to stay in Sardinia. Gaius was not pleased by this and returned to Rome demanding an explanation, actions which eventually led to his election as a tribune of the people. Marcus Tullius Cicero Marcus Tullius Cicero was quaestor to the propraetor/proconsul of Sicily. He fixed major agricultural problems in the region and improved the purchase and sale of grain. The farmers after this loved Cicero and began
Latin, and phrases such as Q.E.D. were often used to conclude proofs. Perhaps the most famous use of Q.E.D. in a philosophical argument is found in the Ethics of Baruch Spinoza, published posthumously in 1677. Written in Latin, it is considered by many to be Spinoza's magnum opus. The style and system of the book are, as Spinoza says, "demonstrated in geometrical order", with axioms and definitions followed by propositions. For Spinoza, this is a considerable improvement over René Descartes's writing style in the Meditations, which follows the form of a diary. Difference from Q.E.F. There is another Latin phrase with a slightly different meaning, usually shortened similarly, but less common in use: quod erat faciendum, originating from the Greek geometers' closing ὅπερ ἔδει ποιῆσαι (hoper edei poiēsai), meaning "which had to be done". Because of the difference in meaning, the two phrases should not be confused. Euclid used the Greek original of quod erat faciendum (Q.E.F.) to close propositions that were not proofs of theorems, but constructions of geometric objects. For example, Euclid's first proposition, showing how to construct an equilateral triangle given one side, is concluded this way. English equivalent There is no common formal English equivalent, although the end of a proof may be announced with a simple statement such as "this completes the proof", "as required", "as desired", "as expected", "hence proved", "ergo", "so correct", or other similar locutions. Typographical forms used symbolically Due to the paramount importance of proofs in mathematics, mathematicians since the time of Euclid have developed conventions to demarcate the beginning and end of proofs. In printed English language texts, the formal statements of theorems, lemmas, and propositions are set in italics by tradition. The beginning of a proof usually follows immediately thereafter, and is indicated by the word "proof" in boldface or italics. On the other hand, several symbolic conventions exist to indicate the end of a proof. While some authors still use the classical abbreviation, Q.E.D., it is relatively uncommon in modern mathematical texts. Paul Halmos pioneered the use of a solid black square at the end of a proof as a Q.E.D. symbol, a practice which has become standard, although not universal. Halmos adopted this use of a symbol from magazine typography customs, in which simple geometric shapes had been used to indicate the end of an article. This symbol was later called the tombstone, the Halmos symbol, or even a halmos by mathematicians. Often the Halmos symbol is drawn on the chalkboard to signal the end of a proof during a lecture, although this practice is not so common as its use in printed text. The tombstone symbol appears in TeX as the character ■ (filled square, \blacksquare) and sometimes as a □ (hollow square, \square or \Box). In the AMS Theorem Environment for LaTeX, the hollow square is the default end-of-proof symbol. Unicode explicitly provides the "end of proof" character, U+220E (∎). Some authors use other Unicode symbols to note the end of a proof, including ▮ (U+25AE, a black vertical rectangle) and ‣ (U+2023, a triangular bullet). Other authors have adopted two forward slashes (//) or four forward slashes (////). In other cases, authors have elected to segregate proofs typographically, by displaying them as indented blocks. Modern humorous use In Joseph Heller's book Catch-22, the Chaplain, having been told to examine a forged letter allegedly signed by him (which he knew he didn't sign), verified that his name was in fact there. His investigator replied, "Then you wrote it. Q.E.D." The chaplain said he didn't write it and that it wasn't his handwriting, to which the investigator replied, "Then you signed your name in somebody else's handwriting again." In the 1978 science-fiction radio comedy The Hitchhiker's Guide to the Galaxy, and later in its television, novel, and film adaptations, "Q.E.D." is referred to in the Guide's entry for the babel fish, when it is claimed that the babel fish – which serves the "mind-bogglingly" useful purpose of being able to translate any spoken language when inserted into a person's ear – is used as evidence for the existence and non-existence of God. The exchange from the novel is as follows: "'I refuse to
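As a small illustration of the amsthm convention discussed above, the following minimal LaTeX document uses the proof environment, which prints the hollow-square end-of-proof symbol by default (the theorem text is an arbitrary example):

```latex
\documentclass{article}
\usepackage{amsthm}            % supplies the proof environment and \qedsymbol
\newtheorem{theorem}{Theorem}

\begin{document}
\begin{theorem}
There are infinitely many primes.
\end{theorem}
\begin{proof}
If $p_1, \dots, p_n$ were all of the primes, then $p_1 p_2 \cdots p_n + 1$
would have a prime factor not among them, a contradiction.
% The hollow square is printed automatically at the end of this environment;
% \renewcommand{\qedsymbol}{$\blacksquare$} would switch to the filled tombstone.
\end{proof}
\end{document}
```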
they live, and the quagga was the most southern-living of them all. Other large African ungulates diverged into separate species and subspecies during this period, as well, probably because of the same climate shift. [Simplified cladogram based on the 2005 analysis omitted; some taxa shared haplotypes and could, therefore, not be differentiated.] A 2018 genetic study of plains zebra populations confirmed the quagga as a member of that species. It found no evidence for subspecific differentiation based on morphological differences between southern populations of zebras, including the quagga. Modern plains zebra populations may have originated from southern Africa, and the quagga appears to be less divergent from neighbouring populations than the northernmost living population in northeastern Uganda. Instead, the study supported a north–south genetic continuum for plains zebras, with the Ugandan population being the most distinct. Zebras from Namibia appear to be the closest genetically to the quagga. Description The quagga is believed to have been long and tall at the shoulder. Based on measurements of skins, mares were significantly longer and slightly taller than stallions, whereas the stallions of extant zebras are the largest. Its coat pattern was unique among equids: zebra-like in the front but more like a horse in the rear. It had brown and white stripes on the head and neck, brown upper parts and a white belly, tail and legs. The stripes were boldest on the head and neck and became gradually fainter further down the body, blending with the reddish brown of the back and flanks, until disappearing along the back. It appears to have had a high degree of polymorphism, with some having almost no stripes and others having patterns similar to the extinct southern population of Burchell's zebra, where the stripes covered most of the body except for the hind parts, legs and belly. It also had a broad dark dorsal stripe on its back. It had a standing mane with brown and white stripes. The only quagga to have been photographed alive was a mare at the Zoological Society of London's zoo. Five photographs of this specimen are known, taken between 1863 and 1870. On the basis of photographs and written descriptions, many observers suggest that the stripes on the quagga were light on a dark background, unlike other zebras. The German naturalist Reinhold Rau, pioneer of the Quagga Project, claimed that this is an optical illusion: that the base colour is a creamy white and that the stripes are thick and dark. Living in the very southern end of the plains zebra's range, the quagga had a thick winter coat that moulted each year. Its skull was described as having a straight profile and a concave diastema, and as being relatively broad with a narrow occiput. Like other plains zebras, the quagga did not have a dewlap on its neck as the mountain zebra does. The 2004 morphological study found that the skeletal features of the southern Burchell's zebra population and the quagga overlapped, and that they were impossible to distinguish. Some specimens also appeared to be intermediate between the two in striping, and the extant Burchell's zebra population still exhibits limited striping. It can therefore be concluded that the two subspecies graded morphologically into each other. Today, some stuffed specimens of quaggas and southern Burchell's zebra are so similar that they are impossible to definitively identify as either, since no location data was recorded.
Behaviour and ecology The quagga was the southernmost distributed plains zebra, mainly living south of the Orange River. It was a grazer, and its habitat range was restricted to the grasslands and arid interior scrubland of the Karoo region of South Africa, today forming parts of the provinces of Northern Cape, Eastern Cape, Western Cape and the Free State. These areas were known for distinctive flora and fauna and high amounts of endemism. Quaggas have been reported gathering into herds of 30–50, and sometimes travelled in a linear fashion. They may have been sympatric with Burchell's zebra between the Vaal and Orange rivers. This is disputed, and there is no evidence that they interbred. It could also have shared a small portion of its range with Hartmann's mountain zebra (Equus zebra hartmannae). Little is known about the behaviour of quaggas in the wild, and it is sometimes unclear what exact species of zebra is referred to in old reports. The only source that unequivocally describes the quagga in the Free State is that of the British military engineer and hunter William Cornwallis Harris. His 1840 account reads as follows: The geographical range of the quagga does not appear to extend to the northward of the river Vaal. The animal was formerly extremely common within the colony; but, vanishing before the strides of civilisation, is now to be found in very limited numbers and on the borders only. Beyond, on those sultry plains which are completely taken possession of by wild beasts, and may with strict propriety be termed the domains of savage nature, it occurs in interminable herds; and, although never intermixing with its more elegant congeners, it is almost invariably to be found ranging with the white-tailed gnu and with the ostrich, for the society of which bird especially it evinces the most singular predilection. Moving slowly across the profile of the ocean-like horizon, uttering a shrill, barking neigh, of which its name forms a correct imitation, long files of quaggas continually remind the early traveller of a rival caravan on its march. Bands of many hundreds are thus frequently seen doing their migration from the dreary and desolate plains of some portion of the interior, which has formed their secluded abode, seeking for those more luxuriant pastures where, during the summer months, various herbs thrust forth their leaves and flowers to form a green carpet, spangled with hues the most brilliant and diversified. The practical function of striping in zebras has been debated, and it is unclear why the quagga lacked stripes on its hind parts. A cryptic function for protection from predators (stripes obscure the individual zebra in a herd) and from biting flies (which are less attracted to striped objects), as well as various social functions, have been proposed for zebras in general. Differences in hindquarter stripes may have aided species recognition during stampedes of mixed herds, so that members of one subspecies or species would follow their own kind. It has also been suggested that zebras developed striping patterns for thermoregulation, to cool themselves down, and that the quagga lost them because it lived in a cooler climate, although one problem with this idea is that the mountain zebra lives in similar environments and has a bold striping pattern. A 2014 study strongly supported the biting-fly hypothesis, and the quagga appears to have lived in areas with less fly activity than other zebras.
A 2020 study suggested that the sexual dimorphism in size, with quagga mares being larger than stallions, could be due to the cold and droughts that affect the Karoo plateau, conditions that were even more severe in prehistoric times, such as during ice ages (other plains zebras live in warmer areas). Isolation, cold, and aridity could thereby have affected quagga evolution, including coat colour and size dimorphism. Since plains zebra mares are pregnant or lactating for much of their lives, larger size could have been a selective advantage for quagga mares, as they would therefore have more food reserves when food was scarce. Dimorphism and coat colour could also have evolved through genetic drift due to isolation, but these influences are not mutually exclusive, and could have worked together.
Relationship with humans Quaggas have been identified in cave art attributed to the indigenous San people of Southern Africa. As it was easy to find and kill, the quagga was hunted by early Dutch settlers and later by Afrikaners to provide meat or for their skins. The skins were traded or used locally. The quagga was probably vulnerable to extinction due to its limited distribution, and it may have competed with domestic livestock for forage. Local farmers used them as guards for their livestock, as they were likely to attack intruders. Quaggas were said to be lively and highly strung, especially the stallions. Quaggas were brought to European zoos, and an attempt at captive breeding was made at London Zoo, but it was halted when a lone stallion killed itself by bashing against a wall after losing its temper. On the other hand, captive quaggas in European zoos were said to be tamer and more docile than Burchell's zebra. One specimen was reported to have lived in captivity for 21 years and 4 months, dying in 1872. The quagga was long regarded as a suitable candidate for domestication, as it was considered the most docile of the zebras. The Dutch colonists in South Africa had considered this possibility, because their imported work horses did not perform very well in the extreme climate and regularly fell prey to the feared African horse sickness.
In 1843, the English naturalist Charles Hamilton Smith wrote that the quagga was 'unquestionably best calculated for domestication, both as regards strength and docility'. There are some mentions of tame or domesticated quaggas in South Africa. In Europe, two stallions were used by the sheriff of London to pull a phaeton in the early 19th century. In an attempt at domesticating the quagga, the British lord George Douglas, 16th Earl of Morton obtained a single male, which he bred with a female horse of partial Arabian ancestry. This produced a female hybrid with stripes on its back and legs. Lord Morton's mare was sold and was subsequently bred with a black stallion, resulting in offspring that again had zebra stripes. An account of this was published in 1820 by the Royal Society. It is unknown what happened to the hybrid mare itself. This led to new ideas on telegony, referred to as pangenesis by the British naturalist Charles Darwin. At the close of the 19th century, the Scottish zoologist James Cossar Ewart argued against these ideas and proved, with several cross-breeding experiments, that zebra stripes could appear as an atavistic trait at any time. There are 23 known stuffed and mounted quagga specimens throughout the world, including a juvenile, two foals, and a foetus. In addition, a mounted head and neck, a foot, seven complete skeletons, and samples of various tissues remain. A 24th mounted specimen was destroyed in Königsberg, Germany, during World War II, and various skeletons and bones have also been lost. Extinction The quagga had disappeared from much of its range by the 1850s. The last population in the wild, in the Orange Free State, was extirpated in the late 1870s. The last known wild quagga died in 1878. The specimen in London died in 1872 and the one in Berlin in 1875. The last captive quagga, a female in Amsterdam's Natura Artis Magistra zoo, lived there from 9 May 1867 until it died on 12 August 1883, but its origin and cause of death are unclear. Its death was not recognised as signifying the extinction of its kind at the time, and the zoo requested another specimen; hunters believed it could still be found "closer to the interior" in the Cape Colony. Since locals used the term quagga to refer to all zebras,
operating systems. It is used to view picture files from the still image formats that QuickTime supports. In macOS, it is replaced by Preview. As of version 7.7.9, the Windows version requires the user to go to the Windows "Uninstall or Change a Program" screen and "modify" the QuickTime 7 installation to include the "Legacy QuickTime Feature" "QuickTime PictureViewer". File formats The native file format for QuickTime video, QuickTime File Format, specifies a multimedia container file that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (e.g. for subtitles). Each track either contains a digitally encoded media stream (using a specific format) or a data reference to the media stream located in another file. The ability to contain abstract data references for the media data, and the separation of the media data from the media offsets and the track edit lists, means that QuickTime is particularly suited for editing, as it is capable of importing and editing in place (without data copying). Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV-DIF, MP3, and MPEG program stream. With additional QuickTime Components, it can also support ASF, DivX Media Format, Flash Video, Matroska, Ogg, and many others. QuickTime and MPEG-4 On February 11, 1998, the ISO approved the QuickTime file format as the basis of the MPEG‑4 file format. The MPEG-4 file format specification was created on the basis of the QuickTime format specification and published in 2001. The MP4 (.mp4) file format was published in 2001 as a revision of the MPEG-4 Part 1: Systems specification published in 1999 (ISO/IEC 14496-1:2001). In 2003, the first version of the MP4 format was revised and replaced by MPEG-4 Part 14: MP4 file format (ISO/IEC 14496-14:2003). The MP4 file format was generalized into the ISO Base Media File Format, ISO/IEC 14496-12:2004, which defines a general structure for time-based media files. It in turn is used as the basis for other multimedia file formats (for example 3GP, Motion JPEG 2000). A list of all registered extensions for the ISO Base Media File Format is published on the official registration authority website www.mp4ra.org. The registration authority for code-points in "MP4 Family" files is Apple Computer Inc., which is named in Annex D (informative) of MPEG-4 Part 12. MPEG-4 formats subsequently became industry standards, with support first appearing in QuickTime 6 in 2002. Accordingly, the MPEG-4 container is designed to capture, edit, archive, and distribute media, unlike the simple file-as-stream approach of MPEG-1 and MPEG-2. Profile support QuickTime 6 added limited support for MPEG-4, specifically encoding and decoding using Simple Profile (SP). Advanced Simple Profile (ASP) features, like B-frames, were unsupported (in contrast with, for example, encoders such as XviD or 3ivx). QuickTime 7 supports the H.264 encoder and decoder. Container benefits Because both MOV and MP4 containers can use the same MPEG-4 codecs, they are mostly interchangeable in a QuickTime-only environment. MP4, being an international standard, has more support. This is especially true on hardware devices, such as the Sony PSP and various DVD players. On the software side, most DirectShow / Video for Windows codec packs include an MP4 parser, but not one for MOV. In QuickTime Pro's MPEG-4 Export dialog, an option called "Passthrough" allows a clean export to MP4 without affecting the audio or video streams.
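The container structure described above is easy to inspect directly: both QuickTime (.mov) files and MP4 files are sequences of length-prefixed "atoms" (boxes), each starting with a 4-byte big-endian size and a 4-byte type code. The following is a minimal, hedged Python sketch of a top-level atom walker; the filename is hypothetical and error handling is omitted.

```python
import struct

def list_atoms(path):
    """Print the type and size of each top-level atom in a MOV/MP4 file."""
    with open(path, "rb") as f:
        while (header := f.read(8)) and len(header) == 8:
            size, kind = struct.unpack(">I4s", header)
            if size == 1:                       # 64-bit extended size follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)            # 16 bytes of header already consumed
            elif size == 0:                     # atom extends to the end of the file
                print(kind.decode("latin-1"), "(to end of file)")
                break
            else:
                f.seek(size - 8, 1)             # skip the atom's payload
            print(kind.decode("latin-1"), size)

list_atoms("example.mov")   # hypothetical filename; typically prints ftyp, moov, mdat, ...
```

The 'moov' atom holds the track and edit-list metadata, while 'mdat' holds the media stream itself, which is what makes the "data reference" and edit-in-place behavior described above possible.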
QuickTime 7 supports multi-channel AAC-LC and HE-AAC audio (used, for example, in the high-definition trailers on Apple's site), for both .MOV and .MP4 containers. History Apple released the first version of QuickTime on December 2, 1991 as a multimedia add-on for System 6 and later. The lead developer of QuickTime, Bruce Leak, ran the first public demonstration at the May 1991 Worldwide Developers Conference, where he played Apple's famous 1984 advertisement in a window at 320×240 pixels resolution. QuickTime 1.x The original video codecs included: the Animation codec, which used run-length encoding and was better suited to cartoon-type images with large areas of flat color; the Apple Video codec (also known as "Road Pizza"), suited to normal live-action video; and the Graphics codec, for 8-bit images, including ones that had undergone dithering. The first commercial project produced using QuickTime 1.0 was the CD-ROM From Alice to Ocean. The first publicly visible use of QuickTime was Ben & Jerry's interactive factory tour (dubbed The Rik & Joe Show after its in-house developers). The Rik and Joe Show was demonstrated onstage at MacWorld in San Francisco when John Sculley announced QuickTime. Apple released QuickTime 1.5 for Mac OS in the latter part of 1992. This added the SuperMac-developed Cinepak vector-quantization video codec (initially known as Compact Video). It could play video at 320×240 resolution at 30 frames per second on a 25 MHz Motorola 68040 CPU. It also added text tracks, which allowed for captioning, lyrics and other potential uses. Apple contracted San Francisco Canyon Company to port QuickTime to the Windows platform. Version 1.0 of QuickTime for Windows provided only a subset of the full QuickTime API, including only movie playback functions driven through the standard movie controller. QuickTime 1.6 came out the following year. Version 1.6.2 first incorporated the "QuickTime PowerPlug" which replaced some components with PowerPC-native code when running on PowerPC Macs. QuickTime 2.x Apple released QuickTime 2.0 for System Software 7 in June 1994—the only version never released for free. It added support for music tracks, which contained the equivalent of MIDI data and which could drive a sound-synthesis engine built into QuickTime itself (using a limited set of instrument sounds licensed from Roland), or any external MIDI-compatible hardware, thereby producing sounds using only small amounts of movie data. Following Bruce Leak's departure to Web TV, the leadership of the QuickTime team was taken over by Peter Hoddie. QuickTime 2.0 for Windows appeared in November 1994 under the leadership of Paul Charlton. As part of the development effort for cross-platform QuickTime, Charlton (as architect and technical lead), along with ace individual contributor Michael Kellner and a small highly effective team including Keith Gurganus, ported a subset of the Macintosh Toolbox to Intel and other platforms (notably, MIPS and SGI Unix variants) as the enabling infrastructure for the QuickTime Media Layer (QTML) which was first demonstrated at the Apple Worldwide Developers Conference (WWDC) in May 1996. The QTML later became the foundation for the Carbon API which allowed legacy Macintosh applications to run on the Darwin kernel in Mac OS X. The next versions, 2.1 and 2.5, reverted to the previous model of giving QuickTime away for free.
They improved the music support and added sprite tracks, which allowed the creation of complex animations while adding little more than the static sprite images to the size of the movie. QuickTime 2.5 also fully integrated QuickTime VR 2.0.1 into QuickTime as a QuickTime extension. On January 16, 1997, Apple released the QuickTime MPEG Extension (PPC only) as an add-on to QuickTime 2.5, which added software MPEG-1 playback capabilities to QuickTime. Lawsuit against San Francisco Canyon In 1994, Apple filed suit against software developer San Francisco Canyon for intellectual property infringement and breach of contract. Apple alleged that San Francisco Canyon had helped develop Video for Windows using several hundred lines of unlicensed QuickTime source code. The company had been contracted by Intel to help make Video for Windows better use system resources on Intel processors; the disputed code was subsequently removed. Microsoft and Intel were added to the lawsuit in 1995. The suit ended in a settlement in 1997. QuickTime 3.x The release of QuickTime 3.0 for Mac OS on March 30, 1998 introduced the now-standard revenue model of releasing the software for free, but with additional features of the Apple-provided MoviePlayer application that end-users could only unlock by buying a QuickTime Pro license code. Since the "Pro" features were the same as the existing features in QuickTime 2.5, any previous user of QuickTime could continue to use an older version of the central MoviePlayer application for the remaining lifespan of Mac OS to 2002; indeed, since these additional features were limited to MoviePlayer, any other QuickTime-compatible application remained unaffected. QuickTime 3.0 added support for graphics importer components that could read images from GIF, JPEG, TIFF and other file formats, and video output components which served primarily to export movie data via FireWire. Apple also licensed several third-party technologies for inclusion in QuickTime 3.0, including the Sorenson Video codec for advanced video compression, the QDesign Music codec for substantial audio compression, and the complete Roland Sound Canvas instrument set and GS Format extensions for improved playback of MIDI music files. It also added video effects which programmers could apply in real-time to video tracks. Some of these effects would even respond to mouse clicks by the user, as part of the new movie interaction support (known as wired movies). QuickTime interactive During the development cycle for QuickTime 3.0, part of the engineering team was working on a more advanced version of QuickTime to be known as QuickTime interactive or QTi. Although similar in concept to the wired movies feature released as part of QuickTime 3.0, QuickTime interactive was much more ambitious. It allowed any QuickTime movie to be a fully interactive and programmable container for media. A special track type was added that contained an interpreter for a custom programming language based on 68000 assembly language. This supported a comprehensive user interaction model for mouse and keyboard event handling based in part on the AML language from the Apple Media Tool. The QuickTime interactive movie was to have been the playback format for the next generation of the HyperCard authoring tool. Both the QuickTime interactive and the HyperCard 3.0 projects were canceled in order to concentrate engineering resources on streaming support for QuickTime 4.0, and the projects were never released to the public.
QuickTime 4.x Apple released QuickTime 4.0 on June 8, 1999 for Mac OS 7.5.5 through 8.6 (later Mac OS 9) and Windows 95, Windows 98, and Windows NT. Three minor updates (versions 4.0.1, 4.0.2, and 4.0.3) followed. It introduced features that most users now consider basic: Graphics exporter components, which could write some of the same formats that the previously introduced importers could read. (GIF support was omitted, possibly because of the LZW patent.) Support for QDesign Music 2 and MPEG-1 Layer 3 (MP3) audio. QuickTime 4 was the first version to support streaming. It was accompanied by the release of the free QuickTime Streaming Server version 1.0. QuickTime 4 Player introduced brushed metal to the Macintosh user interface. On December 17, 1999, Apple released QuickTime 4.1, this version's first major update. Two minor versions (4.1.1 and 4.1.2) followed. The most notable improvements in the 4.1.x family were: Support for files larger than 2.0 GB in Mac OS 9. (This is a consequence of Mac OS 9 requiring the HFS Plus filesystem.) Variable bit rate (VBR) support for MPEG-1 Layer 3 (MP3) audio. Support for Synchronized Multimedia Integration Language (SMIL). Introduction of AppleScript support in Mac OS. The requirement of a PowerPC processor for Mac OS systems. QuickTime 4.1 dropped support for Motorola 68k Macintosh systems. QuickTime 5.x QuickTime 5 was one of the shortest-lived versions of QuickTime, released in April 2001 and superseded by QuickTime 6 a little over a year later. This version was the last to have greater capabilities under Mac OS 9 than under Mac OS X, and the last version of QuickTime to support Mac OS versions 7.5.5 through 8.5.1 on a PowerPC Mac and Windows 95. Version 5.0 was initially only released for Mac OS and Mac OS X on April 14, 2001, and version 5.0.1 followed shortly thereafter on April 23, 2001, supporting the classic Mac OS, Mac OS X, and Windows. Three more updates to QuickTime 5 (versions 5.0.2, 5.0.4, and 5.0.5) were released over its short lifespan. QuickTime 5 delivered the following enhancements: MPEG-1 playback for Windows, and updated MPEG-1 Layer 3 audio support for all systems. Sorenson Video 3 playback and export (added with the 5.0.2 update). Real-time rendering of effects & transitions in DV files, including enhancements to DV rendering, multiprocessor support, and AltiVec enhancements for PowerPC G4 systems. Flash 4 playback and export. A new QuickTime VR engine, adding support for cubic VR panoramas. QuickTime 6.x On July 15, 2002, Apple released QuickTime 6.0, providing the following features: MPEG-4 playback, import, and export, including MPEG-4 Part 2 video and AAC audio. Support for Flash 5, JPEG 2000, and improved Exif handling. Instant-on streaming playback. MPEG-2 playback (via the purchase of Apple's MPEG-2 Playback Component). Scriptable ActiveX control. QuickTime 6 was initially available for Mac OS 8.6 – 9.x, Mac OS X (10.1.5 minimum), and Windows 98, Me, 2000, and XP. Development of QuickTime 6 for Mac OS slowed considerably in early 2003, after the release of Mac OS X v10.2 in August 2002. QuickTime 6 for Mac OS continued on the 6.0.x path, eventually stopping with version 6.0.3. QuickTime 6.1 & 6.1.1 for Mac OS X v10.1 and Mac OS X v10.2 (released October 22, 2002) and QuickTime 6.1 for Windows (released March 31, 2003) offered ISO-compliant MPEG-4 file creation and fixed the CAN-2003-0168 vulnerability.
Apple released QuickTime 6.2 exclusively for Mac OS X on April 29, 2003 to provide support for iTunes 4, which allowed AAC encoding for songs in the iTunes library. (iTunes was not available for Windows until October 2003.) On June 3, 2003, Apple released QuickTime 6.3, delivering the following: Support for 3GPP, including 3G Text, video, and audio (AAC and AMR codecs). Support for the .3gp, .amr, and .sdv file formats via separate component. QuickTime 6.4, released on October 16, 2003 for Mac OS X v10.2, Mac OS X v10.3, and Windows, added the following: Addition of the Apple Pixlet codec (only for Mac OS X v10.3 and later). ColorSync support. Integrated 3GPP. On December 18, 2003, Apple released QuickTime 6.5, supporting the same systems as version 6.4. Versions 6.5.1 and 6.5.2 followed on April 28, 2004 and October 27, 2004. These versions would be the last to support Windows 98 and Me. The 6.5 family added the following features: 3GPP2 and AMC mobile multimedia formats. QCELP voice codec. Apple Lossless (in version 6.5.1). QuickTime 6.5.3 was released on October 12, 2005 for Mac OS X v10.2.8 after the release of QuickTime 7.0, fixing a number of security issues. QuickTime 7.x Initially released on April 29, 2005 in conjunction with Mac OS X v10.4 (for version 10.3.9 and 10.4.x), QuickTime 7.0 featured the following: Improved MPEG-4 compliance. An H.264/MPEG-4 AVC codec (does not support the AVCHD H.264 AVC format from Sony HD camcorders). Support for Core Audio, a set of application programming interfaces that supports high-resolution sound and replaces Sound Manager. Support for using Core Image filters in Mac OS X v10.4 on live video (not to be confused with Core Video). Support for Quartz Composer (.qtz) animations. Support for distinct decode order and display order. QuickTime Kit Framework (QTKit), a Cocoa framework for QuickTime. After a couple of preview Windows releases, Apple released 7.0.2 as the first stable release on September 7, 2005 for Windows 2000 and Windows XP. Version 7.0.4, released on January 10, 2006, was the first universal binary version. But it suffered numerous bugs, including
Apple also licensed several third-party technologies for inclusion in QuickTime 3.0, including the Sorenson Video codec for advanced video compression, the QDesign Music codec for substantial audio compression, and the complete Roland Sound Canvas instrument set and GS Format extensions for improved playback of MIDI music files. It also added video effects which programmers could apply in real-time to video tracks. Some of these effects would even respond to mouse clicks by the user, as part of the new movie interaction support (known as wired movies). QuickTime interactive During the development cycle for QuickTime 3.0, part of the engineering team was working on a more advanced version of QuickTime to be known as QuickTime interactive or QTi. Although similar in concept to the wired movies feature released as part of QuickTime 3.0, QuickTime interactive was much more ambitious. It allowed any QuickTime movie to be a fully interactive and programmable container for media. A special track type was added that contained an interpreter for a custom programming language based on 68000 assembly language. This supported a comprehensive user interaction model for mouse and keyboard event handling, based in part on the AML language from the Apple Media Tool. The QuickTime interactive movie was to have been the playback format for the next generation of the HyperCard authoring tool. Both the QuickTime interactive and HyperCard 3.0 projects were canceled in order to concentrate engineering resources on streaming support for QuickTime 4.0, and neither was ever released to the public. QuickTime 4.x Apple released QuickTime 4.0 on June 8, 1999 for Mac OS 7.5.5 through 8.6 (later Mac OS 9) and Windows 95, Windows 98, and Windows NT. Three minor updates (versions 4.0.1, 4.0.2, and 4.0.3) followed. It introduced features that most users now consider basic: Graphics exporter components, which could write some of the same formats that the previously introduced importers could read. (GIF support was omitted, possibly because of the LZW patent.) Support for QDesign Music 2 and MPEG-1 Layer 3 (MP3) audio. QuickTime 4 was the first version to support streaming. It was accompanied by the release of the free QuickTime Streaming Server version 1.0. QuickTime 4 Player introduced brushed metal to the Macintosh user interface. On December 17, 1999, Apple released QuickTime 4.1, the first major update of the 4.x series. Two minor versions (4.1.1 and 4.1.2) followed. The most notable improvements in the 4.1.x family were: Support for files larger than 2.0 GB in Mac OS 9. (This is a consequence of Mac OS 9 requiring the HFS Plus filesystem.) Variable bit rate (VBR) support for MPEG-1 Layer 3 (MP3) audio. Support for Synchronized Multimedia Integration Language (SMIL). Introduction of AppleScript support in Mac OS. The requirement of a PowerPC processor for Mac OS systems: QuickTime 4.1 dropped support for Motorola 68k Macintosh systems. QuickTime 5.x QuickTime 5 was one of the shortest-lived versions of QuickTime, released in April 2001 and superseded by QuickTime 6 a little over a year later. This version was the last to have greater capabilities under Mac OS 9 than under Mac OS X, and the last version of QuickTime to support Mac OS versions 7.5.5 through 8.5.1 on a PowerPC Mac and Windows 95. Version 5.0 was initially only released for Mac OS and Mac OS X on April 14, 2001, and version 5.0.1 followed shortly thereafter on April 23, 2001, supporting the classic Mac OS, Mac OS X, and Windows.
Three more updates to QuickTime 5 (versions 5.0.2, 5.0.4, and 5.0.5) were released over its short lifespan. QuickTime 5 delivered the following enhancements: MPEG-1 playback for Windows, and updated MPEG-1 Layer 3 audio support for all systems. Sorenson Video 3 playback and export (added with the 5.0.2 update). Real-time rendering of effects and transitions in DV files, including enhancements to DV rendering, multiprocessor support, and AltiVec enhancements for PowerPC G4 systems. Flash 4 playback and export. A new QuickTime VR engine, adding support for cubic VR panoramas. QuickTime 6.x On July 15, 2002, Apple released QuickTime 6.0, providing the following features: MPEG-4 playback, import, and export, including MPEG-4 Part 2 video and AAC audio. Support for Flash 5, JPEG 2000, and improved Exif handling. Instant-on streaming playback. MPEG-2 playback (via the purchase of Apple's MPEG-2 Playback Component). Scriptable ActiveX control. QuickTime 6 was initially available for Mac OS 8.6 – 9.x, Mac OS X (10.1.5 minimum), and Windows 98, Me, 2000, and XP. Development of QuickTime 6 for Mac OS slowed considerably in early 2003, after the release of Mac OS X v10.2 in August 2002. QuickTime 6 for Mac OS continued on the 6.0.x path, eventually stopping with version 6.0.3. QuickTime 6.1 & 6.1.1 for Mac OS X v10.1 and Mac OS X v10.2 (released October 22, 2002) and QuickTime 6.1 for Windows (released March 31, 2003) offered ISO-compliant MPEG-4 file creation and fixed the CAN-2003-0168 vulnerability. Apple released QuickTime 6.2 exclusively for Mac OS X on April 29, 2003 to provide support for iTunes 4, which allowed AAC encoding for songs in the iTunes library. (iTunes was not available for Windows until October 2003.) On June 3, 2003, Apple released QuickTime 6.3, delivering the following: Support for 3GPP, including 3G Text, video, and audio (AAC and AMR codecs). Support for the .3gp, .amr, and .sdv file formats via separate component. QuickTime 6.4, released on October 16, 2003 for Mac OS X v10.2, Mac OS X v10.3, and Windows, added the following: Addition of the Apple Pixlet codec (only for Mac OS X v10.3 and later). ColorSync support. Integrated 3GPP. On December 18, 2003, Apple released QuickTime 6.5, supporting the same systems as version 6.4. Versions 6.5.1 and 6.5.2 followed on April 28, 2004 and October 27, 2004. These versions would be the last to support Windows 98 and Me. The 6.5 family added the following features: 3GPP2 and AMC mobile multimedia formats. QCELP voice codec. Apple Lossless (in version 6.5.1). QuickTime 6.5.3 was released on October 12, 2005 for Mac OS X v10.2.8 after the release of QuickTime 7.0, fixing a number of security issues. QuickTime 7.x Initially released on April 29, 2005 in conjunction with Mac OS X v10.4 (for versions 10.3.9 and 10.4.x), QuickTime 7.0 featured the following: Improved MPEG-4 compliance. An H.264/MPEG-4 AVC codec (does not support the AVCHD H.264 AVC format from Sony HD camcorders). Support for Core Audio, a set of application programming interfaces that supports high-resolution sound and replaces Sound Manager. Support for using Core Image filters in Mac OS X v10.4 on live video (not to be confused with Core Video). Support for Quartz Composer (.qtz) animations. Support for distinct decode order and display order. QuickTime Kit Framework (QTKit), a Cocoa framework for QuickTime. After a couple of preview Windows releases, Apple released 7.0.2 as the first stable release on September 7, 2005 for Windows 2000 and Windows XP.
Version 7.0.4, released on January 10, 2006, was the first universal binary version, but it suffered from numerous bugs, including a buffer overrun. Apple dropped support for Windows 2000 with the release of QuickTime 7.2 on July 11, 2007. The last version available for Windows 2000, 7.1.6, contains numerous security vulnerabilities. References to this version have been removed from the QuickTime site, but it can be downloaded from Apple's support section. Apple has not indicated that it will provide any further security updates for older versions. QuickTime 7.2 is the first version for Windows Vista. Apple dropped support for Flash content in QuickTime 7.3, breaking content that relied on Flash for interactivity or animation tracks. Security concerns appear to have been part of the decision. Flash (.flv) files can still be played in QuickTime if the free Perian plugin is added. QuickTime 7.3 requires a processor that supports SSE; QuickTime 7.4 does not require SSE. Unlike versions 7.2 and 7.3, QuickTime 7.4 cannot be installed on Windows XP without service packs or with Service Pack 1/1A installed (its setup program checks whether Service Pack 2 is installed). QuickTime 7.5 was released on June 10, 2008. QuickTime 7.5.5, released on September 9, 2008, requires Mac OS X v10.4 or higher, dropping 10.3 support. QuickTime 7.6 was released on January 21, 2009. QuickTime 7.7 was released on August 3, 2011. QuickTime 7.6.6 is available for OS X from 10.6.3 Snow Leopard through 10.14 Mojave; 10.15 Catalina supports only 64-bit applications. There is a 7.7 release of QuickTime 7 for OS X, but it is only for Leopard 10.5. QuickTime 7.7.6 is the last release for Windows XP; as with every version since 7.4, it can be installed only when Service Pack 2 or 3 is present. QuickTime 7.7.9 is the last Windows release of QuickTime; Apple stopped supporting QuickTime on Windows afterwards. Safari 12, released on September 17, 2018 for macOS Sierra and macOS High Sierra (and included as the default browser on macOS Mojave, released on September 24, 2018), dropped support for NPAPI plug-ins (except for Adobe Flash), and with them its support for QuickTime 7's web plugin. On September 24, 2018, Apple dropped support for the macOS version of QuickTime 7. This effectively marked the end of QuickTime 7 technology in Apple's codec and web development. Starting with macOS Catalina, QuickTime 7 applications and its image, audio, and video codecs are no longer compatible with macOS or supported by Apple. QuickTime X (QuickTime Player v10.x) QuickTime X (pronounced QuickTime Ten) was initially demonstrated at WWDC on June 8, 2009, and shipped with Mac OS X v10.6. It includes visual chapters, conversion, sharing to YouTube, video editing, capture of video and audio streams, screen recording, GPU acceleration, and live streaming. However, it removed support for various widely used formats; in particular, the omission of MIDI caused significant inconvenience to many musicians and their potential audiences. In addition, a screen recorder is included, which records whatever is on the screen. However, it cannot capture certain content protected by digital rights management, such as iTunes/Apple TV video purchases or any other content protected by Apple's FairPlay DRM technology. While Safari uses FairPlay, Google Chrome and Firefox use Widevine for DRM, whose content is not protected from QuickTime screen capture.
The jump in numbering from 7 to 10 (X) was meant to signal the same kind of break with previous versions of the product that the name Mac OS X had signaled. QuickTime X is fundamentally different from previous versions in that it is provided as a Cocoa (Objective-C) framework and breaks compatibility with the C-based APIs used through QuickTime 7 (a minimal sketch of the old C API follows below). QuickTime X was completely rewritten to implement modern audio and video codecs in 64-bit. QuickTime X is a combination of two technologies: QuickTime Kit
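To make the scale of that break concrete, the following is a minimal, illustrative sketch of how a movie was typically opened through the classic C-based Movie Toolbox that QuickTime X abandoned. It is written in the style of Apple's old sample code rather than excerpted from it; the FSSpec plumbing shown is only one of several possible variants, and error handling is abbreviated.

/* Sketch: opening a movie with the QuickTime 7-era C API (Movie Toolbox).
 * Illustrative only; assumes the pre-QuickTime X QuickTime headers. */
#include <QuickTime/QuickTime.h>

static Movie OpenMovieAtPath(const char *path)
{
    FSRef  ref;
    FSSpec spec;
    short  resRefNum = 0;
    short  resID     = 0;      /* 0 = first movie resource in the file */
    Movie  movie     = NULL;

    EnterMovies();             /* initialize the Movie Toolbox */

    /* Translate a POSIX path into the FSSpec the old API expects. */
    FSPathMakeRef((const UInt8 *)path, &ref, NULL);
    FSGetCatalogInfo(&ref, kFSCatInfoNone, NULL, NULL, &spec, NULL);

    if (OpenMovieFile(&spec, &resRefNum, fsRdPerm) == noErr) {
        NewMovieFromFile(&movie, resRefNum, &resID, NULL,
                         newMovieActive, NULL);
        CloseMovieFile(resRefNum);
    }
    return movie;              /* NULL on failure */
}

Under the QTKit framework that replaced this, the same operation collapsed to a single Objective-C message such as [QTMovie movieWithFile:error:], which is why existing C-based QuickTime code could not simply be recompiled against QuickTime X.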
Quoin or Du Quoin may also refer to: Places Quoin Bluff, Western Australia Quoin Hill Airfield, Vanuatu Du Quoin, Illinois, USA Du Quoin station Du Quoin State Fairgrounds DuQuoin
Milky quartz Milk quartz or milky quartz is the most common variety of crystalline quartz. The white color is caused by minute fluid inclusions of gas, liquid, or both, trapped during crystal formation, making it of little value for optical and quality gemstone applications. Rose quartz Rose quartz is a type of quartz which exhibits a pale pink to rose red hue. The color is usually attributed to trace amounts of titanium, iron, or manganese in the material. Some rose quartz contains microscopic rutile needles which produce an asterism in transmitted light. Recent X-ray diffraction studies suggest that the color is due to thin microscopic fibers of possibly dumortierite within the quartz. Additionally, there is a rare type of pink quartz (also frequently called crystalline rose quartz) with color that is thought to be caused by trace amounts of phosphate or aluminium. The color in crystals is apparently photosensitive and subject to fading. The first crystals were found in a pegmatite near Rumford, Maine, US and in Minas Gerais, Brazil. Smoky quartz Smoky quartz is a gray, translucent version of quartz. It ranges in clarity from almost complete transparency to a brownish-gray crystal that is almost opaque. Some can also be black. The translucency results from natural irradiation acting on minute traces of aluminum in the crystal structure. Prasiolite Prasiolite, also known as vermarine, is a variety of quartz that is green in color. Since 1950, almost all natural prasiolite has come from a small Brazilian mine, but it is also seen in Lower Silesia in Poland. Naturally occurring prasiolite is also found in the Thunder Bay area of Canada. It is a rare mineral in nature; most green quartz is heat-treated amethyst. Synthetic and artificial treatments Not all varieties of quartz are naturally occurring. Some clear quartz crystals can be treated using heat or gamma-irradiation to induce color where it would not otherwise have occurred naturally. Susceptibility to such treatments depends on the location from which the quartz was mined. Prasiolite, an olive colored material, is produced by heat treatment; natural prasiolite has also been observed in Lower Silesia in Poland. Although citrine occurs naturally, the majority is the result of heat-treating amethyst or smoky quartz. Carnelian has been heat-treated to deepen its color since prehistoric times. Because natural quartz is often twinned, synthetic quartz is produced for use in industry. Large, flawless, single crystals are synthesized in an autoclave via the hydrothermal process. Like other crystals, quartz may be coated with metal vapors to give it an attractive sheen. Occurrence Quartz is a defining constituent of granite and other felsic igneous rocks. It is very common in sedimentary rocks such as sandstone and shale. It is a common constituent of schist, gneiss, quartzite and other metamorphic rocks. Quartz has the lowest potential for weathering in the Goldich dissolution series and consequently it is very common as a residual mineral in stream sediments and residual soils. Generally a high presence of quartz suggests a "mature" rock, since it indicates the rock has been heavily reworked and quartz was the primary mineral that endured heavy weathering. While the majority of quartz crystallizes from molten magma, quartz also chemically precipitates from hot hydrothermal veins as gangue, sometimes with ore minerals like gold, silver and copper. Large crystals of quartz are found in magmatic pegmatites.
Well-formed crystals may reach several meters in length and weigh hundreds of kilograms. Naturally occurring quartz crystals of extremely high purity, necessary for the crucibles and other equipment used for growing silicon wafers in the semiconductor industry, are expensive and rare. A major mining location for high purity quartz is the Spruce Pine Gem Mine in Spruce Pine, North Carolina, United States. Quartz may also be found at Caldoveiro Peak, in Asturias, Spain. The largest documented single crystal of quartz was found near Itapore, Goiás, Brazil; it measured approximately 6.1×1.5×1.5 m and weighed 39,916 kilograms. Mining Quartz is extracted from open pit mines. Miners occasionally use explosives to expose deep pockets of quartz. More frequently, bulldozers and backhoes are used to remove soil and clay and expose quartz veins, which are then worked using hand tools. Care must be taken to avoid sudden temperature changes that may damage the crystals. Almost all the industrial demand for quartz crystal (used primarily in electronics) is met with synthetic quartz produced by the hydrothermal process. However, synthetic crystals are less prized for use as gemstones. The popularity of crystal healing has increased the demand for natural quartz crystals, which are now often mined in developing countries using primitive mining methods, sometimes involving child labor. Related silica minerals Tridymite and cristobalite are high-temperature polymorphs of SiO2 that occur in high-silica volcanic rocks. Coesite is a denser polymorph of SiO2 found in some meteorite impact sites and in metamorphic rocks formed at pressures greater than those typical of the Earth's crust. Stishovite is a yet denser and higher-pressure polymorph of SiO2 found in some meteorite impact sites. Lechatelierite is an amorphous silica glass SiO2 which is formed by lightning strikes in quartz sand. Safety As quartz is a form of silica, it is a possible cause for concern in various workplaces. Cutting, grinding, chipping, sanding, drilling, and polishing natural and manufactured stone products can release hazardous levels of very small, crystalline silica dust particles into the air that workers breathe. Crystalline silica of respirable size is a recognized human carcinogen and may lead to other diseases of the lungs such as silicosis and pulmonary fibrosis. History The word "quartz" comes from the German Quarz, which is of Slavic origin (Czech miners called it křemen). Other sources attribute the word's origin to the Saxon word Querkluftertz, meaning cross-vein ore. Quartz is the most common material identified as the mystical substance maban in Australian Aboriginal mythology. It is found regularly in passage tomb cemeteries in Europe in a burial context,
only one termination pyramid is present. However, doubly terminated crystals do occur where they develop freely without attachment, for instance, within gypsum. α-quartz crystallizes in the trigonal crystal system, space group P3121 or P3221 (space group 152 or 154, respectively) depending on the chirality. Above about 573 °C, α-quartz in P3121 becomes the more symmetric hexagonal P6422 (space group 181), and α-quartz in P3221 goes to space group P6222 (no. 180). These space groups are truly chiral (they each belong to the 11 enantiomorphous pairs). Both α-quartz and β-quartz are examples of chiral crystal structures composed of achiral building blocks (SiO4 tetrahedra in the present case). The transformation between α- and β-quartz only involves a comparatively minor rotation of the tetrahedra with respect to one another, without a change in the way they are linked; these pairings are summarized in the display at the end of this quartz discussion. Varieties (according to microstructure) Although many of the varietal names historically arose from the color of the mineral, current scientific naming schemes refer primarily to the microstructure of the mineral. Color is a secondary identifier for the cryptocrystalline minerals, although it is a primary identifier for the macrocrystalline varieties. Varieties (according to color) Pure quartz, traditionally called rock crystal or clear quartz, is colorless and transparent or translucent, and has often been used for hardstone carvings, such as the Lothair Crystal. Common colored varieties include citrine, rose quartz, amethyst, smoky quartz, milky quartz, and others. These color differentiations arise from the presence of impurities which change the molecular orbitals, causing some electronic transitions to take place in the visible spectrum, producing colors. The most important distinction between types of quartz is that of macrocrystalline (individual crystals visible to the unaided eye) and the microcrystalline or cryptocrystalline varieties (aggregates of crystals visible only under high magnification). The cryptocrystalline varieties are either translucent or mostly opaque, while the transparent varieties tend to be macrocrystalline. Chalcedony is a cryptocrystalline form of silica consisting of fine intergrowths of both quartz, and its monoclinic polymorph moganite. Other opaque gemstone varieties of quartz, or mixed rocks including quartz, often including contrasting bands or patterns of color, are agate, carnelian or sard, onyx, heliotrope, and jasper. Amethyst Amethyst is a form of quartz that ranges from a bright vivid violet to dark or dull lavender shade. The world's largest deposits of amethysts can be found in Brazil, Mexico, Uruguay, Russia, France, Namibia and Morocco. Sometimes amethyst and citrine are found growing in the same crystal. It is then referred to as ametrine. An amethyst derives its color from traces of iron in its structure. Blue quartz Blue quartz contains inclusions of fibrous magnesio-riebeckite or crocidolite. Dumortierite quartz Inclusions of the mineral dumortierite within quartz pieces often result in silky-appearing splotches with a blue hue. Shades of purple or grey sometimes also are present. "Dumortierite quartz" (sometimes called "blue quartz") will sometimes feature contrasting light and dark color zones across the material. "Blue quartz" is a minor gemstone. Citrine Citrine is a variety of quartz whose color ranges from a pale yellow to brown due to a submicroscopic distribution of colloidal ferric hydroxide impurities.
Natural citrines are rare; most commercial citrines are heat-treated amethysts or smoky quartzes. However, a heat-treated amethyst will have small lines in the crystal, as opposed to a natural citrine's cloudy or smoky appearance. It is nearly impossible to differentiate between cut citrine and yellow topaz visually, but they differ in hardness. Brazil is the leading producer of citrine, with much of its production coming from the state of Rio Grande do Sul. The name is derived from the Latin word citrina which means "yellow" and is also the origin of the word "citron". Sometimes citrine and amethyst can be found together in the same crystal, which is then referred to as ametrine. Citrine has been referred to as the "merchant's stone" or "money stone", due to a superstition that it would bring prosperity. Citrine was first appreciated as a golden-yellow gemstone in Greece between 300 and 150 BC, during the Hellenistic Age. The yellow quartz was used prior to that to decorate jewelry and tools but it was not highly sought after.
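The α–β space-group pairings described earlier in this quartz discussion can be summarized compactly. The transition temperature of about 573 °C (at ambient pressure) is the standard handbook value, supplied here rather than taken from the text above:

\[ \alpha\text{-quartz}\ (P3_{1}21,\ \text{no. } 152)\ \xrightarrow{\ \approx 573\,^{\circ}\mathrm{C}\ }\ \beta\text{-quartz}\ (P6_{4}22,\ \text{no. } 181) \]
\[ \alpha\text{-quartz}\ (P3_{2}21,\ \text{no. } 154)\ \xrightarrow{\ \approx 573\,^{\circ}\mathrm{C}\ }\ \beta\text{-quartz}\ (P6_{2}22,\ \text{no. } 180) \]

Each arrow pairs an α enantiomorph with the β enantiomorph it becomes, involving only a small rotation of the SiO4 tetrahedra and no change in the way they are linked.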
another quantity, magnitudes as either stationary or in motion. Arithmetic, then, studies quantities as such, music the relations between quantities, geometry magnitude at rest, spherics [astronomy] magnitude inherently moving. Medieval usage At many medieval universities, this would have been the course leading to the degree of Master of Arts (after the BA). After the MA, the student could enter for bachelor's degrees of the higher faculties (Theology, Medicine or Law). To this day, some of the postgraduate degree courses lead to the degree of Bachelor (the B.Phil and B.Litt. degrees are examples in the field of philosophy). The study was eclectic, approaching the philosophical objectives sought by considering it from each aspect of the quadrivium within the general structure demonstrated by Proclus (AD 412–485), namely arithmetic and music on the one hand and geometry and cosmology on the other. The subject of music within the quadrivium was originally the classical subject of harmonics, in particular the study of the proportions between the musical intervals created by the division of a monochord. A relationship to music as actually practised was not part of this study, but the framework of classical harmonics would substantially influence the content and structure of music theory as practised in both European and Islamic cultures. Modern usage In modern applications of the liberal arts as curriculum in colleges or universities, the quadrivium may be considered to be the study of number and its relationship to space or time: arithmetic was pure number, geometry was number in space, music was number in time, and astronomy was number in space and time. Morris Kline classified the four elements of the quadrivium as pure (arithmetic), stationary (geometry), moving (astronomy), and applied (music) number. This schema is sometimes referred to as "classical education", but it is more accurately a development of the 12th- and 13th-century Renaissance with recovered classical elements, rather than an organic growth from the educational systems of antiquity. The term continues to be used by the Classical education
a subject (nor faculty) in its own right, but was rather present implicitly as an 'auxiliary tool' within the discourses of the High faculties (especially theology); the complete emancipation of philosophy from theology happened only after the Medieval era. Origins These four studies compose the secondary part of the curriculum outlined by Plato in The Republic and are described in the seventh book of that work (in the order Arithmetic, Geometry, Astronomy, Music). The quadrivium is implicit in early Pythagorean writings and in the De nuptiis of Martianus Capella, although the term quadrivium was not used until Boethius, early in the sixth century. As Proclus wrote: The Pythagoreans considered all mathematical science to be divided into four parts: one half they marked off as concerned with quantity, the other half with magnitude; and each of these they posited as twofold. A quantity can be considered in regard to its character by itself or in its relation to another quantity, magnitudes as either stationary or in motion.
other roles (e.g., hands in the case of humans, wings in the case of birds, and fins in the case of whales). All of these animals are tetrapods, but none is a quadruped. Even snakes, whose limbs have become vestigial or lost entirely, are, nevertheless, tetrapods. In humans In July 2005, in rural Turkey, scientists discovered five Turkish siblings who had learned to walk naturally on their hands and feet. Unlike chimpanzees, which ambulate on their knuckles, the Ulas family walked on their palms, allowing them to preserve the dexterity of their fingers. Many people, especially practitioners of parkour, freerunning, and Georges Hébert's natural method, find benefit in quadrupedal movements to build full-body strength. Kenichi Ito is a Japanese man famous for speed running on four limbs. Quadrupedalism is sometimes referred to as being on all fours, and is observed in crawling, especially by infants. Quadrupedal robots BigDog is a dynamically stable quadruped robot created in 2005 by Boston Dynamics with Foster-Miller, the NASA Jet Propulsion Laboratory, and the Harvard University Concord Field Station. RoboSimian, also developed by NASA JPL in collaboration with the University of California, Santa Barbara Robotics Lab, emphasizes stability and deliberation. It has been demonstrated at the DARPA Robotics Challenge. Pronograde posture A related concept to quadrupedalism
maintains a four-legged posture and moves using all four limbs is said to be a quadruped (from Latin quattuor for "four", and pes, pedis for "foot"). Most quadrupeds are terrestrial vertebrates, including mammals and reptiles, though some are largely aquatic, such as turtles, amphibians, and pinnipeds. Bipedal tetrapods such as some birds (the shoebill, for example) sometimes use their wings to right themselves after lunging at prey. Quadrupeds vs. tetrapods Although the words ‘quadruped’ and ‘tetrapod’ are both derived from terms meaning ‘four-footed’, they have distinct meanings. A tetrapod is any member of the taxonomic unit Tetrapoda (which is defined by descent from a specific four-limbed ancestor), whereas a quadruped actually uses four limbs for locomotion. Not all tetrapods are quadrupeds and not all entities that could be described as ‘quadrupedal’ are tetrapods. This last meaning includes certain artificial objects; almost all quadruped organisms are tetrapods (with the exception of some raptorial arthropods adapted for four-footed locomotion, such as the Mantodea). The distinction between quadrupeds and tetrapods is important in evolutionary biology, particularly in the context of tetrapods whose limbs have adapted to other roles.
quarantine to wider society can be favourable." Short-term quarantines, e.g. for decontamination Quarantine periods can be very short, such as in the case of a suspected anthrax attack, in which people are allowed to leave as soon as they shed their potentially contaminated garments and undergo a decontamination shower. For example, an article entitled "Daily News workers quarantined" describes a brief quarantine that lasted until people could be showered in a decontamination tent. The February–March 2003 issue of HazMat Magazine suggests that people be "locked in a room until proper decon could be performed", in the event of "suspect anthrax". Standard-Times senior correspondent Steve Urbon (14 February 2003) described such temporary quarantine powers. The purpose of such quarantine-for-decontamination is to prevent the spread of contamination and to contain the contamination such that others are not put at risk from a person fleeing a scene where contamination is suspect. It can also be used to limit exposure, as well as eliminate a vector. New developments for quarantine include new concepts in quarantine vehicles such as the ambulance bus, mobile hospitals, and lockdown/invacuation (inverse evacuation) procedures, as well as docking stations for an ambulance bus to dock to a facility under lockdown. Standard quarantine practices in different countries Australia Biosecurity in Australia is governed by the Biosecurity Act 2015. The Australian Quarantine and Inspection Service (AQIS) is responsible for border inspection of products brought into Australia, and assesses the risk that those products might harm the Australian environment. No person, goods, or vessel is permitted into Australia without clearance from AQIS. Visitors are required to fill in an information card on arrival in Australia. Besides other risk factors, visitors are required to declare what food and products made of wood and other natural materials they have. Visitors who fail to do so may be subject to a fine of A$444, or may face criminal prosecution and be fined up to A$444,000 or imprisonment of up to 10 years. Australia has very strict quarantine standards. Quarantine in northern Australia is especially important because of its proximity to South-East Asia and the Pacific, which have many pests and diseases not present in Australia. For this reason, the region from Cairns to Broome—including the Torres Strait—is the focus for quarantine activities that protect all Australians. As Australia has been geographically isolated from other major continents for millions of years, there is an endemically unique ecosystem free of several severe pests and diseases that are present in many parts of the world. If pests or diseases were brought in with imported products, they could seriously damage the ecosystem and add millions of dollars in costs to local agricultural businesses. Canada There are three quarantine Acts of Parliament in Canada: the Quarantine Act (humans), the Health of Animals Act (animals), and the Plant Protection Act (plants). The first is enforced by the Canada Border Services Agency, following a complete rewrite in 2005; the second and third are enforced by the Canadian Food Inspection Agency. If a health emergency exists, the Governor in Council can prohibit importation of anything that it deems necessary under the Quarantine Act.
Under the Quarantine Act, all travellers must submit to screening, and if they believe they might have come into contact with communicable diseases or vectors, they must disclose their whereabouts to a Border Services Officer. If the officer has reasonable grounds to believe that the traveller is or might have been infected with a communicable disease, or if the traveller refuses to provide answers, a quarantine officer (QO) must be called and the person is to be isolated. If a person refuses to be isolated, any peace officer may arrest without warrant. A QO who, after the medical examination of a traveller, has reasonable grounds to believe that the traveller has or might have a communicable disease or is infested with vectors can order him/her into treatment or other measures to prevent the person from spreading the disease. A QO can detain any traveller who refuses to comply with his/her orders or to undergo health assessments as required by law. Under the Health of Animals Act and Plant Protection Act, inspectors can prohibit access to an infected area and dispose of or treat any animals or plants that are infected or suspected of being infected. The Minister can order compensation to be given if animals or plants were destroyed pursuant to these acts. Each province also enacts its own quarantine/environmental health legislation. Hong Kong Under the Prevention and Control of Disease Ordinance (HK Laws. Chap 599), a health officer may seize articles they believe to be infectious or containing infectious agents. All travellers, if requested, must submit themselves to a health officer. Failure to do so is against the law and may lead to arrest and prosecution. The law allows health officers who have reasonable grounds to detain, isolate, or quarantine anyone or anything believed to be infected, and to restrict any articles from leaving a designated quarantine area. They may also order the Civil Aviation Department to prohibit the landing or leaving, embarking or disembarking of an aircraft. This power also extends to land, sea or air crossings. Under the same ordinance, any police officer, health officer, member of the Civil Aid Service, or member of the Auxiliary Medical Service can arrest a person who obstructs or escapes from detention. United Kingdom To reduce the risk of introducing rabies from continental Europe, the United Kingdom used to require that dogs, and most other animals introduced to the country, spend six months in quarantine at an HM Customs and Excise pound; this policy was abolished in 2000 in favour of a scheme generally known as Pet Passports, where animals can avoid quarantine if they have documentation showing they are up to date on their appropriate vaccinations. British maritime quarantine rules 1711–1896 The plague had disappeared from England for more than thirty years before the practice of quarantine against it was definitely established by the Quarantine Act 1710 (9 Ann.). The first act was called for due to fears that the plague might be imported from Poland and the Baltic states. The second act of 1721 was due to the prevalence of plague at Marseille and other places in Provence, France. It was renewed in 1733 after a new outbreak in continental Europe, and again in 1743, due to an epidemic in Messina. In 1752 a rigorous quarantine clause was introduced into an act regulating trade with the Levant, and various arbitrary orders were issued during the next twenty years to meet the supposed danger of infection from the Baltic states.
Although no plague cases ever came to England during that period, the restrictions on traffic became more stringent, and in 1788 a very strict Quarantine Act was passed, with provisions affecting cargoes in particular. The act was revised in 1801 and 1805, and in 1823–24 an elaborate inquiry was followed by an act making quarantine a matter solely at the discretion of the privy council, which recognised yellow fever or other highly infectious diseases as calling for quarantine, along with plague. The threat of cholera in 1831 was the last occasion in England of the use of quarantine restrictions. Cholera affected every country in Europe, despite all efforts to keep it out. When cholera returned to England in 1849, 1853 and 1865–66, no attempt was made to seal the ports. In 1847 the privy council ordered all arrivals with a clean bill of health from the Black Sea and the Levant to be admitted, provided there had been no case of plague during the voyage, and afterwards the practice of quarantine was discontinued. After the passing of the first Quarantine Act (1710) the protective practices in England were haphazard and arbitrary. In 1721 two vessels carrying cotton goods from Cyprus, then affected by the plague, were ordered to be burned with their cargoes, the owners receiving an indemnity. By the clause in the Levant Trade Act of 1752, ships arriving in the United Kingdom with a "foul bill" (i.e. coming from a country where plague existed) had to return to the lazarets of Malta, Venice, Messina, Livorno, Genoa, or Marseille, to complete a quarantine or to have their cargoes opened and aired. Since 1741 Stangate Creek (on the Medway) had been the quarantine station, but it was available only for vessels with clean bills of health. In 1755 lazarets in the form of floating hulks were established in England for the first time, the cleansing of cargo (particularly by exposure to dews) having been done previously on the ship's deck. No medical inspections were conducted, but control was the responsibility of the Officers of Royal Customs and quarantine. In 1780, when plague was in Poland, even vessels with grain from the Baltic had to spend forty days in quarantine, and unpack and air their cargoes, but due to complaints mainly from Edinburgh and Leith, an exception was made for grain after that date. About 1788 an order of the council required every ship liable to quarantine to hoist a yellow flag in the daytime and show a light at the main topmast head at night, in case of meeting any vessel at sea, or upon arriving within four leagues of the coast of Great Britain or Ireland. After 1800, ships from plague-affected countries (or with foul bills) were permitted to complete their quarantine in the Medway instead of at a Mediterranean port on the way, and an extensive lazaret was built on Chetney Hill near Chatham (although it was later demolished). The use of floating hulks as lazarets continued as before. In 1800 two ships with hides from Mogador in Morocco were ordered to be sunk with their cargoes at the Nore, the owners receiving an indemnity. Animal hides were suspected of harbouring infections, along with a long list of other items, and these had to be exposed on the ship's deck for twenty-one days or less (six days for each instalment of the cargo), and then transported to the lazaret, where they were opened and aired for another forty days. The whole detention of the vessel was from sixty to sixty-five days, including the time for reshipment of her cargo.
Pilots had to pass fifteen days on board a convalescent ship. From 1846 onwards the quarantine establishments in the United Kingdom were gradually reduced, while the last vestige of the British quarantine law was removed by the Public Health Act of 1896, which repealed the Quarantine Act of 1825 (with dependent clauses of other acts), and transferred from the privy council to the Local Government Board the powers to deal with ships arriving infected with yellow fever or plague. The powers to deal with cholera ships had already been transferred by the Public Health Act 1875. British regulations of 9 November 1896 applied to yellow fever, plague and cholera. Officers of the Customs, as well as of the Royal Coast Guard and the Board of Trade (for signalling), were empowered to take the initial steps. They certified in writing the master of a supposedly infected ship, and detained the vessel provisionally for not more than twelve hours, giving notice meanwhile to the port sanitary authority. The medical officer of the port boarded the ship and examined every person in it. Every person found infected was taken to a hospital and quarantined under the orders of the medical officer, and the vessel remained under his orders. Every person suspected could be detained on board for 48 hours or removed to the hospital for a similar period. All others were free to land upon giving the addresses of their destinations to be sent to the respective local authorities, so that the dispersed passengers and crew could be kept individually under observation for a few days. The ship was then disinfected, dead bodies buried at sea, infected clothing, bedding, etc., destroyed or disinfected, and bilge-water and water-ballast pumped out at a suitable distance before the ship entered a dock or basin. Mail was subject to no detention. A stricken ship within 3 miles of the shore had to fly a yellow and black flag at the main mast from sunrise to sunset. United States In the United States, authority to quarantine people with infectious diseases is split between the state and federal governments. States (and tribal governments recognised by the federal government) have primary authority to quarantine people within their boundaries. Federal jurisdiction only applies to people moving across state or national borders, or people on federal property. Federal rules Communicable diseases for which apprehension, detention, or conditional release of people are authorised must be specified in Executive Orders of the President. As of 2014, these include Executive Orders 13295, 13375, and 13674; the latest executive order specifies the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral haemorrhagic fevers (Lassa, Marburg, Ebola, Crimean-Congo, South American, and others not yet isolated or named), severe acute respiratory syndromes (SARS), and influenza from a novel or re-emergent source. The Department of Health and Human Services is responsible for quarantine decisions, specifically the Centers for Disease Control and Prevention's Division of Global Migration and Quarantine. As of 21 March 2017, Centers for Disease Control and Prevention (CDC) regulations specify: All commercial passenger flights must report deaths or illnesses to the CDC. Individuals must apply for a travel permit if they are under a Federal quarantine, isolation, or conditional release order. When an individual who is moving between U.S.
states is "reasonably believed to be infected" with a quarantinable communicable disease in a "qualifying stage", the CDC may apprehend or examine that individual for potential infection. This includes new regulatory authority permitting the CDC Director to prohibit the importation of animals or products that pose a threat to public health. The rules: Do not authorise compulsory medical testing, vaccination, or medical treatment without prior informed consent. Require CDC to advise individuals subject to medical examinations that they will be conducted by an authorised health worker and with prior informed consent. Include strong due process protections for individuals subject to public health orders, including a right to counsel for indigent individuals. Limit to 72 hours the amount of time that an individual may be apprehended pending the issuance of a federal order for isolation, quarantine, or conditional release. US quarantine facilities The Division of Global Migration and Quarantine (DGMQ) of the US Centers for Disease Control (CDC) operates small quarantine facilities at a number of US ports of entry. As of 2014, these included one land crossing (in El Paso, Texas) and 19 international airports. Besides the port of entry where it is located, each station is also responsible for quarantining potentially infected travellers entering through any ports of entry in its assigned region. These facilities are fairly small; each one is operated by a few staff members and capable of accommodating 1–2 travellers for a short observation period. Cost estimates for setting up a temporary larger facility, capable of accommodating 100 to 200 travellers for several weeks, have been published by the Airport Cooperative Research Program (ACRP) in 2008 of the Transportation Research Board. US quarantine of imported goods The United States puts immediate quarantines on imported products if a contagious disease is identified and can be traced back to a certain shipment or product. All imports will also be quarantined if the disease appears in other countries. According to Title 42 U.S.C. §§264 and 266 , these statutes provide the Secretary of Health and Human Services peacetime and wartime authority to control the movement of people into and within the United States to prevent the spread of communicable disease. History of quarantine laws in the US Quarantine law began in Colonial America in 1663, when in an attempt to curb an outbreak of smallpox, the city of New York established a quarantine. In the 1730s, the city built a quarantine station on the Bedloe's Island. The Philadelphia Lazaretto was the first quarantine hospital in the United States, built in 1799, in Tinicum Township, Delaware County, Pennsylvania. There are similar national landmarks such as the Columbia River Quarantine Station, Swinburne Island and Angel Island. The Pest House in Concord, Massachusetts was used as early as 1752 to quarantine those suffering from cholera, tuberculosis and smallpox. In early June 1832, during the cholera epidemic in New York, Governor Enos Throop called a special session of the Legislature for 21 June, to pass a Public Health Act by both Houses of the State Legislature. It included to a strict quarantine along the Upper and Lower New York-Canadian frontier. 
In addition, New York City Mayor Walter Browne established a quarantine against all peoples and products of Europe and Asia, which prohibited ships from approaching closer than 300 yards to the city, and all vehicles were ordered to stop 1.5 miles away. The Immigrant Inspection Station on Ellis Island, built in 1892, is often mistakenly assumed to have been a quarantine station; however, its marine hospital (Ellis Island Immigrant Hospital) qualified only as a contagious-disease facility handling less virulent diseases such as measles, trachoma, and less advanced stages of tuberculosis and diphtheria; those afflicted with smallpox, yellow fever, cholera, leprosy, or typhoid fever could be neither received nor treated there. Mary Mallon was quarantined in 1907 under the Greater New York Charter, Sections 1169–1170, which permitted the New York City Board of Health to "remove to a proper place…any person sick with any contagious, pestilential or infectious disease." During the 1918 flu pandemic, people were also quarantined. Since then, suspected cases of infectious disease have most commonly been asked to quarantine themselves voluntarily, and federal and local quarantine statutes have been invoked only rarely, including for a suspected smallpox case in 1963. The 1944 Public Health Service Act "to apprehend, detain, and examine certain infected persons who are peculiarly likely to cause the interstate spread of disease" clearly established the federal government's quarantine authority for the first time. It gave the United States Public Health Service responsibility for preventing the introduction, transmission and spread of communicable diseases from foreign countries into the United States, and expanded quarantine authority to include incoming aircraft. The act states that "...any individual reasonably believed to be infected with a communicable disease in a qualifying stage and...if found to be infected, may be detained for such time and in such manner as may be reasonably necessary." No federal quarantine orders were issued from 1963 until 2020, when American citizens evacuated from China during the COVID-19 pandemic were ordered into quarantine. List of quarantine services in the world Australian Quarantine and Inspection Service; MAF Quarantine Service (New Zealand); Quarantine, Western Australia; Samoa Quarantine Service (Western Samoa); Racehorse & Equine Quarantine Services, a company built and developed by Frankie Thevarasa (Kuala Lumpur, Malaysia); and the Federal Service for Supervision of Consumer Rights Protection and Human Welfare, a federal quarantine service of the Government of Russia. Notable quarantines Eyam village, 1665 (plague) Eyam was a village in Britain that imposed a cordon sanitaire on itself to stop the spread of the bubonic plague to other communities in 1665. The plague ran its course over 14 months and one account states that it killed at least 260 villagers. The church in Eyam has a record of 273 individuals who were victims of the plague. Convict ship Surry, Sydney Harbour, 1814 (typhoid) On 28 July 1814, the convict ship Surry arrived in Sydney Harbour from England. Forty-six people had died of typhoid during the voyage, including 36 convicts, and the ship was placed in quarantine on the North Shore. Convicts were landed, and a camp was established in the immediate vicinity of what is now Jeffrey Street in Kirribilli. This was the first site in Australia to be used for quarantine purposes.
'Typhoid Mary' (US), 1907–1910 and 1915–1938 Mary Mallon was a cook who was found to be a carrier of Salmonella enterica subsp. enterica, the cause of typhoid fever, and was forcibly isolated from 1907 to 1910. At least 53 cases of the infection were traced to her, and three deaths. Subsequently, she spent a further 23 years in quarantine, from 1915 until her death in 1938.
1780, when plague was in Poland, even vessels with grain from the Baltic had to spend forty days in quarantine, and unpack and air their cargoes, but due to complaints mainly from Edinburgh and Leith, an exception was made for grain after that date. About 1788 an order of the council required every ship liable to quarantine to hoist a yellow flag in the daytime and show a light at the main topmast head at night, in case of meeting any vessel at sea, or upon arriving within four leagues of the coast of Great Britain or Ireland. After 1800, ships from plague-affected countries (or with foul bills) were permitted to complete their quarantine in the Medway instead of at a Mediterranean port on the way, and an extensive lazaret was built on Chetney Hill near Chatham (although it was later demolished). The use of floating hulks as lazarets continued as before. In 1800 two ships with hides from Mogador in Morocco were ordered to be sunk with their cargoes at the Nore, the owners receiving an indemnity. Animal hides were suspected of harbouring infections, along with a long list of other items, and these had to be exposed on the ship's deck for twenty-one days or less (six days for each instalment of the cargo), and then transported to the lazaret, where they were opened and aired for another forty days. The whole detention of the vessel was from sixty to sixty-five days, including the time for reshipment of her cargo. Pilots had to pass fifteen days on board a convalescent ship. From 1846 onwards the quarantine establishments in the United Kingdom were gradually reduced, while the last vestige of the British quarantine law was removed by the Public Health Act of 1896, which repealed the Quarantine Act of 1825 (with dependent clauses of other acts), and transferred from the privy council to the Local Government Board the powers to deal with ships arriving infected with yellow fever or plague. The powers to deal with cholera ships had been already transferred by the Public Health Act 1875. British regulations of 9 November 1896 applied to yellow fever, plague and cholera. Officers of the Customs, as well as of Royal Coast Guard and the Board of Trade (for signalling), were empowered to take the initial steps. They certified in writing the master of a supposedly infected ship, and detained the vessel provisionally for not more than twelve hours, giving notice meanwhile to the port sanitary authority. The medical officer of the port boarded the ship and examined every person in it. Every person found infected was taken to a hospital and quarantined under the orders of the medical officer, and the vessel remained under his orders. Every person suspected could be detained on board for 48 hours or removed to the hospital for a similar period. All others were free to land upon giving the addresses of their destinations to be sent to the respective local authorities, so that the dispersed passengers and crew could be kept individually under observation for a few days. The ship was then disinfected, dead bodies buried at sea, infected clothing, bedding, etc., destroyed or disinfected, and bilge-water and water-ballast pumped out at a suitable distance before the ship entered a dock or basin. Mail was subject to no detention. A stricken ship within 3 miles of the shore had to fly a yellow and black flag at the main mast from sunrise to sunset. United States In the United States, authority to quarantine people with infectious diseases is split between the state and federal governments. 
States (and tribal governments recognised by the federal government) have primary authority to quarantine people within their boundaries. Federal jurisdiction only applies to people moving across state or national borders, or people on federal property. Federal rules Communicable diseases for which apprehension, detention, or conditional release of people are authorised must be specified in Executive Orders of the President. As of 2014, these include Executive Orders 13295, 13375, and 13674; the latest executive order specifies the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral haemorrhagic fevers (Lassa, Marburg, Ebola, Crimean-Congo, South American, and others not yet isolated or named), severe acute respiratory syndromes (SARS), and influenza from a novel or re-emergent source. The Department of Health and Human Services is responsible for quarantine decisions, specifically the Centers for Disease Control and Prevention's Division of Global Migration and Quarantine. As of 21 March 2017, Centers for Disease Control and Prevention (CDC) regulations specify: All commercial passenger flights must report deaths or illnesses to the CDC. Individuals must apply for a travel permit if they are under a Federal quarantine, isolation, or conditional release order. When an individual who is moving between U.S. states is "reasonably believed to be infected" with a quarantinable communicable disease in a "qualifying stage", the CDC may apprehend or examine that individual for potential infection. This includes new regulatory authority permitting the CDC Director to prohibit the importation of animals or products that pose a threat to public health. The rules: Do not authorise compulsory medical testing, vaccination, or medical treatment without prior informed consent. Require CDC to advise individuals subject to medical examinations that they will be conducted by an authorised health worker and with prior informed consent. Include strong due process protections for individuals subject to public health orders, including a right to counsel for indigent individuals. Limit to 72 hours the amount of time that an individual may be apprehended pending the issuance of a federal order for isolation, quarantine, or conditional release. US quarantine facilities The Division of Global Migration and Quarantine (DGMQ) of the US Centers for Disease Control (CDC) operates small quarantine facilities at a number of US ports of entry. As of 2014, these included one land crossing (in El Paso, Texas) and 19 international airports. Besides the port of entry where it is located, each station is also responsible for quarantining potentially infected travellers entering through any ports of entry in its assigned region. These facilities are fairly small; each one is operated by a few staff members and capable of accommodating 1–2 travellers for a short observation period. Cost estimates for setting up a temporary larger facility, capable of accommodating 100 to 200 travellers for several weeks, were published in 2008 by the Airport Cooperative Research Program (ACRP) of the Transportation Research Board. US quarantine of imported goods The United States puts immediate quarantines on imported products if a contagious disease is identified and can be traced back to a certain shipment or product. All imports will also be quarantined if the disease appears in other countries. According to Title 42 U.S.C.
§§ 264 and 266, these statutes provide the Secretary of Health and Human Services peacetime and wartime authority to control the movement of people into and within the United States to prevent the spread of communicable disease. History of quarantine laws in the US Quarantine law began in Colonial America in 1663, when, in an attempt to curb an outbreak of smallpox, the city of New York established a quarantine. In the 1730s, the city built a quarantine station on Bedloe's Island. The Philadelphia Lazaretto was the first quarantine hospital in the United States, built in 1799, in Tinicum Township, Delaware County, Pennsylvania. There are similar national landmarks such as the Columbia River Quarantine Station, Swinburne Island and Angel Island. The Pest House in Concord, Massachusetts was used as early as 1752 to quarantine those suffering from cholera, tuberculosis and smallpox. In early June 1832, during the cholera epidemic in New York, Governor Enos Throop called a special session of the Legislature for 21 June, to pass a Public Health Act by both Houses of the State Legislature. It included a strict quarantine along the Upper and Lower New York-Canadian frontier. In addition, New York City Mayor Walter Browne established a quarantine against all peoples and products of Europe and Asia, which prohibited ships from approaching closer than 300 yards to the city, and all vehicles were ordered to stop 1.5 miles away. The Immigrant Inspection Station on Ellis Island, built in 1892, is often mistakenly assumed to have been a quarantine station; however, its marine hospital (Ellis Island Immigrant Hospital) only qualified as a contagious disease facility to handle less virulent diseases like measles, trachoma and less advanced stages of tuberculosis and diphtheria; those afflicted with smallpox, yellow fever, cholera, leprosy or typhoid fever could neither be received nor treated there. Mary Mallon was quarantined in 1907 under the Greater New York Charter, Sections 1169–1170, which permitted the New York City Board of Health to "remove to a proper place…any person sick with any contagious, pestilential or infectious disease." During the 1918 flu pandemic, people were also quarantined. Since then, suspected cases of infectious disease have most commonly been asked to quarantine themselves voluntarily, and federal and local quarantine statutes have only rarely been invoked, including for a suspected smallpox case in 1963. The 1944 Public Health Service Act "to apprehend, detain, and examine certain infected persons who are peculiarly likely to cause the interstate spread of disease" clearly established the federal government's quarantine authority for the first time. It gave the United States Public Health Service responsibility for preventing the introduction, transmission and spread of communicable diseases from foreign countries into the United States, and expanded quarantine authority to include incoming aircraft. The act states that "...any individual reasonably believed to be infected with a communicable disease in a qualifying stage and...if found to be infected, may be detained for such time and in such manner as may be reasonably necessary." No federal quarantine orders were issued from 1963 until 2020, when American citizens were evacuated from China during the COVID-19 pandemic.
List of quarantine services in the world Australian Quarantine and Inspection Service MAF Quarantine Service, in New Zealand Quarantine, Western Australia Samoa Quarantine Service, in Western Samoa Racehorse & Equine Quarantine Services, a company founded by Frankie Thevarasa in Kuala Lumpur, Malaysia Federal Service for Supervision of Consumer Rights Protection and Human Welfare, a federal quarantine service of the Government of Russia. Notable quarantines Eyam village, 1665 (plague) Eyam was a village in Britain that imposed a cordon sanitaire on itself to stop the spread of the bubonic plague to other communities in 1665. The plague ran its course over 14 months and one account states that it killed at least 260 villagers. The church in Eyam has a record of 273 individuals who were victims of the plague. Convict ship Surry, Sydney Harbour, 1814 (typhoid) On 28 July 1814, the convict ship Surry arrived in Sydney Harbour from England. Forty-six people had died of typhoid during the voyage, including 36 convicts, and the ship was placed in quarantine on the North Shore. Convicts were landed, and a camp was established in the immediate vicinity of what is now Jeffrey Street in Kirribilli. This was the first site in Australia to be used for quarantine purposes. 'Typhoid Mary' (US), 1907–1910 and 1915–1938 Mary Mallon was a cook who was found to be a carrier of Salmonella enterica subsp. enterica, the cause of typhoid fever, and was forcibly isolated from 1907 to 1910. At least 53 cases of the infection were traced to her, and three deaths. Subsequently she spent a further 23 years in isolation prior to her death in 1938. The presence of the bacteria in her gallbladder was confirmed on autopsy. American Samoa, 1918 (flu pandemic) During the 1918 flu pandemic, the then Governor of American Samoa, John Martin Poyer, imposed a full protective sequestration of the islands from all incoming ships, successfully preventing influenza from infecting the population and thus achieving zero deaths within the territory. In contrast, the neighbouring New Zealand-controlled Western Samoa was among the hardest hit, with a 90% infection rate and over 20% of its adults dying from the disease. This failure by the New Zealand government to prevent and contain the Spanish Flu subsequently rekindled Samoan anti-colonial sentiments that led to its eventual independence. Gruinard Island, 1942–1990 (anthrax) In 1942, during World War II, British forces tested out their biological weapons program on Gruinard Island and infected it with anthrax. Subsequently a quarantine order was placed on the island. The quarantine was lifted in 1990, when the
(QSOs), a name which reflected their unknown nature, and this became shortened to "quasar". Early observations (1960s and earlier) The first quasars (3C 48 and 3C 273) were discovered in the late 1950s, as radio sources in all-sky radio surveys. They were first noted as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size. By 1960, hundreds of these objects had been recorded and published in the Third Cambridge Catalogue while astronomers scanned the skies for their optical counterparts. In 1963, a definite identification of the radio source 3C 48 with an optical object was published by Allan Sandage and Thomas A. Matthews. Astronomers had detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum, which contained many unknown broad emission lines. The anomalous spectrum defied interpretation. British-Australian astronomer John Bolton made many early observations of quasars, including a breakthrough in 1962. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to find a visible counterpart to the radio source and obtain an optical spectrum using the Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt was able to demonstrate that these were likely to be the ordinary spectral lines of hydrogen redshifted by 15.8%, a high redshift at the time (with only a handful of much fainter galaxies known with higher redshift). If this was due to the physical motion of the "star", then 3C 273 was receding at an enormous velocity, far beyond the speed of any known star and defying any obvious explanation. Nor would an extreme velocity help to explain 3C 273's huge radio emissions. If the redshift was cosmological (now known to be correct), the large distance implied that 3C 273 was far more luminous than any galaxy, but much more compact. Also, 3C 273 was bright enough to detect on archival photographs dating back to the 1900s; it was found to be variable on yearly timescales, implying that a substantial fraction of the light was emitted from a region less than 1 light-year in size, tiny compared to a galaxy. Although it raised many questions, Schmidt's discovery quickly revolutionized quasar observation. The strange spectrum of 3C 48 was quickly identified by Schmidt, Greenstein and Oke as hydrogen and magnesium redshifted by 37%. Shortly afterwards, two more quasar spectra in 1964 and five more in 1965 were also confirmed as ordinary light that had been redshifted to an extreme degree. While the observations and redshifts themselves were not doubted, their correct interpretation was heavily debated, and Bolton's suggestion that the radiation detected from quasars consisted of ordinary spectral lines from distant, highly redshifted sources receding at extreme velocity was not widely accepted at the time. Development of physical understanding (1960s) An extreme redshift could imply great distance and velocity but could also be due to extreme mass or perhaps some other unknown laws of nature. Extreme velocity and distance would also imply immense power output, which lacked explanation.
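The velocity implied by such a redshift can be reproduced with the special-relativistic Doppler relation v = c[(1+z)² − 1]/[(1+z)² + 1]. The following sketch is an illustration added here, not part of the original account; all names are chosen for clarity:

```python
# Illustration: recession velocity implied by a redshift z if it is read
# as relativistic Doppler motion, applied to 3C 273's z = 0.158.
C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(z: float) -> float:
    """Recession velocity in km/s implied by redshift z under special relativity."""
    ratio = (1.0 + z) ** 2
    return C_KM_S * (ratio - 1.0) / (ratio + 1.0)

v = doppler_velocity(0.158)
print(f"v = {v:,.0f} km/s ({v / C_KM_S:.1%} of c)")  # roughly 44,000 km/s, ~15% of c
```

Read this way, 3C 273 would be receding at roughly 15% of the speed of light, which is the enormous velocity that defied explanation for any ordinary star.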
The small sizes were confirmed by interferometry and by observing the speed with which the quasar as a whole varied in output, and by their inability to be seen in even the most powerful visible-light telescopes as anything more than faint starlike points of light. But if they were small and far away in space, their power output would have to be immense and difficult to explain. Equally, if they were very small and much closer to our galaxy, it would be easy to explain their apparent power output, but less easy to explain their redshifts and lack of detectable movement against the background of the universe. Schmidt noted that redshift is also associated with the expansion of the universe, as codified in Hubble's law. If the measured redshift was due to expansion, then this would support an interpretation of very distant objects with extraordinarily high luminosity and power output, far beyond any object seen to date. This extreme luminosity would also explain the large radio signal. Schmidt concluded that 3C 273 could either be an individual star around 10 km wide within (or near to) our galaxy, or a distant active galactic nucleus. He stated that a distant and extremely powerful object seemed more likely to be correct. Schmidt's explanation for the high redshift was not widely accepted at the time. A major concern was the enormous amount of energy these objects would have to be radiating, if they were distant. In the 1960s no commonly accepted mechanism could account for this. The currently accepted explanation, that it is due to matter in an accretion disc falling into a supermassive black hole, was only suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich, and even then it was rejected by many astronomers, because in the 1960s, the existence of black holes was still widely seen as theoretical and too exotic, and because it was not yet confirmed that many galaxies (including our own) have supermassive black holes at their center. The strange spectral lines in their radiation, and the speed of change seen in some quasars, also suggested to many astronomers and cosmologists that the objects were comparatively small and therefore perhaps bright, massive and not far away; accordingly that their redshifts were not due to distance or velocity, and must be due to some other reason or an unknown process, meaning that the quasars were not really powerful objects nor at extreme distances, as their redshifted light implied. A common alternative explanation was that the redshifts were caused by extreme mass (gravitational redshifting explained by general relativity) and not by extreme velocity (explained by special relativity). Various explanations were proposed during the 1960s and 1970s, each with their own problems. It was suggested that quasars were nearby objects, and that their redshift was not due to the expansion of space but rather to light escaping a deep gravitational well. This would require a massive object, which would also explain the high luminosities. However, a star of sufficient mass to produce the measured redshift would be unstable and in excess of the Hayashi limit. Quasars also show forbidden spectral emission lines, previously only seen in hot gaseous nebulae of low density, which would be too diffuse to both generate the observed power and fit within a deep gravitational well. There were also serious concerns regarding the idea of cosmologically distant quasars. 
One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. There were suggestions that quasars were made of some hitherto unknown stable form of antimatter in similarly unknown types of region of space, and that this might account for their brightness. Others speculated that quasars were a white hole end of a wormhole, or a chain reaction of numerous supernovae. Eventually, starting from about the 1970s, many lines of evidence (including the first X-ray space observatories, knowledge of black holes and modern models of cosmology) gradually demonstrated that the quasar redshifts are genuine and due to the expansion of space, that quasars are in fact as powerful and as distant as Schmidt and some other astronomers had suggested, and that their energy source is matter from an accretion disc falling onto a supermassive black hole. This included crucial evidence from optical and X-ray viewing of quasar host galaxies, finding of "intervening" absorption lines, which explained various spectral anomalies, observations from gravitational lensing, Peterson and Gunn's 1971 finding that galaxies containing quasars showed the same redshift as the quasars, and Kristian's 1973 finding that the "fuzzy" surrounding of many quasars was consistent with a less luminous host galaxy. This model also fits well with other observations suggesting that many or even most galaxies have a massive central black hole. It would also explain why quasars are more common in the early universe: as a quasar draws matter from its accretion disc, there comes a point when there is less matter nearby, and energy production falls off or ceases, as the quasar becomes a more ordinary type of galaxy. The accretion-disc energy-production mechanism was finally modeled in the 1970s, and black holes were also directly detected (including evidence showing that supermassive black holes could be found at the centers of our own and many other galaxies), which resolved the concern that quasars were too luminous to be a result of very distant objects or that a suitable mechanism could not be confirmed to exist in nature. By 1987 it was "well accepted" that this was the correct explanation for quasars, and the cosmological distance and energy output of quasars was accepted by almost all researchers. Modern observations (1970s onward) Later it was found that not all quasars have strong radio emission; in fact only about 10% are "radio-loud". Hence the name "QSO" (quasi-stellar object) is used (in addition to "quasar") to refer to these objects, further categorised into the "radio-loud" and the "radio-quiet" classes. The discovery of the quasar had large implications for the field of astronomy in the 1960s, including drawing physics and astronomy closer together. In 1979 the gravitational lens effect predicted by Albert Einstein's general theory of relativity was confirmed observationally for the first time with images of the double quasar 0957+561. A study published in February, 2021, showed that there are more quasars in one direction (towards Hydra) than in the opposite direction, seemingly indicating that we are moving in that direction. But the direction of this dipole is about 28° away from the direction of our motion relative to the cosmic microwave background radiation. 
In March 2021, a collaboration of scientists related to the Event Horizon Telescope presented, for the first time, an image of a black hole in polarized light, specifically the black hole at the center of Messier 87, an elliptical galaxy approximately 55 million light-years away in the constellation Virgo, revealing the forces giving rise to quasars. Current understanding It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the metric expansion of space. Quasars inhabit the centers of active galaxies and are among the most luminous, powerful, and energetic objects known in the universe, emitting up to a thousand times the energy output of the Milky Way, which contains 200–400 billion stars. This radiation is emitted across the electromagnetic spectrum, almost uniformly, from X-rays to the far infrared with a peak in the ultraviolet-optical bands, with some quasars also being strong sources of radio emission and of gamma-rays. With high-resolution imaging from ground-based telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been detected in some cases. These galaxies are normally too dim to be seen against the glare of the quasar, except with special techniques. Most quasars, with the exception of 3C 273, whose average apparent magnitude is 12.9, cannot be seen with small telescopes. Quasars are believed—and in many cases confirmed—to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, as suggested in 1964 by Edwin Salpeter and Yakov Zel'dovich. Light and other radiation cannot escape from within the event horizon of a black hole. The energy produced by a quasar is generated outside the black hole, by gravitational stresses and immense friction within the material nearest to the black hole, as it orbits and falls inward. The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert between 6% and 32% of the mass of an object into energy, compared to just 0.7% for the p–p chain nuclear fusion process that dominates the energy production in Sun-like stars. Central masses of 10⁵ to 10⁹ solar masses have been measured in quasars by using reverberation mapping. Several dozen nearby large galaxies, including our own Milky Way galaxy, that do not have an active center and do not show any activity similar to a quasar, are confirmed to contain a similar supermassive black hole in their nuclei (galactic center). Thus it is now thought that all large galaxies have a black hole of this kind, but only a small fraction have sufficient matter in the right kind of orbit at their center to become active and power radiation in such a way as to be seen as quasars. This also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including the Milky Way, have gone through an active stage, appearing as a quasar or some other class of active galaxy that depended on the black-hole mass and the accretion rate, and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation. The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole, which will cause the matter to collect into an accretion disc.
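The efficiency comparison above can be made concrete with E = ηmc². A back-of-the-envelope sketch (an added illustration, with η = 10% chosen as a representative value inside the quoted 6–32% range):

```python
# Energy released by accreting one solar mass, E = eta * m * c^2,
# compared with converting the same mass by p-p chain fusion (eta ~ 0.7%).
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def radiated_energy(mass_kg: float, eta: float) -> float:
    """Energy in joules released when mass_kg is converted at efficiency eta."""
    return eta * mass_kg * C**2

accretion = radiated_energy(M_SUN, 0.10)   # representative accretion efficiency
fusion = radiated_energy(M_SUN, 0.007)     # p-p chain fusion efficiency
print(f"accretion: {accretion:.2e} J")     # ~1.8e46 J
print(f"fusion:    {fusion:.2e} J")        # ~1.3e45 J
print(f"ratio:     ~{accretion / fusion:.0f}x")
```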
Quasars may also be ignited or re-ignited when normal galaxies merge and the black hole is infused with a fresh source of matter. In fact, it has been suggested that a quasar could form when the Andromeda Galaxy collides with our own Milky Way galaxy in approximately 3–5 billion years. In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other active galaxies, such as blazars and radio galaxies. The highest-redshift quasar known was ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because space itself has also been expanding). Properties Hundreds of thousands of quasars have been found (as of August 2020), most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.056 and 7.64 (as of 2021). Applying Hubble's law to these redshifts, it can be shown that they are between 600 million and 29.36 billion light-years away (in terms of comoving distance). Because of the great distances to the farthest quasars and the finite velocity of light, they and their surrounding space appear as they existed in the very early universe. The power of quasars originates from supermassive black holes that are believed to exist at the core of most galaxies. The Doppler shifts of stars near the cores of galaxies indicate that they are revolving around tremendous masses with very steep gravity gradients, suggesting black holes. Although quasars appear faint when viewed from Earth, they are visible from extreme distances, being the most luminous objects in the known universe. The brightest quasar in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.8 (bright enough to be seen through a medium-size amateur telescope), but it has an absolute magnitude of −26.7. From a distance of about 33 light-years, this object would shine in the sky about as brightly as our Sun. This quasar's luminosity is, therefore, about 4 trillion (4 × 10¹²) times that of the Sun, or about 100 times that of the total light of giant galaxies like the Milky Way. This assumes that the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. In a universe containing hundreds of billions of galaxies, most of which had active nuclei billions of years ago but are only seen today, it is statistically certain that thousands of energy jets should be pointed toward the Earth, some more directly than others. In many cases it is likely that the brighter the quasar, the more directly its jet is aimed at the Earth. Such quasars are called blazars. The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2. High-resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing of this system suggests that the light emitted has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273. Quasars were much more common in the early universe than they are today. This discovery by Maarten Schmidt in 1967 was early strong evidence against steady-state cosmology and in favor of the Big Bang cosmology. Quasars show the locations where supermassive black holes are growing rapidly (by accretion). Detailed simulations reported in 2021 showed that galaxy structures, such as spiral arms, use gravitational forces to 'put the brakes on' gas that would otherwise orbit galaxy centers forever; instead the braking mechanism enabled the gas to fall into the supermassive black holes, releasing enormous radiant energies. These black holes co-evolve with the mass of stars in their host galaxy in a way not fully understood at present. One idea is that jets, radiation and winds created by the quasars shut down the formation of new stars in the host galaxy, a process called "feedback". The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in those clusters from cooling and falling on to the central galaxy. Quasars' luminosities are variable, with time scales that range from months to hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in contact with other parts on such a time scale as to allow the coordination of the luminosity variations. This would mean that a quasar varying on a time scale of a few weeks cannot be larger than a few light-weeks across. The emission of large amounts of power from a small region requires a power source far more efficient than the nuclear fusion that powers stars. The conversion of gravitational potential energy to radiation by matter infalling toward a black hole converts between 6% and 32% of the mass to energy, compared to 0.7% for the conversion of mass to energy in a star like our Sun. It is the only process known that can produce such high power over a very long term. (Stellar explosions such as supernovas and gamma-ray bursts can do likewise, but only for a few weeks.)
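The light-crossing-time argument above is simple to quantify: a source that varies coherently on a time scale t cannot be much larger than c·t. A minimal sketch (an illustration added here, not from the article):

```python
# Upper bound on the size of a variable source from its variability time scale.
from datetime import timedelta

C_M_S = 2.998e8            # speed of light, m/s
LIGHT_YEAR_M = 9.461e15    # metres per light-year

def max_size_light_years(t: timedelta) -> float:
    """Largest size (light-years) consistent with coherent variability over t."""
    return C_M_S * t.total_seconds() / LIGHT_YEAR_M

for label, t in [("hours", timedelta(hours=10)),
                 ("a few weeks", timedelta(weeks=3))]:
    print(f"{label}: <= {max_size_light_years(t):.4f} light-years")
```

A source varying over three weeks is bounded at roughly 0.06 light-years, minuscule next to a galaxy tens of thousands of light-years across.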
In the Revised Common Lectionary the Sunday before Lent is designated "Transfiguration Sunday", and the gospel reading is the story of the Transfiguration of Jesus from Matthew, Mark, or Luke. Some churches whose lectionaries derive from the RCL, e.g. the Church of England, use these readings but do not designate the Sunday "Transfiguration Sunday". Etymology The name Quinquagesima originates from Latin quinquagesimus (fiftieth). This is in reference to the fifty days before Easter Day using inclusive counting, which counts both Sundays (normal counting would count only one of these). Since the forty days of Lent do not include Sundays, the first day of Lent, Ash Wednesday, succeeds Quinquagesima Sunday by only three days. The name Estomihi is derived from the incipit or opening words of the Introit for the Sunday, Esto mihi in Deum protectorem, et in locum refugii, ut salvum me facias ("Be Thou unto me a God, a Protector, and a place of refuge, to save me"). Dates and significance The earliest date on which Quinquagesima Sunday can occur is February 1 and the latest is March 7.
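Because Quinquagesima is fixed relative to Easter, these bounds follow directly from the bounds on Easter itself (March 22 to April 25). A sketch of the calculation, added here as an illustration using the standard anonymous Gregorian computus (not part of the article):

```python
from datetime import date, timedelta

def easter(year: int) -> date:
    """Gregorian Easter Sunday via the anonymous (Meeus/Jones/Butcher) computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def quinquagesima(year: int) -> date:
    # The "fiftieth" day before Easter by inclusive counting is 49 days earlier;
    # Ash Wednesday then falls three days after Quinquagesima Sunday.
    return easter(year) - timedelta(days=49)

print(quinquagesima(2024))  # 2024-02-11 (Easter 2024 falls on 31 March)
```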
Western Christianity Roman Catholic Church In the Roman Catholic Church, the terms for this Sunday (and the two immediately before it — Sexagesima and Septuagesima Sundays) were eliminated in the reforms following the Second Vatican Council, and these Sundays are part of Ordinary Time. According to the reformed Roman Rite calendar, this Sunday is now known by its number within Ordinary Time — fourth through ninth, depending upon the date of Easter. The earlier form of the Roman Rite, with its references to Quinquagesima Sunday, and to the Sexagesima and Septuagesima Sundays, continues to be observed in some communities. In traditional lectionaries, the Sunday concentrates on the Gospel reading from Luke 18: "Jesus took the twelve aside and said, 'Lo, we go to Jerusalem, and everything written by the prophets about the Son of Man shall be fulfilled' ... The disciples, however, understood none of this," which from verse 35 is followed by Luke's version of Healing the blind near Jericho. The passage presages the themes of Lent and Holy Week. In most churches, palms blessed on Palm Sunday of the previous year are burned on this day after the last Mass of the day; the ashes of these burned palms are used for the liturgy of Ash Wednesday. Lutheran Churches In Lutheranism the Gospel reading is combined with 1 Corinthians 13 (Paul's praise of love). Composers writing cantatas for Estomihi Sunday include: Johann Sebastian Bach: BWV 22, 23, 127 and 159 (see Church cantata (Bach)#Estomihi) Christoph Graupner: 25 cantatas (see List of cantatas by Christoph Graupner#GWV 1119) Georg Philipp Telemann: 21 extant cantatas, including TWV 1:1258 (Harmonischer Gottes-Dienst). Lutheran countries such as Denmark mark Quinquagesima Sunday as the peak of the Fastelavn. After attending the Mass on Shrove Sunday, congregants enjoy Shrovetide buns (fastelavnsboller), "round sweet buns that are covered with icing and filled with cream and/or jam." Children often dress up and collect money from people while singing. They also practice the tradition of hitting a barrel, which represents fighting Satan; after doing this, children enjoy the sweets inside the barrel. Lutheran Christians in these nations carry Shrovetide rods (fastelavnsris), which are "branches decorated with sweets, little presents, etc., that are used to decorate the home or give to children." Anglican Communion This Sunday
the family Simaroubaceae. Its size is disputed; some botanists treat it as consisting of only one species, Quassia amara from tropical South America, while others treat it in a wide circumscription as a pantropical genus containing up to 40 species of trees and shrubs. The genus was named in the eighteenth century after Graman Quassi, a former slave from Suriname, who discovered the medicinal properties of the bark of Quassia amara. Distribution Members of the genus are found in the Tropics throughout the world. Countries and regions where species are native include: Andaman Islands, Angola, Bangladesh, Belize, Benin, Bismarck Archipelago, Borneo, North and Northeast Brazil, Burkina,
Cabinda, Cambodia, Cameroon, Central African Republic, Chad, Colombia, Comoros, Congo, Costa Rica, El Salvador, Equatorial Guinea, Gabon, Gambia, Ghana, Guatemala, Guinea, Guinea-Bissau, Gulf of Guinea Islands, Honduras, India, Ivory Coast, Kenya, Laos, Leeward Islands, Liberia, Madagascar, Malaya, Mali, Central, Southeast
surname as a term predates World War II. The first recorded use of the term was by Norwegian Labour Party politician Oscar Torp in a 2 January 1933 newspaper interview, where he used it as a general term for Quisling's followers. Quisling was at this point in the process of establishing the Nasjonal Samling (National Unity) party, a fascist party modelled on the German Nazi Party. Further uses of the term were made by Aksel Sandemose, in a newspaper article in Dagbladet in 1934, and by the newspaper Vestfold Arbeiderblad, in 1936. The term with the opposite meaning, a Norwegian patriot, is Jøssing. Popularization in World War II The use of the name as a term for collaborators or traitors in general probably came about upon Quisling's unsuccessful 1940 coup d'état, when he attempted to seize power and make Norway cease resisting the invading Germans. The term was widely introduced to an English-speaking audience by the British newspaper The Times. It published an editorial on 19 April 1940 titled "Quislings everywhere", in which it was asserted that "To writers, the word Quisling is a gift from the gods. If they had been ordered to invent a new word for traitor... they could hardly have hit upon a more brilliant combination of letters. Aurally it contrives to suggest something at once slippery and tortuous." The Daily Mail picked up the term four days after The Times editorial was published. The War Illustrated discussed "potential Quislings" among the Dutch during the German invasion of the Netherlands. Subsequently, the BBC brought the word into common use internationally. Chips Channon described how during the Norway Debate of 7–8 May 1940, he and other Conservative MPs who supported Prime Minister of the United Kingdom Neville Chamberlain called those who voted against a motion of no confidence "Quislings". Chamberlain's successor Winston Churchill used the term during an address to the Allied Delegates at St. James's Palace on 21 June 1941, when he said: "A vile race of Quislings—to use a new word which will carry the scorn of mankind down the centuries—is hired to fawn upon the conqueror, to collaborate in his designs and to enforce his rule upon their fellow countrymen while grovelling low themselves." He used the term again in an address to both houses of Congress in the United States of America on 26 December 1941. Commenting upon the effect of a number of Allied victories against Axis forces, and
moreover the United States’ decision to enter the war, Churchill opined: "Hope has returned to the hearts of scores of millions of men and women, and with that hope there burns the flame of anger against the brutal, corrupt invader. And still more fiercely burn the fires of hatred and contempt for the filthy Quislings whom he has suborned." The term subsequently entered the language and became a target for political cartoonists. In the United States, it was used often. In the Warner Bros. cartoon Tom Turk and Daffy (1944), it was uttered by a Thanksgiving turkey whose presence is betrayed to Porky Pig by Daffy Duck. In the American film Edge of Darkness (1943), about the Resistance in Norway, the heroine's brother is often described as a quisling. Verb form The back-formed verb to quisle exists, and gave rise to a much less common version of the noun: quisler. However, the verb form was rare even during World War II and has entirely disappeared from contemporary usage. Postwar use "Quisling" was applied to some who cooperated with communist takeovers. As an illustration, the renegade social democrat Zdeněk Fierlinger of Czechoslovakia was frequently derided as "Quislinger" for his collaboration with the Communist Party of Czechoslovakia. "The Patriot Game", one of the best known songs to emerge from the Irish nationalist struggle, includes the line "...those quislings who sold out the Patriot Game" in some versions (although the original uses "cowards" and other versions substitute "rebels" or "traitors"). In the Norwegian television series Occupied, Norwegians who are seen as collaborating with the Russian invaders and later with European Union peacekeepers are called Quislings. In the epilogue of Farnham's Freehold by Robert A. Heinlein, a sign is posted listing available goods and services. One of the items listed is "Jerked Quisling (by the neck)". 21st century In the early 21st century, the term demonstrated continued currency as it was used by some American writers to describe President Donald Trump and his associates. In a June 2018 New York Times column, Paul Krugman called US President Trump a "quisling", in reference to what Krugman described as Trump's "serv[ing] the interests of foreign masters at his own country’s expense" and "defend[ing] Russia while attacking our closest allies". Other publications also applied the term. For instance, Joe Scarborough in the Washington Post ("These are desperate times
North America Quadrangle (Springfield, Massachusetts), a cluster of museums and cultural institutions Quadrangle Dormitories (University of Pennsylvania) Francis Quadrangle, University of Missouri Memorial Quadrangle, Yale University Radcliffe Quadrangle (Harvard) Schenley Quadrangle, University of Pittsburgh University of Alabama Quad, University of Alabama Europe Mob Quad, Merton College, Oxford Radcliffe Quadrangle, University College, Oxford Tom Quad (Great Quadrangle), Christ Church, Oxford Main Quad at the Main Building of University College London Oceania University of Sydney Quadrangle, a sandstone building at the University of Sydney (Camperdown) Other Quadrangle (geography), a United States Geological Survey topographical map Quadrangle (horse), American thoroughbred,
winner of the 1964 Belmont Stakes Quadrangle Books, an imprint of Times Books Quadrangle Group investment fund in New York City BDP
images on manuscripts, although many illuminators and painters preferred fine brushes for their work. The variety of different strokes in formal hands was accomplished by good penmanship as the tip was square cut and rigid, exactly as it is today with modern steel pens. It was much later, in the 1600s, with the increased popularity of writing, especially in the copperplate script promoted by the many printed manuals available from the 'Writing Masters', that quills became more pointed and flexible. Quills are denominated from the order in which they are fixed in the wing; the first is favoured by the expert calligrapher, the second and third quills being very satisfactory also, plus the pinion feather. Flags, the 5th and 6th feathers, are also used. No other feather on the wing would be considered suitable by a professional scribe. Information can be obtained on the techniques of curing and cutting quills: In order to harden a quill that is soft, thrust the barrel into hot ashes, stirring it till it is soft; then taking it out, press it almost flat upon your knees with the back of a penknife, and afterwards reduce it to a roundness with your fingers. If you have a number to harden, set water and alum over the fire; and while it is boiling put in a handful of quills, the barrels only, for a minute, and then lay them by. An accurate account of the Victorian process by William Bishop, from researches with one of the last London quill dressers, is recorded in the Calligrapher's Handbook cited on this page. As a symbol From the 19th century in radical and socialist symbolism, quills have been used to symbolize clerks and intelligentsia. Some notable examples are the Radical Civic Union, the Czech National Social Party in combination with the hammer, symbol of the labour movement, or the Democratic Party of Socialists of Montenegro. Quills appear on the seals of the United States Census Bureau and the Administrative Office of the United States Courts. They also appear in the coats of arms of several US Army Adjutant general units which focus on administrative duties. Quills are on the coats of arms of a number of municipalities such as Bargfeld-Stegen in Germany and La Canonja in Spain. Three books and a quill pen are the symbols of Saint Hilary of Poitiers. Quill and pen knives A quill knife was the original primary tool used for cutting and sharpening quills, known as "dressing". Following the decline of the quill in the 1820s, after the introduction of the maintenance-free, mass-produced steel dip nib by John Mitchell, knives were still manufactured but became known as desk knives, stationery knives or, as the name stuck, latterly as "pen" knives. There is a small but significant difference between a pen knife
and a quill knife, in that the quill knife has a blade that is flat on one side and convex on the other, which facilitates the round cuts required to shape a quill. A "pen" knife, by contrast, has two flat sides. This distinction is not recognised by modern traders, dealers or collectors, who define a quill knife as any small knife with a fixed or hinged blade, including such items as ornamental fruit knives. Today While quills are rarely used as writing instruments in the modern day, they are still being produced as specialty items, mostly for hobbyists. Such quills tend to have metal nibs or are sometimes even outfitted with a ballpoint pen inside to remove the need for a separate source of ink. According to the Supreme Court Historical Society, 20 goose-quill pens, neatly crossed, are placed at the four counsel tables each day the U.S. Supreme Court is in session; "most lawyers appear before the Court only once, and gladly take the quills home as souvenirs." This has been done since the earliest sessions of the Court. In the Jewish tradition quill pens, called kulmus, are used by scribes to write Torah Scrolls, Mezuzot, and Tefillin. Music Plectra for psalteries and lutes can be cut similarly to writing pens. The rachis (the portion of the stem between the barbs, not the calamus) of the primary flight feathers of birds of the Corvidae was preferred for harpsichords. In modern instruments, plastic is more common, but they are
risen Christ during their encounter along the Appian Way. According to the apocryphal Acts of Peter (Vercelli Acts XXXV; late 2nd century AD), as Peter flees from crucifixion in Rome at the hands of the government, he meets the risen Jesus along the road outside the city. In the Latin translation, Peter asks Jesus, "Quō vādis?" He replies, "Rōmam eō iterum crucifīgī" ("I am going to Rome to be crucified again"). Peter then gains the courage to continue his ministry and returns to the city, where he is martyred by being crucified upside-down. The Church of Domine Quo Vadis in Rome is built where the meeting between Peter and Jesus allegedly took place. The words "quo vadis" as a question also occur at least seven times in the Latin Vulgate. In culture The Polish writer Henryk Sienkiewicz wrote the novel Quo Vadis: A Narrative of the Time of Nero.
disk image format for machine emulation and virtualization Quantum Effect Devices, a microprocessor design company Television KQED (TV), public television station in San Francisco, California Q.E.D. (U.S. TV series) Q.E.D. (UK TV series) WQED (TV), public television station in Pittsburgh, Pennsylvania Music QED (band), a 1980s Australian band Q.E.D. (Terje Rypdal album), 1993 Q.E.D. (Jim Allchin album) QED Records or Emanem Records Other uses Granville Gee Bee R-6, named "Q.E.D.", a 1930s racing monoplane QED: The Strange Theory of Light and Matter, a 1985 physics book by Richard Feynman Quod Erat Demonstrandum, a 1903 novel by Gertrude Stein Q.E.D. (novel), a 1930 mystery novel by Lynn Brock Quod Erat Demonstrandum (film), a 2013 Romanian drama film QED (play), a 2001 play by Peter Parnell about Richard Feynman Q.E.D. (manga), a 1997 manga by Motohiro Katou QED International, a film company QED: Question,
served pride cookies to its passengers. It had a rainbow roo float in the Mardi Gras parade. There has been criticism of Qantas using its corporate power to prosecute the private interests of its staff and the community. Peter Dutton has said that chief executives such as Alan Joyce at Qantas should "stick to their knitting" rather than using the company's brand to advocate for political causes. A senior church leader has made similar comments. Despite the criticism, Qantas said it would continue to advocate for marriage equality, including offering customers specially commissioned rings with the phrase "until we all belong". The phrase also appeared on Qantas boarding passes and other paraphernalia. The cost of the campaign by Qantas and other participating companies was expected to be more than $5 million. Joyce has pledged that Qantas will "continue social-justice campaigning", including in relation to a rugby player sacked by Rugby Australia, which is financially supported by Qantas, following his social media postings on homosexuality. Fundamental structural change In August 2011, the company announced that following financial losses of A$200 million ($209 million) for the year ending June 2011 and a decline in market share, major structural changes would be made. One planned change that did not come to fruition was the plan to create a new Asia-based premium airline that would operate under a different name. In addition to this plan, Qantas announced it planned to cut 1,000 jobs. The reforms included route changes, in particular the cessation of services to London via Hong Kong and Bangkok. While Qantas still operated in these cities, onward flights to London would be via its Oneworld partner British Airways under a code-share service. The following year Qantas reported an A$245 million full-year loss to the end of June 2012, citing high fuel prices, intense competition and industrial disputes. This was the first full-year loss since Qantas was fully privatised 17 years previously, in 1995, and led to the airline cancelling its order of 35 new Boeing 787 aircraft to reduce its spending. Qantas subsequently divested itself of its 50% holding of StarTrack, Australia's largest road freight company, in part for acquiring full interest in Australian airExpress. On 26 March 2012, Qantas set up Jetstar Hong Kong with China Eastern Airlines Corporation, which was intended to begin flights in 2013, but became embroiled in a protracted approval process. Qantas and Emirates began an alliance on 31 March 2013, in which their combined carriers offered 98 flights per week to Dubai, which saw bookings increase six-fold. In September 2013, following the announcement that the carrier expected another A$250 million net loss for the half-year period that ended on 31 December and the implementation of further cost-cutting measures that would see the loss of 1,000 jobs within a year, S&P downgraded Qantas credit from BBB- (the lowest investment grade) to BB+. Moody's applied a similar downgrading a month later. Losses continued into the 2014 reporting year, with the Qantas Group reporting a half-year loss of A$235 million and an eventual full-year loss of A$2.84 billion. In February 2014, additional cost-cutting measures to save A$2 billion were announced, including the loss of 5,000 jobs, which would lower the workforce from 32,000 to 27,000 by 2017. In May 2014 the company stated it expected to shed 2,200 jobs by June 2014, including those of 100 pilots.
The carrier also reduced the size of its fleet by retiring aircraft and deferring deliveries, and planned to sell some of its assets. With 2,200 employees laid off by June 2014, another 1,800 positions were planned to be cut by June 2015. Also during 2014 the Qantas Sale Act, under which the airline was privatised, was amended to repeal parts of section 7. That act limits foreign ownership of Qantas to 49 percent, with foreign airlines subject to further restrictions, including a 35-percent limit for all foreign airline shareholdings combined. In addition, a single foreign entity can hold no more than 25 percent of the airline's shares. The airline returned to profit in 2015, announcing an A$557 million after-tax profit in August 2015, in contrast with the A$2.84 billion loss the year earlier. In 2015, Qantas sold its lease of Terminal 3 at Sydney Airport, which was due to continue until 2019, back to Sydney Airport Corporation for $535 million. This meant Sydney Airport resumed operational responsibility for the terminal, including the lucrative retail areas. Uniform Paris-based Australian designer Martin Grant is responsible for the new Qantas airline staff uniforms that were publicly unveiled on 16 April 2013. These replaced the previous uniforms, dubbed colloquially "Morrisey" by staff after their designer, Peter Morrissey. The new outfits combine the colours of navy blue, red and fuchsia pink. Qantas chief executive Alan Joyce stated that the new design "speaks of Australian style on the global stage" at the launch event, which involved Qantas employees modelling the uniforms. Grant consulted with Qantas staff members over the course of one year to finalise the 35 styles that were eventually created. Not all employees were happy with the new uniform, however, with one flight attendant being quoted as saying "The uniforms are really tight and they are simply not practical for the very physical job we have to do." Destinations Qantas operates flightseeing charters to Antarctica on behalf of Croydon Travel. It first flew Antarctic flightseeing trips in 1977. They were suspended for a number of years following the crash of Air New Zealand Flight 901 on Mount Erebus in 1979. Qantas restarted the flights in 1994. Although these flights do not touch down, they require specific polar operations and crew training due to factors like sector whiteout, which contributed to the 1979 Air New Zealand disaster. With Flights 7 and 8 – a non-stop service between Sydney and Dallas/Fort Worth operated by the Airbus A380 – commencing on 29 September 2014, Qantas operated the world's longest passenger flight on the world's largest passenger aircraft. This was overtaken on 1 March 2016 by Emirates' new Auckland–Dubai service. After it ordered Boeing 787 aircraft, Qantas announced an intention to launch non-stop flights between Australia and the United Kingdom, from Perth to London, in March 2018. The inaugural flight left Perth on 24 March. On 19 March 2020, Qantas confirmed it would suspend all international flights and about 60% of domestic flights from the end of March until at least 31 May 2020, following expanded government travel restrictions due to the COVID-19 pandemic.
Codeshare agreements Qantas had codeshare agreements with the following airlines: Air France, Air New Zealand, Air Niugini, Air Tahiti Nui, Air Vanuatu, Aircalin, Airnorth, Alaska Airlines, American Airlines, Asiana Airlines, Bangkok Airways, British Airways, Cathay Pacific, China Airlines, China Eastern Airlines, China Southern Airlines, El Al, Emirates, Fiji Airways, Finnair, ITA Airways, Japan Airlines, Jetstar, Jetstar Asia, Jetstar Japan, Jetstar Pacific, KLM, LATAM Chile, Solomon Airlines, SriLankan Airlines, Vietnam Airlines and WestJet. Joint ventures In addition to the above codeshares, Qantas has entered into joint ventures with American Airlines, China Eastern Airlines and Emirates. Fleet Qantas and its subsidiaries operated 297 aircraft, including 71 aircraft operated by Jetstar Airways, 90 by the various QantasLink-branded airlines and six by Express Freighters Australia (on behalf of Qantas Freight, which also wet-leases three Atlas Air Boeing 747-400Fs). Liveries Indigenous Art liveries Two Qantas aircraft are currently decorated with Indigenous Australian art schemes. One aircraft, a Boeing 737–800, wears a livery called Mendoowoorrji, which was revealed in November 2013. The design draws on the work of the late West Australian Aboriginal artist Paddy Bedford. A Boeing 787–9 Dreamliner is adorned in a paint scheme inspired by the late Emily Kame Kngwarreye's 1991 painting Yam Dreaming. The adaptation of Yam Dreaming to the aircraft, led by Balarinji, a Sydney-based and Aboriginal-owned design firm, incorporates the red Qantas tailfin into the design, which includes white dots with red and orange tones. The design depicts the yam plant, an important and culturally significant symbol in Kngwarreye's Dreaming stories, and a staple food source in her home region of Utopia. The design was applied to the aircraft during manufacture, prior to its delivery in March 2018 to Alice Springs Airport, situated 230 kilometres southeast of Utopia, where the aircraft was met by Kngwarreye's descendants, the local community, and Qantas executives. The aircraft would later operate Qantas' inaugural nonstop services between Perth and London Heathrow, and between Melbourne and San Francisco, scheduled with Boeing 787 aircraft. Australian Aboriginal art designs have previously adorned some Qantas aircraft; the first design was called Wunala Dreaming, which was unveiled in 1994 and was painted on now-retired Boeing 747–400 and 747-400ER aircraft between 1994 and 2012. The motif was an overall-red design depicting ancestral spirits in the form of kangaroos travelling in the outback. The second design was called Nalanji Dreaming and was depicted on a Boeing 747–300 from 1995 until its retirement in 2005. Nalanji Dreaming was a bright blue design inspired by rainforest landscapes and tropical seas. The third design was titled Yananyi Dreaming, and featured a depiction of Uluru. The scheme was designed by Uluru-based artist Rene Kulitja, in collaboration with Balarinji. It was painted on the 737 at the Boeing factory prior to its delivery in 2002. It was repainted into the standard livery in 2014. Retro Roo liveries In November 2014 the airline revealed that the 75th Boeing 737–800 jet to be delivered would carry a 'retro' livery based on the airline's 1971 'ochre' colour scheme, featuring the iconic 'Flying Kangaroo' on its tail and other aspects drawn from its 1970s fleet. The aircraft was delivered on 17 November. Qantas announced a second 737–800 would receive a 'retro roo' livery in October 2015.
On 16 November 2015 the airline unveiled the second 'retro roo' 737, bearing a replica livery from 1959 to celebrate the airline's 95th birthday. Other liveries Several Qantas aircraft have been decorated with promotional liveries, promoting telecommunications company Optus; the Disney motion picture Planes; the Australian national association football team, the Socceroos; and the Australian national rugby union team, the Wallabies. Two aircraft – an Airbus A330-200 and a Boeing 747-400ER – were decorated with special liveries promoting the Oneworld airline alliance (of which Qantas is a member) in 2009. On 29 September 2014, nonstop Airbus A380 service to Dallas/Fort Worth International Airport was inaugurated using an A380 decorated with a commemorative cowboy hat and bandana on the kangaroo tail logo. Prior to the 2017 Sydney Mardi Gras, Qantas decorated one of its Airbus A330-300 aircraft with rainbow lettering and depicted a rainbow flag on the tail of the aircraft. Cabin Domestic Qantas domestic flights are primarily operated by Boeing 737–800 and Airbus A330-200 aircraft; Airbus A330-300s sometimes operate domestically as well. A two-class configuration (Business and Economy) is offered. Business Domestic Business Class is offered on all Boeing 737 and Airbus A330 aircraft. On the Boeing 737, Business is exclusively available in the first three rows of the cabin, with a seat configuration of 2–2, seat recline, and a larger pitch between seats. As the A330s operate international flights, Business Suites are sometimes available on domestic routes. These seats feature all-aisle access in a 1-2-1 configuration and a fully flat bed. Economy Domestic Economy Class is offered on all Boeing 737 and Airbus A330 aircraft. Seat pitch is usually and seat width ranges from . Layouts are 3–3 on the 737 and 2-4-2 on the A330. International Qantas international flights are primarily operated on Airbus A380s, A330-300s, Boeing 787s, and sometimes on Airbus A330-200s and Boeing 737-800s. Passenger class configuration varies by aircraft, with the Airbus A330-300 offering a two-class configuration of Business and Economy on short to medium-haul flights. This compares to the Airbus A380, which offers a four-class configuration of First, Business, Premium Economy, and Economy on selected long haul flights. First First class is offered exclusively on the Airbus A380. It offers 14 individual suites in a 1-1-1 layout. The seats rotate, facing forward for takeoff, but rotating to the side for dining and sleeping, with 83.5 in seat pitch (extending to a 212 cm fully flat bed) and a width of . Each suite has a widescreen HD monitor with 1,000 AVOD programs. In addition to 110 V AC power outlets, USB ports are offered for connectivity. Passengers are also able to make use of the on-board business lounge on the upper deck. Complimentary access to both the first class and business class lounges (or affiliated lounges) is offered. Updated versions of this seat were fitted to the airline's refurbished Airbus A380 aircraft from late 2019. This seat featured refreshed cushioning and larger entertainment screens compared to the older version seat. Business International Business class is offered on all Qantas mainline passenger aircraft. On all International and selected Domestic flights, Qantas offers two different types of Business Class seats, as listed below. Business Suites Business Suites are offered on all Boeing 787, Airbus A330-300, and selected Airbus A330-200 and A380 aircraft. 
These seats are arranged in a 1-2-1 configuration and convert to fully flat beds. The Business Suite was introduced on the A330 in October 2014 and includes a Panasonic eX3 system with a touchscreen. By the end of 2016, the business class seats of Qantas' entire Airbus A330 fleet had been refitted. Airbus A330 Business Suites are available on Asian routes, transcontinental routes across Australia and smaller routes such as the East Coast triangle. Updated versions of this seat were fitted to the airline's new Boeing 787 fleet from late 2017. Business Skybeds Business Skybeds are offered exclusively on selected A380 aircraft. On the Airbus A380, 64 fully flat Skybed seats are available with seat pitch (converting to a 200 cm long bed). These seats are located on the upper deck in a 2-2-2 configuration in two separate cabins. Features include a 30 cm touchscreen monitor with 1,000 AVOD programmes and an on-board lounge. Airbus A380 Business Skybeds are available on Qantas' flagship routes such as Australia to/from London via Singapore, Los Angeles, Dallas, and Hong Kong (seasonal). The Skybed 1 (Mark I) version of the lie-flat seats, featured between 2003 and 2019, had of seat pitch and width; however, passengers slept at a distinct slope to the cabin floor. The Skybed 2 (Mark II) version, introduced in 2008, has a pitch and allows passengers to lie fully horizontal. On the now-retired Boeing 747, seating was in a 2-3-2 configuration on the main deck and a 2–2 configuration on the upper deck. Skybed seats on Boeing 747s featured a touchscreen monitor with 400 AVOD programs. Before their retirement, Boeing 747 Business Skybeds were available on Asian, African, and South American routes. In 2019, Qantas began the process of retrofitting its Airbus A380 aircraft with new Business Suites as offered on Airbus A330 and Boeing 787 aircraft. The aircraft will gain six business class seats compared to the previous configuration. Complimentary access to the Qantas business class lounge (or affiliated lounges) is also offered. Premium Economy Premium economy class is offered on all Airbus A380 and Boeing 787–9 aircraft. On the Airbus A380, the seat pitch ranges from , with a width of . On the Boeing 787, it is configured in a 2-3-2 seating arrangement around the middle of the aircraft, whereas it is in a 2-3-2 seating arrangement at the rear of the upper deck on the A380. The total number of seats depends on the aircraft type: A380s have 35–60 seats (depending on the configuration) and 787s have 28 seats. Qantas premium economy is presented as a lighter business class product, whereas most other airlines' premium economy is often presented as a higher economy class; however, Qantas premium economy does not offer access to premium lounges, and meals are only a slightly uprated version of economy class meals. In 2019, Qantas began the process of retrofitting its Airbus A380 aircraft with new Premium Economy seats, as offered on Boeing 787 aircraft. The aircraft will gain 25 premium economy seats compared to the previous configuration. Economy International Economy class is available on all Qantas mainline passenger aircraft. Seat pitch is usually and seat width ranges from . Layouts are 3–3 on the 737, 2-4-2 on the A330, 3-3-3 on the B787-9 and 3-4-3 on the 747. On the A380, the layout is 3-4-3 and there are four self-service snack bars located between cabins.
In 2019, Qantas began the process of retrofitting its Airbus A380 aircraft, a refit that includes new Economy seats with new seat cushions and improved inflight entertainment, as offered on Boeing 787 aircraft. The aircraft will have fewer economy seats compared to the previous configuration due to an increase in the number of premium seats. In-flight entertainment Every Qantas mainline aircraft has some form of audio-video entertainment. Qantas has several types of in-flight entertainment (IFE) systems installed on its aircraft and refers to the in-flight experience as "On:Q". Audio-video entertainment systems The "Total Entertainment System" by Rockwell Collins was featured on selected domestic and international aircraft between 2000 and 2019. This AVOD system included personal LCD screens in all classes, located in the seat back for economy and business class, and in the armrest for premium economy and first class. The Mainscreen System is featured on selected Boeing 737–800 aircraft. This entertainment system, introduced between 2002 and 2011, has overhead video screens as the main form of entertainment. Movies are shown on the screens on lengthier flights and TV programmes on shorter flights. A news telecast will usually feature at the start of the flight. Audio options are less varied than on the Q, iQ or Total Entertainment systems. The "iQ" inflight entertainment system by Panasonic Avionics Corporation is featured on all Boeing 747, and selected Airbus A380 and Boeing 737–800 aircraft. This audio video on demand (AVOD) experience, introduced in 2008, is based on the Panasonic Avionics system and features expanded entertainment options; touch screens; new communications-related features such as Wi-Fi and mobile phone functionality; and increased support for personal electronics (such as USB and iPod connectivity). The "Q" inflight entertainment system by Panasonic Avionics Corporation, in collaboration with Massive Interactive, is featured on all Airbus A330-300, A330-200 and Boeing 787 aircraft and selected Airbus A380 aircraft. This audio video on demand (AVOD) experience, introduced in 2014 and updated in 2018 on selected aircraft, is based on the Panasonic eX3 system and features extensive entertainment options; enhanced touch screens; communications-related features such as Wi-Fi and mobile phone functionality; and increased support for personal electronics (such as USB and iPod connectivity). A "my flight" feature offers access to maps, playlists, and a service timeline showing when drinks and meals will be served and the best time for resting on long-haul flights. Wireless entertainment systems and Wi-Fi Q Streaming is an in-flight entertainment system in which content is streamed to cabin iPads or passengers' personal devices; it is available in all classes on selected aircraft. A selection of movies, TV, music, and children's programming is available. In 2007, Qantas conducted a three-month trial of AeroMobile mobile telephony on a Boeing 767 operating domestic services. During the trial, passengers were allowed to send and receive text messages and emails but were not able to make or receive calls. Since 2014, Sky News Australia has provided multiple news bulletins both in-flight and in Qantas-branded lounges. Previously, the Australian Nine Network provided a news bulletin for Qantas entitled Nine's Qantas Inflight News, which was the same broadcast as Nine's Early Morning News; however, Nine lost the contract to Sky News.
In July 2015, Qantas signed a deal with American cable network HBO to provide over 120 hours of in-flight television programming from the network, updated monthly, as well as original lifestyle and entertainment programming from both Foxtel and the National Geographic Channel. In 2017 Qantas commenced rolling out complimentary high-speed Wi-Fi on domestic aircraft. The service utilises NBN Co Sky Muster satellites to deliver higher speeds than onboard Wi-Fi generally offers. Previously, in July 2007, Qantas had announced that Wi-Fi would be available on its long-haul A380s and 747-400s, although that system ultimately did not proceed following trials. Inflight magazine Qantas: The Australian Way is the airline's in-flight magazine. In mid-2015, the magazine ended a 14-year publishing deal with Bauer Media, switching its publisher to Medium Rare. Services The Qantas Club Facilities The Qantas Club is the airline lounge for Qantas with airport locations around Australia and the world. Additionally, Qantas operates dedicated international first-class lounges in Sydney, Melbourne, Auckland, Los Angeles and Singapore. Domestically, Qantas also offers dedicated Business Lounges at Sydney, Melbourne, Brisbane, Canberra and Perth for domestic Business Class, Qantas Platinum and Platinum One, and OneWorld Emerald frequent flyers. In April 2013, Qantas opened its new flagship lounge in Singapore, the Qantas Singapore Lounge. This replaced the former separate first- and business-class lounges as a result of the new Emirates alliance. Similar combined lounges were also opened in Hong Kong in April 2014 and in Brisbane in October 2016. These new lounges provide the same service currently offered by Sofitel in its flagship First lounges in Sydney and Melbourne and a dining experience featuring Neil Perry's Spice Temple inspired dishes and signature cocktails. Lounge access Qantas Club Members, Gold Frequent Flyers, and Oneworld Sapphire holders are permitted to enter domestic Qantas Clubs when flying on Qantas or Jetstar flights along with one guest who need not be travelling. Platinum and Oneworld Emerald Members are permitted to bring in two guests who do not need to be travelling. Internationally, members use Qantas International Business Class lounges (or the Oneworld equivalent). Guests of the member must be travelling to gain access to international lounges. When flying with American Airlines, members have access to Admirals Club lounges and when flying on British Airways, members have access to British Airways' Terraces and Galleries Lounges. Platinum Frequent Flyers had previously been able to access the Qantas Club in Australian domestic terminals at any time, regardless of whether they were flying that day. Travellers holding Oneworld Sapphire or Emerald status are also allowed in Qantas Club lounges worldwide. Access to Qantas First lounges is open to passengers travelling on internationally operated Qantas or
Oneworld first-class flights, as well as Qantas platinum and Oneworld emerald frequent flyers. Emirates first-class passengers are also eligible for access to the Qantas first lounges in Sydney and Melbourne. The Qantas Club also offers membership by paid subscription (one, two, or four years) or by achievement of Gold or Platinum frequent flyer status. Benefits of membership include lounge access, priority check-in, priority luggage handling and increased luggage allowances. Qantas Frequent Flyer The Qantas frequent-flyer program is aimed at rewarding customer loyalty. The program is long-standing, although the date of its inception has generated some commentary: Qantas states the program launched in 1987, although other sources claim the current program was launched in the early 1990s, with a Captain's Club program existing before that. Points are accrued based on distance flown, with bonuses that vary by travel class. Points can also be earned on other Oneworld airlines as well as through other non-airline partners. Points can be redeemed for flights or upgrades on flights operated by Qantas, Oneworld airlines, and other partners. Other partners include credit cards, car rental companies, hotels and many others. Flights with Qantas and selected partner airlines earn Status Credits, and accumulating these allows progression to Silver status (Oneworld Ruby), Gold status (Oneworld Sapphire), and Platinum and Platinum One status (Oneworld Emerald).
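The accrual rule sketched above (base points driven by distance flown, scaled by a class bonus) can be illustrated with a toy calculator. This is a minimal sketch only: the rates, class multipliers and route distance below are hypothetical placeholders, not Qantas' actual earn tables.

```python
# Toy frequent-flyer points calculator. All rates are HYPOTHETICAL
# placeholders illustrating "distance flown x class bonus"; the real
# Qantas earn tables are more detailed and differ by fare type.

CLASS_BONUS = {            # invented multipliers, for illustration only
    "economy": 1.0,
    "premium_economy": 1.25,
    "business": 1.5,
    "first": 2.0,
}

def points_earned(distance_miles: float, travel_class: str) -> int:
    """Base points equal miles flown, scaled by the class bonus."""
    return round(distance_miles * CLASS_BONUS[travel_class])

# Hypothetical example: a roughly 3,900-mile sector flown in business
print(points_earned(3900, "business"))  # -> 5850
```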
Membership of the program has grown significantly since 2000, when the program had 2.4 million members. By 2005 membership had grown to 4.3 million, then to 7.2 million by 2010 and 10.8 million in 2015. As of 2018, the program has 12.3 million members, equivalent to nearly half of the Australian population. Qantas has faced criticism regarding the availability of seats for members redeeming points. In 2004, the Australian Competition and Consumer Commission directed Qantas to provide greater disclosure to members regarding the availability of frequent-flyer seats. In March 2008, an analyst at JPMorgan Chase suggested that the Qantas frequent-flyer program could be worth A$2 billion (US$1.9 billion), representing more than a quarter of the total market value of Qantas. On 1 July 2008 a major overhaul of the program was announced. The first of two key new features was Any Seat rewards, in which members could redeem, at a price, any seat on an aircraft rather than just selected seats. The second was Points Plus Pay, which enabled members to use a combination of cash and points to redeem an award. Additionally, the Frequent Flyer store was expanded to include a greater range of products and services. Announcing the revamp, Qantas confirmed it would be seeking to raise about A$1 billion in 2008 by selling up to 40% of the frequent flyer program. However, in September 2008, it stated it would defer the float, citing volatile market conditions. Accidents and incidents It is often claimed that Qantas has never had an aircraft crash. While it is true that the company has neither lost a jet airliner nor had any jet fatalities, it had eight fatal accidents and an aircraft shot down between 1927 and 1945, with the loss of 63 people. Half of these accidents and the shoot-down occurred during World War II, when the Qantas aircraft were operating on behalf of Allied military forces. Post-war, it lost another four aircraft (one was owned by BOAC and operated by Qantas in a pooling arrangement) with a total of 21 people killed. The last fatal accidents suffered by Qantas were in 1951, with three fatal crashes in five months. Qantas' safety record saw the airline named the world's safest airline for seven years in a row, from 2012 until 2019, and again in 2021. Since the end of World War II, the following accidents and incidents have occurred: On 23 March 1946, an Avro Lancastrian registered G-AGLX disappeared while flying over the Indian Ocean. The BOAC-owned aircraft was being operated by Qantas on the Karachi-Sydney part of the two airlines' joint service from London to Sydney. It disappeared with seven passengers and crew on board between Colombo, Ceylon (now Sri Lanka), and the Cocos (Keeling) Islands, approximately three hours before it was due to arrive at the Cocos Islands. On 7 April 1949, an Avro Lancastrian registered VH-EAS swung on landing at Dubbo, New South Wales during a training flight, causing the landing gear to collapse. The aircraft was destroyed by fire, but the crew evacuated safely. On 16 July 1951, a de Havilland Australia DHA-3 Drover registered VH-EBQ crashed off the coast of New Guinea (in the Huon Gulf near the mouth of the Markham River) after the centre engine's propeller failed. The pilot and the six passengers on board were killed.
On 21 September 1951, a de Havilland DH.84 Dragon registered VH-AXL crashed in mountainous country southeast of Arona in the central highlands of New Guinea; no passengers were on board, but the pilot was killed. On 13 December 1951, a de Havilland DH.84 Dragon registered VH-URV crashed in mountainous country near Mount Hagen, in the central highlands of New Guinea. The pilot and the two passengers were killed. To date, this was the last fatal accident suffered by Qantas. On 24 August 1960, a Lockheed L-1049 Super Constellation registered VH-EAC crashed on take-off at Mauritius en route to the Cocos Islands, Australia. The take-off was aborted following an engine failure; the aircraft ran off the runway and was destroyed by fire. There were no fatalities. On 23 September 1999, Qantas Flight 1, a Boeing 747–400 registered VH-OJH, overran the runway while landing at Bangkok, Thailand, during a heavy thunderstorm. The aircraft came to a stop on a golf course; there were no fatalities. The Australian Transport Safety Bureau criticised numerous inadequacies in Qantas' operational and training processes. On 25 July 2008, Qantas Flight 30, a Boeing 747–400 registered VH-OJK, suffered a ruptured fuselage and decompression as a result of an oxygen tank explosion over the South China Sea. En route from Hong Kong International Airport to Melbourne Airport, the aircraft made an emergency landing in the Philippines with no injuries. On 7 October 2008, an Airbus A330-300 registered VH-QPA, travelling from Singapore Changi Airport to Perth, Western Australia as Qantas Flight 72, suffered a rapid loss of altitude in two sudden uncommanded pitch-down manoeuvres, causing serious injuries, while flying near Learmonth. The aircraft landed safely at Learmonth, with 14 people requiring transportation by air ambulance to Perth. Another 30 people required hospital treatment, while an additional 30 people had injuries not requiring hospital treatment. Initial investigations identified an inertial reference system fault in the Number-1 Air Data Inertial Reference Unit as the likely origin of the event. On receiving a false indication of a very high angle of attack, the flight control systems commanded a pitch-down movement, reaching a maximum of 8.5 degrees pitch down. On 4 November 2010, Qantas Flight 32, an Airbus A380 registered VH-OQA, fitted with four Rolls-Royce Trent 972 engines, suffered an uncontained turbine disc failure of its left inboard engine shortly after taking off from Singapore Changi Airport en route to Sydney. The aircraft returned to Singapore and landed safely. None of the 440 passengers or 29 crew on board were injured. Extortion attempts On 26 May 1971 Qantas received a call from a "Mr. Brown" claiming that there was a bomb planted on a Hong Kong-bound jet and demanding $500,000 in unmarked $20 notes. The caller and threat were taken seriously when he directed police to an airport locker where a functional bomb was found. Arrangements were made to pick up the money in front of the head office of the airline in the heart of the Sydney business district. Qantas paid the money and
QED is a line-oriented computer text editor that was developed by Butler Lampson and L. Peter Deutsch for the Berkeley Timesharing System running on the SDS 940. It was implemented by L. Peter Deutsch and Dana Angluin between 1965 and 1966. QED (for "quick editor") addressed teleprinter usage, but systems "for CRT displays [were] not considered, since many of their design considerations [were] quite different." Later implementations Ken Thompson later wrote a version for CTSS; this version was notable for introducing regular expressions. Thompson rewrote QED in BCPL for Multics. The Multics version was ported to the GE-600 system used at Bell Labs in the late 1960s under GECOS, and later GCOS after Honeywell took over GE's computer business. The GECOS-GCOS port used I/O routines written by A. W. Winklehoff. Dennis Ritchie, Ken Thompson and Brian Kernighan wrote the QED manuals used at Bell Labs. Given that the authors were the primary developers of the Unix operating system, it is natural that QED had a strong influence on the classic UNIX text editors ed, sed and their descendants such as ex and sam, and more distantly AWK and Perl. A version of QED named FRED (Friendly Editor) was written at the University of Waterloo for Honeywell systems by Peter Fraser. A University of Toronto team consisting of Tom Duff, Rob Pike, Hugh Redelmeier, and David Tilbrook implemented a version of QED that runs on UNIX; David Tilbrook later included QED as part of his QEF tool set. QED was also used as a character-oriented editor on the Norwegian-made Norsk Data systems, first under Nord TSS, then Sintran III. It was implemented for the Nord-1 computer in 1971 by Bo Lewendal, who, after working with Deutsch and Lampson at Project Genie and at the Berkeley Computer Corporation, had taken a job with Norsk Data (and who developed the
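To make the lineage above concrete, the substitution idea that QED's descendants ed and sed made famous has the shape s/pattern/replacement/. The sketch below reproduces that behaviour with Python's re module; the editor commands in the comments are shown in ed/sed-family syntax purely as a point of comparison, not as QED's exact dialect.

```python
import re

line = "the quick brown fox"

# Equivalent of the editor command  s/quick/slow/
edited = re.sub(r"quick", "slow", line)
print(edited)  # -> the slow brown fox

# Regular expressions also support grouped back-references,
# e.g.  s/\(fox\)/clever \1/  in ed-style syntax:
print(re.sub(r"(fox)", r"clever \1", edited))  # -> the slow brown clever fox
```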
Unlike Uday, who was known for extravagance and erratic, violent behavior, Qusay kept a low profile, so details regarding his actions and roles are obscure. Iraqi dissidents claimed that Qusay was responsible for the killing of many political activists. The Sunday Times reported that Qusay Hussein ordered the killing of Khalis Mohsen al-Tikriti, an engineer at the military industrialization organization, because he believed Mohsen was planning to leave Iraq. In 1998, Iraqi opposition groups accused Qusay Hussein of ordering the execution of thousands of political prisoners after hundreds of inmates were similarly executed to make room for new prisoners in crowded jails. Hussein's service in the Iraqi Republican Guard began in 2000. It is believed that he became the supervisor of the Guard and the head of internal security forces (possibly the Special Security Organization (SSO)), and had authority over other Iraqi military units. Hours before the 2003 invasion of Iraq, Qusay withdrew approximately $1bn from the central bank in Baghdad, acting on personal orders from Saddam. He arrived at the bank in Baghdad at 4 a.m. on 18 March (hours before the first US strikes), seized around $900m in $100 bills and a further $100m in euros, loaded them into three tractor-trailers and left. Death On the afternoon of 22 July 2003, troops of the 101st Airborne 3/327th Infantry HQ and C-Company, aided by U.S. Special Forces, killed Qusay Hussein, his 14-year-old son Mustafa, his older brother Uday Hussein and a bodyguard during a raid on a house in the northern Iraqi city of Mosul. Acting on a tip provided the previous day by Nawaf al-Zaidan, an alleged cousin and friend of Saddam Hussein who had been sheltering the four in his home for numerous weeks, a special forces team attempted to apprehend everyone in the house at the time. After being fired on, the special forces moved back and called for backup. After Task Force 121 members were wounded, the 3/327th Infantry surrounded and fired on the house with a TOW missile, a Mark 19 automatic grenade launcher, M2 .50-caliber machine guns and small arms. After about four hours of battle (the whole operation lasted six hours), the soldiers entered the house and found four dead, including the two brothers and their bodyguard. There were reports that Qusay Hussein's 14-year-old son Mustafa was the fourth body found. Brigadier general Frank Helmick, the assistant commander of the 101st Airborne, commented that all occupants of the house died during the gun battle before U.S. troops were able to enter. Soldiers, who tried to enter the house three times, encountered resistance with AK-47s and grenades in the first two attempts. Uday, Qusay and the bodyguard defended the street side and the first floor from the bathroom at the front of the house; Qusay's son took cover in a bedroom at the back and defended himself. The American forces then repeatedly bombarded the house and fired missiles. Three adults were thought to have died from the TOW missile fired into the front of the house. In the third attempt, the soldiers killed Qusay's 14-year-old son, the last remaining occupant, after he fired on them. Brigade commander Col. Joe Anderson said an Arabic announcement was made at 10 a.m. on the day, calling on the people inside to come out peacefully. The only answer he received was gunfire. An experienced team of commandos tried to attack the building, but they had to retreat under fire. Four American soldiers were injured. Anderson then ordered his men to fire with .50-caliber heavy machine guns. Uday and Qusay Hussein refused to surrender even after a helicopter fired a rocket and the Strike Brigade fired 40mm grenades at them. The colonel decided that more firepower was necessary to take down the brothers, leading to 12 TOW missiles being fired into the building. After his sons' deaths, Saddam Hussein recorded a tape in which he said, "Beloved Iraqis, your brothers Uday and Qusay, and Mustafa, the son of Qusay, took a stand of faith, which pleases God, makes a friend happy, and makes an enemy angry. They stood in the arena of jihad in Mosul, after a valiant battle with the enemy that lasted six hours. The armies of aggression mobilised all types of weapons of the ground forces against them and succeeded to harm them only when they used planes against the house where they were. Thus, they adopted a stand with which God has honoured this Hussein family so that the present would be a continuation of the brilliant, genuine, faithful, and honourable past. We thank God for what he has ordained for us when he honoured us with their martyrdom for his sake. We ask Almighty God to satisfy them and all the righteous martyrs after they satisfied him with their faithful Jihadist stand. Had Saddam Hussein had 100 children, other than Uday and Qusay, Saddam Hussein would have sacrificed them on the same path. God honoured us by their martyrdom. If you had killed Uday, Qusay, Mustafa, and another mujahideen man with them, all the youths of our nation and the youths of Iraq are Uday, Qusay, and Mustafa in the fields of jihad." On 23 July 2003, the American command stated that dental records had conclusively identified two of the dead men as Saddam Hussein's sons. It also announced that the informant (possibly the owner of the villa, Nawaf al-Zaidan, in Mosul in which the brothers were killed) would receive the combined $30 million reward previously offered for their apprehension.
are fifteen possible rhyme schemes, but the most traditional and common are ABAA, AAAA, ABAB, and ABBA. Forms The heroic stanza or elegiac stanza consists of lines of iambic pentameter, with a rhyme scheme of ABAB or AABB. An example can be found in the following lines of Thomas Gray's "Elegy Written in a Country Churchyard". The curfew tolls the knell of parting day, The lowing herd wind slowly o'er the lea, The plowman homeward plods his weary way, And leaves the world to darkness and to me. The hymnal stanza alternates lines of iambic tetrameter and iambic trimeter, with a rhyme scheme of ABCB. An example can be found in Robert Burns's "A Red, Red Rose". O, my luve’s like a red, red rose, That’s newly sprung in June; O, my luve’s like the melodie That’s sweetly played in tune. The memoriam stanza consists of iambic tetrameter lines and a rhyme scheme of ABBA. An example can be found in Alfred Lord Tennyson's "In Memoriam A.H.H.". So word by word, and line by line, The dead man touch’d me from the past, And all at once it seem’d at last The living soul was flash’d on mine. An envelope stanza is one in which the same stanza starts and ends a poem with little change of wording, although the term is also used for stanzas that have a symmetrical rhyme scheme of ABBA. An example can be found in William Blake's "The Tyger". (these are the first and last stanzas of the
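The count of fifteen possible rhyme schemes mentioned above can be checked mechanically: a rhyme scheme is just a partition of the four lines into rhyming groups, so the number of distinct schemes is the Bell number B(4) = 15. A short enumeration in Python:

```python
from itertools import product

def canonical(scheme):
    """Relabel letters in order of first appearance, so BAAB -> ABBA."""
    mapping, out = {}, []
    for ch in scheme:
        if ch not in mapping:
            mapping[ch] = "ABCD"[len(mapping)]
        out.append(mapping[ch])
    return "".join(out)

# Two letterings denote the same scheme iff they group the lines the same
# way, so counting canonical forms counts set partitions of 4 lines.
schemes = {canonical(s) for s in product("ABCD", repeat=4)}
print(len(schemes))     # -> 15, the Bell number B(4)
print(sorted(schemes))  # AAAA, AAAB, ..., ABAB, ABBA, ABCB, ..., ABCD
```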
a particularly widespread verse form: the form "rubaiyat" reflects the plural. One of FitzGerald's verses may serve to illustrate: Come, fill the Cup, and in the fire of Spring Your Winter garment of Repentance fling: The Bird of Time has but a little way To flutter—and the Bird is on the Wing. The Midnight Songs poetry form is from fourth-century China, consisting of regular five-character lines, with each quatrain formed from a pair of rhymed couplets. The subject matter involves the personal thoughts and feelings of a courtesan during the four seasons, to which the quatrains are individually assigned. Shairi (also known as the Rustavelian quatrain) is an AAAA rhyming form used mainly in The Knight in the Panther's Skin. The Shichigon-zekku form is used in Classical Chinese poetry and Japanese poetry. This type of quatrain uses lines seven characters in length. Both rhyme and rhythm are key elements, although the former is not restricted to falling at the end of the phrase. Ballad meter (The examples from "The Unquiet Grave" and "The Wife of Usher's Well" are both examples of ballad meter.) Decasyllabic quatrain, used by John Dryden in Annus Mirabilis, William Davenant in Gondibert, and Thomas Gray Various hymns employ specific forms, such as the common meter, long meter, and short meter. In the Malay tradition, syair, pantun and pantoum are arranged
way, invented in 1961 by Gell-Mann and Yuval Ne'eman. Gell-Mann and George Zweig, correcting an earlier approach of Shoichi Sakata, went on to propose in 1963 that the structure of the groups could be explained by the existence of three flavors of smaller particles inside the hadrons: the quarks. Gell-Mann also briefly discussed a field theory model in which quarks interact with gluons. Perhaps the first remark that quarks should possess an additional quantum number was made as a short footnote in a preprint of Boris Struminsky, in connection with the Ω− hyperon being composed of three strange quarks with parallel spins (a situation that was peculiar because, since quarks are fermions, such a combination is forbidden by the Pauli exclusion principle). Boris Struminsky was a PhD student of Nikolay Bogolyubov, who suggested the problem considered in this preprint and advised Struminsky in this research. In early 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom. This work was also presented by Albert Tavkhelidze, without his collaborators' consent, at an international conference in Trieste, Italy, in May 1965. A similar mysterious situation existed with the Δ++ baryon; in the quark model, it is composed of three up quarks with parallel spins. In 1964–65, Greenberg and Han–Nambu independently resolved the problem by proposing that quarks possess an additional SU(3) gauge degree of freedom, later called color charge. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons. Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was defined as a particle that could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: he meant quarks are confined, but he was also implying that the strong interactions could probably not be fully described by quantum field theory. Richard Feynman argued that high energy experiments showed quarks are real particles: he called them partons (since they were parts of hadrons). By particles, Feynman meant objects that travel along paths, elementary particles in a field theory. The difference between Feynman's and Gell-Mann's approaches reflected a deep split in the theoretical physics community. Feynman thought the quarks have a distribution of position or momentum, like any other particle, and he (correctly) believed that the diffusion of parton momentum explained diffractive scattering. Although Gell-Mann believed that certain quark charges could be localized, he was open to the possibility that the quarks themselves could not be localized because space and time break down. This was the more radical approach of S-matrix theory. James Bjorken proposed that pointlike partons would imply certain relations in deep inelastic scattering of electrons and protons, which were verified in experiments at SLAC in 1969. This led physicists to abandon the S-matrix approach for the strong interactions. In 1973 the concept of color as the source of a "strong field" was developed into the theory of QCD by physicists Harald Fritzsch and Heinrich Leutwyler, together with physicist Murray Gell-Mann.
In particular, they employed the general field theory developed in 1954 by Chen Ning Yang and Robert Mills (see Yang–Mills theory), in which the carrier particles of a force can themselves radiate further carrier particles. (This is different from QED, where the photons that carry the electromagnetic force do not radiate further photons.) The discovery of asymptotic freedom in the strong interactions by David Gross, David Politzer and Frank Wilczek allowed physicists to make precise predictions of the results of many high energy experiments using the quantum field theory technique of perturbation theory. Evidence of gluons was discovered in three-jet events at PETRA in 1979. These experiments became more and more precise, culminating in the verification of perturbative QCD at the level of a few percent at LEP, at CERN. The other side of asymptotic freedom is confinement. Since the force between color charges does not decrease with distance, it is believed that quarks and gluons can never be liberated from hadrons. This aspect of the theory is verified within lattice QCD computations, but is not mathematically proven. One of the Millennium Prize Problems announced by the Clay Mathematics Institute requires a claimant to produce such a proof. Other aspects of non-perturbative QCD are the exploration of phases of quark matter, including the quark–gluon plasma. The relation between the short-distance particle limit and the confining long-distance limit is one of the topics recently explored using string theory, the modern form of S-matrix theory. Theory Some definitions Every field theory of particle physics is based on certain symmetries of nature whose existence is deduced from observations. These can be local symmetries, which act independently at each point in spacetime (each such symmetry is the basis of a gauge theory and requires the introduction of its own gauge bosons), or global symmetries, whose operations must be applied simultaneously to all points of spacetime. QCD is a non-abelian gauge theory (or Yang–Mills theory) of the SU(3) gauge group obtained by taking the color charge to define a local symmetry. Since the strong interaction does not discriminate between different flavors of quark, QCD has approximate flavor symmetry, which is broken by the differing masses of the quarks. There are additional global symmetries whose definitions require the notion of chirality, the discrimination between left- and right-handed particles. If the spin of a particle has a positive projection on its direction of motion then it is called right-handed; otherwise, it is left-handed. Chirality and handedness are not the same, but become approximately equivalent at high energies. Chiral symmetries involve independent transformations of these two types of particle. Vector symmetries (also called diagonal symmetries) mean the same transformation is applied on the two chiralities. Axial symmetries are those in which one transformation is applied on left-handed particles and the inverse on the right-handed particles. Additional remarks: duality As mentioned, asymptotic freedom means that at large energy – this corresponds also to short distances – there is practically no interaction between the particles. This is in contrast – more precisely, one would say dual – to what one is used to, since usually one connects the absence of interactions with large distances. However, as already mentioned in the original paper of Franz Wegner, a solid state theorist who introduced simple gauge invariant lattice models in 1971, the high-temperature behaviour of the original model, e.g. the strong decay of correlations at large distances, corresponds to the low-temperature behaviour of the (usually ordered!) dual model, namely the asymptotic decay of non-trivial correlations, e.g. short-range deviations from almost perfect arrangements, at short distances. Here, in contrast to Wegner, we have only the dual model, which is the one described in this article.
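The weakening of the interaction at high energy described above can be made quantitative with the standard one-loop running of the strong coupling, $\alpha_s(Q^2) = \alpha_s(\mu^2)\big/\big[1 + \tfrac{\beta_0}{4\pi}\,\alpha_s(\mu^2)\ln(Q^2/\mu^2)\big]$ with $\beta_0 = 11 - \tfrac{2}{3}n_f$. The sketch below evaluates this textbook formula, taking $\alpha_s(M_Z) \approx 0.118$ as the reference value and, for simplicity, ignoring quark-flavor thresholds:

```python
import math

def alpha_s(Q, alpha_ref=0.118, mu=91.19, n_f=5):
    """One-loop strong coupling at scale Q (GeV), run from alpha_ref at mu (GeV)."""
    beta0 = 11 - 2 * n_f / 3
    return alpha_ref / (1 + alpha_ref * beta0 / (4 * math.pi)
                        * math.log(Q**2 / mu**2))

# Asymptotic freedom: the coupling shrinks logarithmically as Q grows,
# i.e. quarks and gluons interact ever more weakly at short distances.
for Q in (10.0, 91.19, 500.0, 2000.0):
    print(f"alpha_s({Q:7.2f} GeV) = {alpha_s(Q):.4f}")
```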
Symmetry groups The color group SU(3) corresponds to the local symmetry whose gauging gives rise to QCD. The electric charge labels a representation of the local symmetry group U(1), which is gauged to give QED: this is an abelian group. If one considers a version of QCD with $N_f$ flavors of massless quarks, then there is a global (chiral) flavor symmetry group $SU_L(N_f) \times SU_R(N_f) \times U_B(1) \times U_A(1)$. The chiral symmetry is spontaneously broken by the QCD vacuum to the vector (L+R) subgroup $SU_V(N_f)$ with the formation of a chiral condensate. The vector symmetry $U_B(1)$ corresponds to the baryon number of quarks and is an exact symmetry. The axial symmetry $U_A(1)$ is exact in the classical theory, but broken in the quantum theory, an occurrence called an anomaly. Gluon field configurations called instantons are closely related to this anomaly. There are two different types of SU(3) symmetry: there is the symmetry that acts on the different colors of quarks, which is an exact gauge symmetry mediated by the gluons, and there is also a flavor symmetry that rotates different flavors of quarks to each other, or flavor SU(3). Flavor SU(3) is an approximate symmetry of the vacuum of QCD, and is not a fundamental symmetry at all. It is an accidental consequence of the small mass of the three lightest quarks. In the QCD vacuum there are vacuum condensates of all the quarks whose mass is less than the QCD scale. This includes the up and down quarks, and to a lesser extent the strange quark, but not any of the others. The vacuum is symmetric under SU(2) isospin rotations of up and down, and to a lesser extent under rotations of up, down, and strange, or the full flavor group SU(3), and the observed particles make isospin and SU(3) multiplets. The approximate flavor symmetries do have associated gauge bosons, observed particles like the rho and the omega, but these particles are nothing like the gluons and they are not massless. They are emergent gauge bosons in an approximate string description of QCD. Lagrangian The dynamics of the quarks and gluons are controlled by the quantum chromodynamics Lagrangian. The gauge invariant QCD Lagrangian is $$\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}_i\left(i(\gamma^\mu D_\mu)_{ij} - m\,\delta_{ij}\right)\psi_j - \frac{1}{4} G^a_{\mu\nu} G^{\mu\nu}_a$$ where $\psi_i(x)$ is the quark field, a dynamical function of spacetime, in the fundamental representation of the SU(3) gauge group, indexed by $i$ and $j$ running from 1 to 3; $D_\mu$ is the gauge covariant derivative; the $\gamma^\mu$ are Dirac matrices connecting the spinor representation to the vector representation of the Lorentz group. Herein, the gauge covariant derivative $D_\mu = \partial_\mu - ig\,t_a\,\mathcal{A}^a_\mu$ couples the quark field with a coupling strength $g$ to the gluon fields via the infinitesimal SU(3) generators $t_a$ in the fundamental representation. An explicit representation of these generators is given by $t_a = \lambda_a/2$, wherein the $\lambda_a$ are the Gell-Mann matrices. The symbol $G^a_{\mu\nu}$ represents the gauge invariant gluon field strength tensor, analogous to the electromagnetic field strength tensor, $F^{\mu\nu}$, in quantum electrodynamics. It is given by $$G^a_{\mu\nu} = \partial_\mu \mathcal{A}^a_\nu - \partial_\nu \mathcal{A}^a_\mu + g f^{abc} \mathcal{A}^b_\mu \mathcal{A}^c_\nu\,,$$ where the $\mathcal{A}^a_\mu$ are the gluon fields, dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by $a$, $b$ and $c$ running from 1 to 8; and the $f^{abc}$ are the structure constants of SU(3).
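The generator algebra just defined can be verified numerically: building the eight Gell-Mann matrices, forming $t_a = \lambda_a/2$, and extracting the structure constants from the trace identity $f^{abc} = -2i\,\mathrm{Tr}([t_a, t_b]\,t_c)$, which follows from the normalization $\mathrm{Tr}(t_a t_b) = \tfrac{1}{2}\delta_{ab}$. A short sketch:

```python
import numpy as np

# The eight Gell-Mann matrices lambda_a (SU(3) generators: t_a = lambda_a/2).
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
t = lam / 2

def comm(a, b):
    return a @ b - b @ a

# Structure constants from f_abc = -2i Tr([t_a, t_b] t_c)
f = np.real(np.array([[[-2j * np.trace(comm(t[a], t[b]) @ t[c])
                        for c in range(8)]
                       for b in range(8)]
                      for a in range(8)]))
print(round(f[0, 1, 2], 6))  # f_123 = 1
print(round(f[3, 4, 7], 6))  # f_458 = sqrt(3)/2 ~ 0.866025
```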
It is given by:

$$G^a_{\mu\nu} = \partial_\mu \mathcal{A}^a_\nu - \partial_\nu \mathcal{A}^a_\mu + g\, f^{abc}\, \mathcal{A}^b_\mu \mathcal{A}^c_\nu,$$

where $\mathcal{A}^a_\mu(x)$ are the gluon fields, dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by a, b and c running from 1 to 8; and $f^{abc}$ are the structure constants of SU(3). Note that the rules to raise or lower the a, b, or c indices are trivial, (+, ..., +), so that $f^{abc} = f_{abc} = f^a{}_{bc}$, whereas for the μ or ν indices one has the non-trivial relativistic rules corresponding to the metric signature (+ − − −). The variables m and g correspond to the quark mass and coupling of the theory, respectively, which are subject to renormalization. An important theoretical concept is the Wilson loop (named after Kenneth G. Wilson). In lattice QCD, the final term of the above Lagrangian is discretized via Wilson loops, and more generally the behavior of Wilson loops can distinguish confined and deconfined phases. Fields Quarks are massive spin-1/2 fermions that carry a color charge whose gauging is the content of QCD. Quarks are represented by Dirac fields in the fundamental representation 3 of the gauge group SU(3). They also carry electric charge (either −1/3 or +2/3) and participate in weak interactions as part of weak isospin doublets. They carry global quantum numbers including the baryon number, which is 1/3 for each quark, hypercharge and one of the flavor quantum numbers. Gluons are spin-1 bosons that also carry color charges, since they lie in the adjoint representation 8 of SU(3). They have no electric charge, do not participate in the weak interactions, and have no flavor. They lie in the singlet representation 1 of all these symmetry groups. Each type of quark has a corresponding antiquark, of which the charge is exactly opposite. Dynamics According to the rules of quantum field theory, and the associated Feynman diagrams, the above theory gives rise to three basic interactions: a quark may emit (or absorb) a gluon, a gluon may emit (or absorb) a gluon, and two gluons may directly interact. This contrasts with QED, in which only the first kind of interaction occurs, since photons have no charge. Diagrams involving Faddeev–Popov ghosts must be considered too (except in the unitarity gauge). Area law and confinement Detailed computations with the
above-mentioned Lagrangian show that the effective potential between a quark and its anti-quark in a meson contains a term that increases in proportion to the distance between the quark and anti-quark (∝ r), which represents some kind of "stiffness" of the interaction between the particle and its anti-particle at large distances, similar to the entropic elasticity of a rubber band (see below). This leads to confinement of the quarks to the interior of hadrons, i.e. mesons and nucleons, with typical radii Rc, corresponding to former "Bag models" of the hadrons. The order of magnitude of the "bag radius" is 1 fm (= 10−15 m). Moreover, the above-mentioned stiffness is quantitatively related to the so-called "area law" behavior of the expectation value of the Wilson loop product PW of the ordered coupling constants around a closed loop W; i.e. −ln⟨PW⟩ is proportional to the area enclosed by the loop. For this behavior the non-abelian behavior of the gauge group is essential. Methods Further analysis of the content of the theory is complicated. Various techniques have been developed to work with QCD. Some of them are discussed briefly below. Perturbative QCD This approach is based on asymptotic freedom, which allows perturbation theory to be used accurately in experiments performed at very high energies. Although limited in scope, this approach has resulted in the most precise tests of QCD to date. Lattice QCD Among non-perturbative approaches to QCD, the most well established is lattice QCD. This approach uses a discrete set of spacetime points (called the lattice) to reduce the analytically intractable path integrals of the continuum theory to a very difficult numerical computation that is then carried out on supercomputers like the QCDOC, which was constructed for precisely this purpose.
While it is a slow and resource-intensive approach, it has wide applicability, giving insight into parts of the theory inaccessible by other means, in particular into the explicit forces acting between quarks and antiquarks in a meson. However, the numerical sign problem makes it difficult to use lattice methods to study QCD at high density and low temperature (e.g. nuclear matter or the interior of neutron stars). 1/N expansion A well-known approximation scheme, the 1/N expansion, starts from the idea that the number of colors is infinite, and makes a series of corrections to account for the fact that it is not. Until now, it has been the source of qualitative insight rather than a method for quantitative predictions. Modern variants include the AdS/CFT approach. Effective theories For specific problems, effective theories may be written down that give qualitatively correct results in certain limits. In the best of cases, these may then be obtained as systematic expansions in some parameters of the QCD Lagrangian. One such effective field theory is chiral perturbation theory or ChiPT, which is the QCD effective theory at low energies. More precisely, it is a low energy expansion based on the spontaneous chiral symmetry breaking of QCD, which is an exact symmetry when quark masses are equal to zero, but for the u, d and s quarks, which have small mass, it is still a good approximate symmetry. Depending on the number of quarks that are treated as light, one uses either SU(2) ChiPT or SU(3) ChiPT. Other effective theories are heavy quark effective theory (which expands around heavy quark mass near infinity), and soft-collinear effective theory (which expands around large ratios of energy scales). In addition to effective theories, models like the Nambu–Jona-Lasinio model and the chiral model are often used when discussing general features. QCD sum rules Based on an operator product expansion one can derive sets of relations that connect different observables with each other. Experimental tests The notion of quark flavors was prompted by the necessity of explaining the properties of hadrons during the development of the quark model. The notion of color was necessitated by the puzzle of the Δ++ baryon. This has been dealt with in the section on the history of QCD. The first evidence for quarks as real constituent elements of hadrons was obtained in deep inelastic scattering experiments at SLAC. The first evidence for gluons came in three-jet events at PETRA. Several good quantitative tests of perturbative QCD exist:
The running of the QCD coupling as deduced from many observations (see the one-loop formula below)
Scaling violation in polarized and unpolarized deep inelastic scattering
Vector boson production at colliders (this includes the Drell–Yan process)
Direct photons produced in hadronic collisions
Jet cross sections in colliders
Event shape observables at LEP
Heavy-quark production in colliders
Quantitative tests of non-perturbative QCD are fewer, because the predictions are harder to make. The best is probably the running of the QCD coupling as probed through lattice computations of heavy-quarkonium spectra. There is a recent claim about the mass of the heavy meson Bc. Other non-perturbative tests are currently at the level of 5% at best. Continuing work on masses and form factors of hadrons and their weak matrix elements are promising candidates for future quantitative tests. The whole subject of quark matter and the quark–gluon plasma is a non-perturbative test bed for QCD that still remains to be properly exploited.
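For orientation, the running coupling referred to in the list above is, at one loop, commonly written in the standard textbook form; this expression is supplied here for illustration rather than quoted from the article, with nf the number of active quark flavours and Λ the QCD scale parameter:

$$\alpha_s(Q^2) = \frac{12\pi}{\left(33 - 2 n_f\right)\,\ln\!\left(Q^2 / \Lambda^2\right)},$$

so that αs decreases logarithmically as the momentum transfer Q grows, which is asymptotic freedom expressed as a formula.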
One qualitative prediction of QCD is that there exist composite particles made solely of gluons called glueballs that have not yet been definitively observed experimentally. A definitive observation of a glueball with the properties predicted by QCD would strongly confirm the theory. In principle, if glueballs could be definitively ruled out, this would be a serious experimental blow to QCD. But, as of 2013, scientists are unable to confirm or deny the existence of glueballs definitively, despite the fact that particle accelerators have sufficient energy to generate them. Cross-relations to condensed matter physics There are unexpected cross-relations to condensed matter physics. For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses, which are systems with the usual spin degrees of freedom si = ±1 for i = 1, ..., N, with the special fixed "random" couplings Ji,k = εi J0 εk. Here the εi and εk quantities can independently and "randomly" take the values ±1, which corresponds to a very simple gauge transformation si → εi si, Ji,k → εi Ji,k εk. This means that thermodynamic expectation values of measurable quantities, e.g. of the energy H = −Σ Ji,k si sk, are invariant. However, here the coupling degrees of freedom Ji,k, which in the QCD correspond to the gluons, are "frozen" to fixed values (quenching). In contrast, in the QCD they "fluctuate" (annealing), and through the large number of gauge degrees of freedom the entropy plays an important role (see below). For positive J0 the thermodynamics of the Mattis spin glass corresponds in fact simply to a "ferromagnet in disguise", just because these systems have no "frustration" at all. This term is a basic measure in spin glass theory. Quantitatively it is identical with the loop product PW = Ji,k Jk,l ... Jn,m Jm,i along a closed loop W. However, for a Mattis spin glass – in contrast to "genuine" spin glasses – the quantity PW never becomes negative. The basic notion "frustration" of the spin-glass is actually similar to the Wilson loop quantity of the QCD. The only difference is again that in the QCD one is dealing with SU(3) matrices, and that one is dealing with a "fluctuating" quantity. Energetically, perfect absence of frustration should be non-favorable and atypical for a spin glass, which means that one should add the loop product to the Hamiltonian, by some kind of term representing a "punishment". In the QCD the Wilson loop is essential for the Lagrangian right away. The relation between the QCD and "disordered magnetic systems" (the spin glasses belong to them) was additionally stressed in a paper by Fradkin, Huberman and Shenker, which also stresses the notion of duality. A further analogy consists in the already mentioned similarity to polymer physics, where, analogously to Wilson loops, so-called "entangled nets" appear, which are important for the formation of the entropy-elasticity (force proportional to the length) of a rubber band. The non-abelian character of the SU(3) corresponds thereby to the non-trivial "chemical links", which glue different loop segments together, and "asymptotic freedom" means in the polymer analogy simply the fact that in the short-wave limit, i.e. for λw ≪ Rc (where Rc is a characteristic correlation length for the glued loops, corresponding to the above-mentioned "bag radius", while λw is the wavelength of an excitation) any non-trivial correlation vanishes totally, as if the system had crystallized.
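The area-law statement above can be put in a compact form; the notation here (string tension σ, minimal area A(W) spanned by the loop W) is supplied for illustration and is not taken from the article:

$$\langle P_W \rangle \sim e^{-\sigma\, A(W)}, \qquad\text{equivalently}\qquad V(r) \simeq \sigma\, r \quad \text{for large } r,$$

which is just the "force proportional to the length" behavior invoked in the rubber-band analogy.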
There is also a correspondence between confinement in QCD – the fact that the color field is only different from zero in the interior of hadrons – and the behaviour of the usual magnetic field in the theory of type-II superconductors: there the magnetism is confined to the interior of the Abrikosov flux-line lattice, i.e., the London penetration depth λ of that theory is analogous to the confinement radius Rc of quantum chromodynamics. Mathematically, this correspondence is supported by the second term on the r.h.s. of the Lagrangian. See also For overviews:
Standard Model
Standard model (basic details) – for its field theoretical formulation
Strong interaction
Quark
Gluon
Hadron
Colour confinement
QCD matter
Quark–gluon plasma
For details:
Gauge theory
Quantum gauge theory, BRST quantization and Faddeev–Popov ghost
Quantum field theory – a more general category
For techniques:
Lattice QCD
1/N expansion
Perturbative QCD
Soft-collinear effective theory
Heavy quark effective theory
head and tail drift around endlessly in that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, but it does admittedly slow things down a little, because the array indices must be compared to zero and the array size, which is comparable to the time taken to check whether an array index is out of bounds, which some languages do. Even so, this will certainly be the method of choice for a quick and dirty implementation, or for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement or come with libraries for dynamic lists. Such data structures need not specify a fixed capacity limit beyond memory constraints. Queue overflow results from trying to add an element onto a full queue and queue underflow happens when trying to remove an element from an empty queue. A bounded queue is a queue limited to a fixed number of items. There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations—en-queuing and de-queuing—in O(1) time. Linked list A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues. A regular singly linked list only has efficient insertion and deletion at one end. However, a small modification—keeping a pointer to the last node in addition to the first one—will enable it to implement an efficient queue. A deque can also be implemented using a modified dynamic array. Queues and programming languages Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and unshift functions to enqueue and dequeue a list (or, in reverse, one can use shift and pop), although in some cases these operations are not efficient. C++'s Standard Template Library provides a "queue" templated class which is restricted to only push/pop operations. Since J2SE 5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and (since J2SE 1.6) ArrayDeque. PHP has an SplQueue class and third party libraries like beanstalk'd and Gearman. Example A simple queue implemented in JavaScript:

class Queue {
    constructor() {
        this.items = []; // underlying dynamic array
    }
    enqueue(element) {
        this.items.push(element); // add to the back: amortized O(1)
    }
    dequeue() {
        return this.items.shift(); // remove from the front: O(n) for an array
    }
}

Purely functional implementation Queues can also be implemented as a purely functional data structure. There are two implementations. The first one only achieves O(1) per operation on average. That is, the amortized time is O(1), but individual operations can take O(n) where n is the number of elements in the queue. The second implementation is called a real-time queue and it allows the queue to be persistent with operations in O(1) worst-case time. It is a more complex implementation and requires lazy lists with memoization. Amortized queue This queue's data is stored in two singly-linked lists named f and r. The list f holds the front part of the queue. The list r holds the remaining elements (a.k.a., the rear of the queue) in reverse order.
It is easy to insert into the front of the queue by adding a node at the head of f. And, if r is not empty, it is easy to remove from the end of the queue by removing the node at the head of r. When r is empty, the list f is reversed and assigned to r and then the head of r is removed. The insert ("enqueue") always takes O(1) time. The removal ("dequeue") takes O(1) time when the list r is not empty. When r is empty, the reverse takes O(n) time, where n is the number of elements in f. But we can say it is O(1) amortized time, because every element in f had to be inserted, and we can assign a constant cost for each element in the reverse to when it was inserted. Real-time queue The real-time queue achieves O(1) time for all operations, without amortization. This discussion will be technical, so recall that, for a list l, |l| denotes its length, that NIL represents an empty list and CONS(h, t) represents the list whose head is h and whose tail is t. The data structure used to implement our queues consists of three singly-linked lists (f, r, s), where f is the front of the queue and r is the rear of the queue in reverse order. The invariant of the structure is that s is the rear of f without its |r| first elements, that is |s| = |f| − |r|. The tail of the queue (f, r, s) is then almost (tail(f), r, s), and inserting an element x into (f, r, s) is almost (f, CONS(x, r), s).
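A minimal sketch of the amortized two-list queue described above, in JavaScript to match the document's earlier example. Plain arrays stand in for the singly-linked lists, with the "head" of each list placed at the array's end so that head operations are O(1); the class and method names are illustrative, not from the text:

class AmortizedQueue {
    constructor() {
        this.f = []; // front part of the queue; head (newest element) at the array's end
        this.r = []; // rear of the queue in reverse order; head (next to dequeue) at the array's end
    }
    enqueue(x) {
        this.f.push(x); // add a node at the head of f: O(1)
    }
    dequeue() {
        if (this.r.length === 0) {
            this.r = this.f.reverse(); // O(n), paid once per element: amortized O(1)
            this.f = [];
        }
        return this.r.pop(); // remove the node at the head of r: O(1)
    }
}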
removing an element from the front is known as dequeue. Other operations may also be allowed, often including a peek or front operation that returns the value of the next element to be dequeued without dequeuing it. The operations of a queue make it a first-in-first-out (FIFO) data structure. In a FIFO data structure, the first element added to the queue will be the first one to be removed. This is equivalent to the requirement that once a new element is added, all elements that were added before have to be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure or in object-oriented languages as classes. Common implementations are circular buffers and linked lists. Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer. Another usage of queues is in the implementation of breadth-first search. Queue implementation Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again. Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the queue.
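For illustration, a minimal fixed-capacity circular ("ring buffer") queue in JavaScript, following the modulo-n indexing scheme described earlier; the class name, field names, and the overflow/underflow error behaviour are illustrative choices, not taken from the text:

class CircularQueue {
    constructor(capacity) {
        this.items = new Array(capacity);
        this.head = 0;  // index of the next element to dequeue
        this.count = 0; // number of elements currently stored
    }
    enqueue(x) {
        if (this.count === this.items.length) throw new Error("queue overflow");
        this.items[(this.head + this.count) % this.items.length] = x; // wrap around
        this.count++;
    }
    dequeue() {
        if (this.count === 0) throw new Error("queue underflow");
        const x = this.items[this.head];
        this.head = (this.head + 1) % this.items.length; // wrap around
        this.count--;
        return x;
    }
}

Because the head and tail indices wrap modulo the array size, stored items never have to be moved.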
The Team Fortress (TF) mod introduced a class system for the players. Players choose a class, which creates various restrictions on weapons and armor types available to that player, and also grants special abilities. For example, the bread-and-butter Soldier class has medium armor, medium speed, and a well-rounded selection of weapons and grenades, while the Scout class is lightly armored, very fast, has a scanner that detects nearby enemies, but has very weak offensive weapons. One of the other differences with CTF is the fact that the flag is not returned automatically when a player drops it: running over one's flag in Threewave CTF would return the flag to the base, while in TF the flag remains in the same spot for a preconfigured time and has to be defended at remote locations. This caused a shift in defensive tactics compared to Threewave CTF. Team Fortress maintained its standing as the most-played online Quake modification for many years. Team Fortress would go on to become Team Fortress Classic and get a sequel, Team Fortress 2. Another popular mod was Threewave Capture the Flag (CTF), primarily authored by Dave 'Zoid' Kirsch. Threewave CTF is a partial conversion consisting of new levels, a new weapon (a grappling hook), power-ups, new textures, and new gameplay rules. Typically, two teams (red and blue) would compete in a game of Capture the flag, though a few maps with up to four teams (red, blue, green, and yellow) were created. Capture the Flag soon became a standard game mode included in most popular multiplayer games released after Quake. Rocket Arena provides the ability for players to face each other in small, open arenas with changes in the gameplay rules so that item collection and detailed level knowledge are no longer factors. A series of short rounds, with the surviving player in each round gaining a point, instead tests the player's aiming and dodging skills and reflexes. Clan Arena is a further modification that provides team play using Rocket Arena rules. One mod category, "bots", was introduced to provide surrogate players in multiplayer mode. Arcane Dimensions is a singleplayer mod. It is a partial conversion with breakable objects and walls, an enhanced particle system, numerous visual improvements, and new enemies and weapons. The level design is much more complex in terms of geometry and gameplay than in the original game. There are a large number of custom levels that have been made by users and fans of Quake. New maps are still being made, over twenty years since the game's release. Custom maps are new maps that are playable by loading them into the original game. Custom levels of various gameplay types have been made, but most are in the single-player and deathmatch genres. More than 1500 single-player and a similar number of deathmatch maps have been made for Quake. Reception Sales According to David Kushner in Masters of Doom, id Software released a retail shareware version of Quake before the game's full retail distribution by GT Interactive. These shareware copies could be converted into complete versions through passwords purchased via phone. However, Kushner wrote that "gamers wasted no time hacking the shareware to unlock the full version of the game for free." This problem, combined with the scale of the operation, led id Software to cancel the plan. As a result, the company was left with 150,000 unsold shareware copies in storage. The venture damaged Quake's initial sales and caused its retail push by GT Interactive to miss the holiday shopping season.
In the United States, Quake placed sixth on PC Data's monthly computer game sales charts for November and December 1996. Its shareware edition was the sixth-best-selling computer game of 1996 overall, while its retail SKU claimed 20th place. The shareware version sold 393,575 copies and grossed $3,005,519 in the United States during 1996. It remained in PC Data's monthly top 10 from January to April 1997, but was absent by May. During its first 12 months, Quake sold 373,000 retail copies and earned $18 million in the United States, according to PC Data. Its final retail sales for 1997 were 273,936 copies, which made it the country's 16th-highest computer game seller for the year. Sales of Quake reached 550,000 units in the United States alone by December 1999. In 1997, id estimated that there may be as many as 5 million copies of Quake circulating. The game sold over 1.4 million copies by December 1997. Critical reviews Quake was critically acclaimed on the PC. Review aggregation websites GameRankings and Metacritic gave the original PC version 93% and 94/100, and the Nintendo 64 port 76% and 74/100. A Next Generation critic lauded the game's realistic 3D physics and genuinely unnerving sound effects. GamePro said Quake had been over-hyped but is excellent nonetheless, particularly its usage of its advanced 3D engine. The review also praised the sound effects, atmospheric music, and graphics, though it criticized that the polygons used to construct the enemies are too obvious at close range. Less than a month after Quake was released (and a month before they actually reviewed the game), Next Generation listed it as number 9 on their "Top 100 Games of All Time", saying that it is similar to Doom but supports a maximum of eight players instead of four. In 1996, Computer Gaming World declared Quake the 36th-best computer game ever released, and listed "telefragged" as #1 on its list of "the 15 best ways to die in computer gaming". In 1997, the Game Developers Choice Awards gave Quake three spotlight awards for Best Sound Effects, Best Music or Soundtrack and Best On-Line/Internet Game. Entertainment Weekly gave the game a B+ and called it "an extended bit of subterranean mayhem that offers three major improvements over its immediate predecessor [Doom]." The reviewer identified these as the graphics, the audio design, and the amount of violent action. Next Generation reviewed the Macintosh version of the game, rating it four stars out of five, and stated that "Though replay value is limited by the lack of interactive environments or even the semblance of a plot, there's no doubt that Quake and its engine are something powerful and addictive." The Saturn version received mostly negative reviews, as critics generally agreed that it did not bring over the elements that make the game enjoyable. In particular, critics reviled the absence of the multiplayer mode, which they felt had eclipsed the single player campaign as the reason to play Quake. Kraig Kujawa wrote in Electronic Gaming Monthly, "Quake is not a great one-player game - it gained its notoriety on the Net as a multiplayer." and his co-reviewer Sushi-X concluded "Without multiplayer, I'd pass." Most reviews also said the controls are much worse than the PC original, in particular the difficulty of aiming at enemies without the benefit of either mouse-controlled camera or a second analog stick.
GamePro noted that the graphics are very pixelated and blurry, to the point where people unfamiliar with Quake would not be able to discern what they're looking at. They concluded, "Quake may not be the worst Saturn game available, but it certainly doesn't live up to its PC heritage." Most critics did find the port technically impressive, particularly the added light sourcing. However, Next Generation pointed out that "Porting Quake to a console is nothing more than an excuse for bragging rights. It's simply a way to show that the limited architecture of a 32-bit system has the power to push the same game that those mighty Pentium PCs take for granted." Even Rich Leadbetter of Sega Saturn Magazine, which gave the port a 92%, acknowledged that it was a proverbial dancing bear, noting several conspicuous compromises the port made and stating as his concluding argument, "Look, it's Quake on the Saturn - the machine has no right to be doing this!" GameSpot opined that the game's lack of plot makes the single-player campaign feel too shallow and lacking in motivation to appeal to most gamers. Most critics compared the port unfavorably to the Saturn version of Duke Nukem 3D (which came out just a few months earlier), mainly in terms of gameplay. Next Generation reviewed the Nintendo 64 version of the game, rating it three stars out of five, and stated that "As a whole, Quake 64 doesn't live up to the experience offered by the high-end, 3D-accelerated PC version; it is, however, an entertaining gaming experience that is worthy of a close look and a nice addition to the blossoming number of first-person shooters for Nintendo 64." Next Generation reviewed the arcade version of the game, rating it three stars out of five, and stated that "For those who don't have LAN or internet capabilities, check out arcade Quake. It's a blast." In 1998, PC Gamer declared it the 28th-best computer game ever released, and the editors called it "one of the most addictive, adaptable, and pulse-pounding 3D shooters ever created". In 2003, Quake was inducted into GameSpot's list of the greatest games of all time. Speedruns As an example of the dedication that Quake has inspired in its fan community, a group of expert players recorded speedrun demos (replayable recordings of the player's movement) of Quake levels completed in record time on the "Nightmare" skill level. The footage was edited into a continuous 19 minutes, 49 seconds demo called Quake done Quick and released on June 10, 1997. Owners of Quake could replay this demo in the game engine, watching the run unfold as if they were playing it themselves. Most full-game speedruns are a collaborative effort by a number of runners (though some have been done by single runners on their own). Although each particular level is credited to one runner, the ideas and techniques used are iterative and collaborative in nature, with each runner picking up tips and ideas from the others, so that speeds keep improving beyond what was thought possible as the runs are further optimized and new tricks or routes are discovered. Further time improvements of the continuous whole game run were achieved into the 21st century. In addition, many thousands of individual level runs are kept at Speed Demos Archive's Quake section, including many on custom maps. Speedrunning is a counterpart to multiplayer modes in making Quake one of the first games promoted as a virtual sport. 
Legacy The source code of the Quake and QuakeWorld engines was licensed under the GNU GPL-2.0-or-later on December 21, 1999. The id Software maps, objects, textures, sounds, and other creative works remain under their original proprietary license. The shareware distribution of Quake is still freely redistributable and usable with the GPLed engine code. One must purchase a copy of Quake in order to receive the registered version of the game, which includes more single-player episodes and the deathmatch maps. Based on the success of the first Quake game, id Software later published Quake II and Quake III Arena; Quake 4 was released in October 2005, developed by Raven Software using the Doom 3 engine. Quake was the game primarily responsible for the emergence of the machinima art form of films made in game engines, thanks to edited Quake demos such as Ranger Gone Bad and Blahbalicious, the in-game film The Devil's Covenant, and the in-game-rendered, four-hour epic film The Seal of Nehahra. June 22, 2006 marked ten years since the original upload of the game to the cdrom.com archives. Many Internet forums had topics about it, and it was a front-page story on Slashdot. On October 11, 2006, John Romero released the original map files for all of the levels in Quake under the GPL. Quake has four sequels: Quake II, Quake III Arena, Quake 4, and Enemy Territory: Quake Wars. In 2002, a version of Quake was produced for mobile phones. A copy of Quake was also released in a 2001 compilation labeled Ultimate Quake, which included the original Quake, Quake II, and Quake III Arena, and was published by Activision. In 2008, Quake was honored at the 59th Annual Technology & Engineering Emmy Awards for advancing the art form of user-modifiable games. John Carmack accepted the award. Years after its original release, Quake is still regarded by many critics as one of the greatest and most influential games ever made. Expansions and ports There were two official expansion packs released for Quake. The expansion packs pick up where the first game left off, include all of the same weapons, power-ups, monsters, and gothic atmosphere/architecture, and continue/finish the story of the first game and its protagonist. An unofficial third expansion pack, Abyss of Pandemonium, was developed by the Impel Development Team, published by Perfect Publishing, and released on April 14, 1998; an updated version, version 2.0, titled Abyss of Pandemonium – The Final Mission was released as freeware. An authorized expansion pack, Q!ZONE, was developed and published by WizardWorks and released in 1996. An authorized level editor, Deathmatch Maker, was developed by Virtus Corporation and published by Macmillan Digital Publishing in 1997. It contained an exclusive Virtus episode. In honor of Quake's 20th anniversary, MachineGames, an internal development studio of ZeniMax Media, the current owner of the Quake IP, released online a new expansion pack for free, called Episode 5: Dimension of the Past. Quake Mission Pack No. 1: Scourge of Armagon Scourge of Armagon was the first official mission pack, released on March 5, 1997. Developed by Hipnotic Interactive, it features three episodes divided into seventeen new single-player levels (three of which are secret), a new multiplayer level, a new soundtrack composed by Jeehun Hwang, and gameplay features not originally present in Quake, including rotating structures and breakable walls. Unlike the main Quake game and Mission Pack No.
2, Scourge does away with the episode hub, requiring the three episodes to be played sequentially. The three new enemies include Centroids, large cybernetic scorpions with nailguns; Gremlins, small goblins that can steal weapons and multiply by feeding on enemy corpses; and Spike Mines, floating orbs that detonate when near the player. The three new weapons include the Mjolnir, a large lightning-emitting hammer; the Laser Cannon, which shoots bouncing bolts of energy; and the Proximity Mine Launcher, which fires grenades that attach to surfaces and detonate when an opponent comes near. The three new power-ups include the Horn of Conjuring, which summons an enemy to protect the player; the Empathy Shield, which halves the damage taken by the player by splitting it between the player and the attacking enemy; and the Wetsuit, which renders the player invulnerable to electricity and allows the player to stay underwater for a period of time. The storyline follows Armagon, a general of Quake's forces, planning to invade Earth via a portal known as the 'Rift'. Armagon resembles a giant gremlin with cybernetic legs and a combined rocket launcher/laser cannon for arms. Tim Soete of GameSpot gave it a score of 8.6 out of 10. Quake Mission Pack No. 2: Dissolution of Eternity Dissolution of Eternity was the second official mission pack, released on March 19, 1997. Developed by Rogue Entertainment, it features two episodes divided into fifteen new single-player levels, a new multiplayer level, a new soundtrack, and several new enemies and bosses. Notably, the pack lacks secret levels. The eight new enemies include Electric Eels, Phantom Swordsmen, Multi-Grenade Ogres (which fire cluster grenades), Hell Spawn, Wraths (floating, robed undead), Guardians (resurrected ancient Egyptian warriors), Mummies, and statues of various enemies that can come to life. The four new types of bosses include Lava Men, Overlords, large Wraths, and a dragon guarding the "temporal energy converter". The two new power-ups include the Anti Grav Belt, which allows the player to jump higher; and the Power Shield, which lowers the damage the player receives. Rather than offering new weapons, the mission pack gives the player four new types of ammo for existing weapons, such as "lava nails" for the Nailgun, cluster grenades for the Grenade Launcher,
but understandable.—J. Schwinger Standard Model In 1954, Yang Chen-Ning and Robert Mills generalised the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang–Mills theories), which are based on more complicated local symmetry groups. In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge. Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable. Peter Higgs, Robert Brout, François Englert, Gerald Guralnik, Carl Hagen, and Tom Kibble proposed in their famous Physical Review Letters papers that the gauge symmetry in Yang–Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass. By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored, until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion. Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible. These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory. Supersymmetry also appeared in the same period. 
The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. Condensed matter physics Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect. Principles For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one. Classical fields A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field and magnetic field in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom. Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantisation and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory is in order. The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as $\phi(\mathbf{x}, t)$, where $\mathbf{x}$ is the position vector, and $t$ is the time. Suppose the Lagrangian of the field is

$$L = \int d^3x\, \mathcal{L} = \int d^3x \left[ \frac{1}{2}\dot{\phi}^2 - \frac{1}{2}(\nabla\phi)^2 - \frac{1}{2}m^2\phi^2 \right],$$

where $\mathcal{L}$ is the Lagrangian density, $\dot{\phi}$ is the time-derivative of the field, $\nabla$ is the gradient operator, and $m$ is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian,

$$\frac{\partial}{\partial t}\frac{\partial\mathcal{L}}{\partial\dot{\phi}} + \sum_{i=1}^{3}\frac{\partial}{\partial x^i}\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial x^i)} - \frac{\partial\mathcal{L}}{\partial\phi} = 0,$$

we obtain the equations of motion for the field, which describe the way it varies in time and space:

$$\ddot{\phi} - \nabla^2\phi + m^2\phi = 0.$$

This is known as the Klein–Gordon equation.
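As a quick check, substituting a single plane wave into the Klein–Gordon equation reproduces the dispersion relation used for the normal modes below (a standard step, included here for completeness):

$$\phi(\mathbf{x},t) = e^{\,i(\mathbf{p}\cdot\mathbf{x} - \omega t)} \;\Longrightarrow\; \left(\partial_t^2 - \nabla^2 + m^2\right)\phi = \left(-\omega^2 + |\mathbf{p}|^2 + m^2\right)\phi = 0 \;\Longrightarrow\; \omega = \sqrt{|\mathbf{p}|^2 + m^2}.$$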
The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:

$$\phi(\mathbf{x}, t) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_{\mathbf{p}}}} \left( a_{\mathbf{p}}\, e^{-i\omega_{\mathbf{p}} t + i\mathbf{p}\cdot\mathbf{x}} + a_{\mathbf{p}}^{*}\, e^{i\omega_{\mathbf{p}} t - i\mathbf{p}\cdot\mathbf{x}} \right),$$

where $a_{\mathbf{p}}$ is a complex number (normalised by convention), $*$ denotes complex conjugation, and $\omega_{\mathbf{p}}$ is the frequency of the normal mode:

$$\omega_{\mathbf{p}} = \sqrt{|\mathbf{p}|^2 + m^2}.$$

Thus each normal mode corresponding to a single $\mathbf{p}$ can be seen as a classical harmonic oscillator with frequency $\omega_{\mathbf{p}}$. Canonical quantisation The quantisation procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by

$$x(t) = \frac{1}{\sqrt{2\omega}}\, a\, e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\, a^{*}\, e^{i\omega t},$$

where $a$ is a complex number (normalised by convention), and $\omega$ is the oscillator's frequency. Note that $x$ is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label $\mathbf{x}$ of a quantum field. For a quantum harmonic oscillator, $x(t)$ is promoted to a linear operator $\hat{x}(t)$:

$$\hat{x}(t) = \frac{1}{\sqrt{2\omega}}\, \hat{a}\, e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\, \hat{a}^{\dagger}\, e^{i\omega t}.$$

Complex numbers $a$ and $a^{*}$ are replaced by the annihilation operator $\hat{a}$ and the creation operator $\hat{a}^{\dagger}$, respectively, where $\dagger$ denotes Hermitian conjugation. The commutation relation between the two is

$$\left[\hat{a}, \hat{a}^{\dagger}\right] = 1.$$

The vacuum state $|0\rangle$, which is the lowest energy state, is defined by

$$\hat{a}\,|0\rangle = 0.$$

Any quantum state of a single harmonic oscillator can be obtained from $|0\rangle$ by successively applying the creation operator $\hat{a}^{\dagger}$:

$$|n\rangle \propto \left(\hat{a}^{\dagger}\right)^{n} |0\rangle.$$

By the same token, the aforementioned real scalar field $\phi$, which corresponds to $x$ in the single harmonic oscillator, is also promoted to a quantum field operator $\hat{\phi}$, while the annihilation operator $\hat{a}_{\mathbf{p}}$, the creation operator $\hat{a}_{\mathbf{p}}^{\dagger}$ and the angular frequency $\omega_{\mathbf{p}}$ are now for a particular $\mathbf{p}$:

$$\hat{\phi}(\mathbf{x}, t) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2\omega_{\mathbf{p}}}} \left( \hat{a}_{\mathbf{p}}\, e^{-i\omega_{\mathbf{p}} t + i\mathbf{p}\cdot\mathbf{x}} + \hat{a}_{\mathbf{p}}^{\dagger}\, e^{i\omega_{\mathbf{p}} t - i\mathbf{p}\cdot\mathbf{x}} \right).$$

Their commutation relations are:

$$\left[\hat{a}_{\mathbf{p}}, \hat{a}_{\mathbf{q}}^{\dagger}\right] = (2\pi)^3\, \delta^3(\mathbf{p} - \mathbf{q}), \qquad \left[\hat{a}_{\mathbf{p}}, \hat{a}_{\mathbf{q}}\right] = \left[\hat{a}_{\mathbf{p}}^{\dagger}, \hat{a}_{\mathbf{q}}^{\dagger}\right] = 0,$$

where $\delta^3$ is the Dirac delta function. The vacuum state $|0\rangle$ is defined by

$$\hat{a}_{\mathbf{p}}\,|0\rangle = 0 \quad \text{for all } \mathbf{p}.$$

Any quantum state of the field can be obtained from $|0\rangle$ by successively applying creation operators $\hat{a}_{\mathbf{p}}^{\dagger}$, e.g.

$$\left(\hat{a}_{\mathbf{p}_3}^{\dagger}\right)^{3} \hat{a}_{\mathbf{p}_2}^{\dagger} \left(\hat{a}_{\mathbf{p}_1}^{\dagger}\right)^{2} |0\rangle.$$

Although the quantum field appearing in the Lagrangian is spatially continuous, the quantum states of the field are discrete. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantising an arbitrary number of particles instead of a single particle is often also called second quantisation. The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantise (complex) scalar fields, Dirac fields, vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary. The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu \phi)(\partial^\mu \phi) - \frac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\phi^4,$$

where $\mu$ is a spacetime index, $\partial_0 = \partial/\partial t$, $\partial_1 = \partial/\partial x^1$, etc. The summation over the index $\mu$ has been omitted following the Einstein notation.
If the parameter is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory. Path integrals The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state at time to some final state at , the total time is divided into small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let be the Hamiltonian (i.e. generator of time evolution), then Taking the limit , the above product of integrals becomes the Feynman path integral: where is the Lagrangian involving and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian via Legendre transformation. The initial and final conditions of the path integral are respectively In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand. Two-point correlation function In calculations, one often encounters expression likein the free or interacting theory, respectively. Here, and are position four-vectors, is the time ordering operator that shuffles its operands so the time-components and increase from right to left, and is the ground state (vacuum state) of the interacting theory, different from the free ground state . This expression represents the probability amplitude for the field to propagate from to , and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short. The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantisation or path integrals to be In an interacting theory, where the Lagrangian or Hamiltonian contains terms or that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantisation formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function. In canonical quantisation, the two-point correlation function can be written as: where is an infinitesimal number and is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in -theory, the interacting term of the Hamiltonian is , and the expansion of the two-point correlator in terms of becomesThis perturbation expansion expresses the interacting two-point function in terms of quantities that are evaluated in the free theory. In the path integral formulation, the two-point correlation function can be written where is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in , reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduce any -point correlation function in the free theory to a sum of products of two-point correlation functions. 
For example, Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory. This makes the Feynman propagator one of the most important quantities in quantum field theory. Feynman diagram Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the term in the two-point correlation function in the theory is After applying Wick's theorem, one of the terms is This term can instead be obtained from the Feynman diagram . The diagram consists of external vertices connected with one edge and represented by dots (here labelled and ). internal vertices connected with four edges and represented by dots (here labelled ). edges connecting the vertices and represented by lines. Every vertex corresponds to a single field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the Feynman rules: For every internal vertex , write down a factor . For every edge that connects two vertices and , write down a factor . Divide by the symmetry factor of the diagram. With the symmetry factor , following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space. In order to compute the -point correlation function to the -th order, list all valid Feynman diagrams with external points and or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, is equal to the sum of (expressions corresponding to) all connected diagrams with external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the interaction theory discussed above, every vertex must have four legs. In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method. Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing loops are referred to as -loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction. Lines whose end points are vertices can be thought of as the propagation of virtual particles. Renormalization Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities. Parameters appearing in the Lagrangian, such as the mass and the coupling constant , have no physical meaning — , , and the field strength are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. 
The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off , obtain expressions for the physical quantities, and then take the limit . This is an example of regularisation, a class of methods to treat divergences in QFT, with being the regulator. The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalised perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of theory, the field strength is first redefined: where is the bare field, is the renormalised field, and is a constant to be determined. The Lagrangian density becomes: where and are the experimentally measurable, renormalised, mass and coupling constant, respectively, and are constants to be determined. The first three terms are the Lagrangian density written in terms of the renormalised quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, so the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularisation scheme (such as the cut-off regularisation introduced above or dimensional regularization); call the regulator . Compute Feynman diagrams, in which divergent terms will depend on . Then, define , , and such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit is taken. In this way, meaningful finite quantities are obtained. It is only possible to eliminate all infinities to obtain a finite result in renormalisable theories, whereas in non-renormalisable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalisable QFT, while quantum gravity is non-renormalisable. Renormalisation group The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales. The way in which each parameter changes with scale is described by its β function. Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation. As an example, the coupling constant in QED, namely the elementary charge , has the following β function: where is the energy scale under which the measurement of is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant. The coupling constant in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group , has the following β function: where is the number of quark flavours. In the case where (the Standard Model has ), the coupling constant decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom. 
Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and supersymmetric Yang–Mills theory. According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off , i.e. that the theory is no longer valid at energies higher than , and all degrees of freedom above the scale are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalisable effective field theory. The difference between renormalisable and non-renormalisable theories is that the former are
Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory. Quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times previously, but they had been largely ignored.) Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible. These theoretical breakthroughs brought about a renaissance in QFT. The full theory, which includes the electroweak theory and chromodynamics, is referred to today as the Standard Model of elementary particles. The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have been met with remarkable experimental confirmation in subsequent decades. The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model. Other developments The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft–Polyakov monopole was discovered theoretically by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and coauthors. These objects are inaccessible through perturbation theory. Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973. Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory, itself a type of two-dimensional QFT with conformal symmetry. Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity. Condensed matter physics Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter. Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle—phonons. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems. 
Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.

Principles

For simplicity, natural units are used in the following sections, in which the reduced Planck constant $\hbar$ and the speed of light $c$ are both set to one.

Classical fields

A classical field is a function of spatial and time coordinates. Examples include the gravitational field in Newtonian gravity and the electric field $\mathbf{E}(\mathbf{x}, t)$ and magnetic field $\mathbf{B}(\mathbf{x}, t)$ in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinitely many degrees of freedom.

Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone. Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields.

Canonical quantisation and path integrals are two common formulations of QFT. To motivate the fundamentals of QFT, an overview of classical field theory is in order.

The simplest classical field is a real scalar field — a real number at every point in space that changes in time. It is denoted as $\phi(\mathbf{x}, t)$, where $\mathbf{x}$ is the position vector, and $t$ is the time. Suppose the Lagrangian of the field, $L$, is

$$L = \int d^3x\,\mathcal{L} = \int d^3x\left[\frac{1}{2}\dot\phi^2 - \frac{1}{2}(\nabla\phi)^2 - \frac{1}{2}m^2\phi^2\right],$$

where $\mathcal{L}$ is the Lagrangian density, $\dot\phi$ is the time-derivative of the field, $\nabla$ is the gradient operator, and $m$ is a real parameter (the "mass" of the field). Applying the Euler–Lagrange equation on the Lagrangian:

$$\frac{\partial}{\partial t}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial t)}\right] + \sum_{i=1}^{3}\frac{\partial}{\partial x^i}\left[\frac{\partial\mathcal{L}}{\partial(\partial\phi/\partial x^i)}\right] - \frac{\partial\mathcal{L}}{\partial\phi} = 0,$$

we obtain the equations of motion for the field, which describe the way it varies in time and space:

$$\left(\frac{\partial^2}{\partial t^2} - \nabla^2 + m^2\right)\phi = 0.$$

This is known as the Klein–Gordon equation. The Klein–Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:

$$\phi(\mathbf{x}, t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(a_p e^{i(\mathbf{p}\cdot\mathbf{x} - \omega_p t)} + a_p^* e^{-i(\mathbf{p}\cdot\mathbf{x} - \omega_p t)}\right),$$

where $a_p$ is a complex number (normalised by convention), $^*$ denotes complex conjugation, and $\omega_p$ is the frequency of the normal mode:

$$\omega_p = \sqrt{|\mathbf{p}|^2 + m^2}.$$

Thus each normal mode corresponding to a single $\mathbf{p}$ can be seen as a classical harmonic oscillator with frequency $\omega_p$.

Canonical quantisation

The quantisation procedure for the above classical field to a quantum operator field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by

$$x(t) = \frac{1}{\sqrt{2\omega}}\,a\,e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\,a^*\,e^{i\omega t},$$

where $a$ is a complex number (normalised by convention), and $\omega$ is the oscillator's frequency. Note that $x$ is the displacement of a particle in simple harmonic motion from the equilibrium position, not to be confused with the spatial label $\mathbf{x}$ of a quantum field. For a quantum harmonic oscillator, $x(t)$ is promoted to a linear operator $\hat x(t)$:

$$\hat x(t) = \frac{1}{\sqrt{2\omega}}\,\hat a\,e^{-i\omega t} + \frac{1}{\sqrt{2\omega}}\,\hat a^\dagger\,e^{i\omega t}.$$

Complex numbers $a$ and $a^*$ are replaced by the annihilation operator $\hat a$ and the creation operator $\hat a^\dagger$, respectively, where $^\dagger$ denotes Hermitian conjugation. The commutation relation between the two is

$$[\hat a, \hat a^\dagger] = 1.$$

The vacuum state $|0\rangle$, which is the lowest energy state, is defined by

$$\hat a|0\rangle = 0.$$

Any quantum state of a single harmonic oscillator can be obtained from $|0\rangle$ by successively applying the creation operator $\hat a^\dagger$:

$$|n\rangle = (\hat a^\dagger)^n|0\rangle.$$

By the same token, the aforementioned real scalar field $\phi$, which corresponds to $x$ in the single harmonic oscillator, is also promoted to a quantum field operator $\hat\phi$, while the annihilation operator $\hat a_p$, the creation operator $\hat a_p^\dagger$ and the angular frequency $\omega_p$ are now for a particular $\mathbf{p}$:

$$\hat\phi(\mathbf{x}, t) = \int\frac{d^3p}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_p}}\left(\hat a_p e^{i(\mathbf{p}\cdot\mathbf{x} - \omega_p t)} + \hat a_p^\dagger e^{-i(\mathbf{p}\cdot\mathbf{x} - \omega_p t)}\right).$$

Their commutation relations are:

$$[\hat a_p, \hat a_q^\dagger] = (2\pi)^3\delta(\mathbf{p} - \mathbf{q}), \qquad [\hat a_p, \hat a_q] = [\hat a_p^\dagger, \hat a_q^\dagger] = 0,$$

where $\delta$ is the Dirac delta function.
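The ladder-operator algebra just stated is easy to probe numerically. The sketch below (an illustration, not part of the original text) builds truncated matrices for $\hat a$ and $\hat a^\dagger$ in a finite Fock basis and checks that $[\hat a, \hat a^\dagger] = 1$ holds everywhere except in the last basis state, an artefact of truncating the infinite-dimensional algebra:

```python
import numpy as np

N = 8  # truncate the Fock space at occupation number N - 1

# Annihilation operator: a |n> = sqrt(n) |n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # creation operator is the Hermitian conjugate

comm = a @ adag - adag @ a
print(np.round(comm, 12))
# Identity everywhere except the (N-1, N-1) entry, where the
# truncation shows up as the spurious eigenvalue -(N-1).
```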
The vacuum state $|0\rangle$ is defined by

$$\hat a_p|0\rangle = 0 \quad \text{for all } \mathbf{p}.$$

Any quantum state of the field can be obtained from $|0\rangle$ by successively applying creation operators $\hat a_p^\dagger$, e.g.

$$(\hat a_{p_3}^\dagger)^3\,\hat a_{p_2}^\dagger\,(\hat a_{p_1}^\dagger)^2\,|0\rangle.$$

Although the quantum field appearing in the Lagrangian is spatially continuous, the quantum states of the field are discrete. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles. The latter space is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems. The process of quantising an arbitrary number of particles instead of a single particle is often also called second quantisation.

The foregoing procedure is a direct application of non-relativistic quantum mechanics and can be used to quantise (complex) scalar fields, Dirac fields, vector fields (e.g. the electromagnetic field), and even strings. However, creation and annihilation operators are only well defined in the simplest theories that contain no interactions (so-called free theory). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory would be necessary.

The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}m^2\phi^2 - \frac{\lambda}{4!}\phi^4,$$

where $\mu$ is a spacetime index, $\partial_0 = \partial/\partial t$, $\partial_1 = \partial/\partial x^1$, etc. The summation over the index $\mu$ has been omitted following the Einstein notation. If the parameter $\lambda$ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory.

Path integrals

The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state $|\phi_I\rangle$ at time $t = 0$ to some final state $|\phi_F\rangle$ at $t = T$, the total time $T$ is divided into $N$ small intervals. The overall amplitude is the product of the amplitude of evolution within each interval, integrated over all intermediate states. Let $H$ be the Hamiltonian (i.e. generator of time evolution), then

$$\langle\phi_F|e^{-iHT}|\phi_I\rangle = \langle\phi_F|e^{-iHT/N}\int d\phi_1\,|\phi_1\rangle\langle\phi_1|\,e^{-iHT/N}\cdots\int d\phi_{N-1}\,|\phi_{N-1}\rangle\langle\phi_{N-1}|\,e^{-iHT/N}|\phi_I\rangle.$$

Taking the limit $N \to \infty$, the above product of integrals becomes the Feynman path integral:

$$\langle\phi_F|e^{-iHT}|\phi_I\rangle = \int\mathcal{D}\phi(t)\,\exp\left(i\int_0^T dt\,L\right),$$

where $L$ is the Lagrangian involving $\phi$ and its derivatives with respect to spatial and time coordinates, obtained from the Hamiltonian $H$ via Legendre transformation. The initial and final conditions of the path integral are respectively

$$\phi(0) = \phi_I, \qquad \phi(T) = \phi_F.$$

In other words, the overall amplitude is the sum over the amplitude of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand.
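The time-slicing step underlying the path integral can be illustrated on a finite-dimensional toy model (an illustration with made-up numbers, not from the original text): split a Hamiltonian into two pieces, evolve in many short intervals, and watch the composed slices converge to the exact evolution operator:

```python
import numpy as np
from scipy.linalg import expm

# Toy Hamiltonian H = K + V on a 3-state system (hypothetical numbers).
K = 0.4 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
V = np.diag([0.0, 0.5, 1.0]).astype(complex)
H, T_total = K + V, 1.0

exact = expm(-1j * H * T_total)

for N in [1, 10, 100, 1000]:
    dt = T_total / N
    # One short-time slice, kinetic and potential parts applied in turn
    slice_ = expm(-1j * K * dt) @ expm(-1j * V * dt)
    approx = np.linalg.matrix_power(slice_, N)
    print(f"N = {N:5d}   error = {np.abs(approx - exact).max():.2e}")
# The error shrinks as N grows: composing many short evolution
# intervals is the discrete germ of the Feynman path integral.
```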
Two-point correlation function

In calculations, one often encounters expressions like

$$\langle 0|T\{\phi(x)\phi(y)\}|0\rangle \quad\text{or}\quad \langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle$$

in the free or interacting theory, respectively. Here, $x$ and $y$ are position four-vectors, $T$ is the time ordering operator that shuffles its operands so the time-components $x^0$ and $y^0$ increase from right to left, and $|\Omega\rangle$ is the ground state (vacuum state) of the interacting theory, different from the free ground state $|0\rangle$. This expression represents the probability amplitude for the field to propagate from $y$ to $x$, and goes by multiple names, like the two-point propagator, two-point correlation function, two-point Green's function or two-point function for short.

The free two-point function, also known as the Feynman propagator, can be found for the real scalar field by either canonical quantisation or path integrals to be

$$\langle 0|T\{\phi(x)\phi(y)\}|0\rangle \equiv D_F(x - y) = \lim_{\epsilon\to 0}\int\frac{d^4p}{(2\pi)^4}\,\frac{i}{p_\mu p^\mu - m^2 + i\epsilon}\,e^{-ip_\mu(x^\mu - y^\mu)}.$$

In an interacting theory, where the Lagrangian or Hamiltonian contains terms $\mathcal{L}_I(t)$ or $H_I(t)$ that describe interactions, the two-point function is more difficult to define. However, through both the canonical quantisation formulation and the path integral formulation, it is possible to express it through an infinite perturbation series of the free two-point function.

In canonical quantisation, the two-point correlation function can be written as:

$$\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\langle 0|T\{\phi_I(x)\phi_I(y)\exp[-i\int_{-T}^T dt\,H_I(t)]\}|0\rangle}{\langle 0|T\{\exp[-i\int_{-T}^T dt\,H_I(t)]\}|0\rangle},$$

where $\epsilon$ is an infinitesimal number and $\phi_I$ is the field operator under the free theory. Here, the exponential should be understood as its power series expansion. For example, in $\phi^4$-theory, the interacting term of the Hamiltonian is $H_I(t) = \int d^3x\,\frac{\lambda}{4!}\phi_I(x)^4$, and the expansion of the two-point correlator in terms of $\lambda$ becomes a series of free-theory correlators of the form

$$\langle 0|T\{\phi_I(x)\phi_I(y)\,\phi_I(z_1)^4\cdots\phi_I(z_n)^4\}|0\rangle,$$

integrated over the interaction points $z_1, \ldots, z_n$. This perturbation expansion expresses the interacting two-point function in terms of quantities that are evaluated in the free theory.

In the path integral formulation, the two-point correlation function can be written

$$\langle\Omega|T\{\phi(x)\phi(y)\}|\Omega\rangle = \lim_{T\to\infty(1-i\epsilon)}\frac{\int\mathcal{D}\phi\,\phi(x)\phi(y)\exp\left[i\int_{-T}^T d^4z\,\mathcal{L}\right]}{\int\mathcal{D}\phi\,\exp\left[i\int_{-T}^T d^4z\,\mathcal{L}\right]},$$

where $\mathcal{L}$ is the Lagrangian density. As in the previous paragraph, the exponential can be expanded as a series in $\lambda$, reducing the interacting two-point function to quantities in the free theory. Wick's theorem further reduces any $n$-point correlation function in the free theory to a sum of products of two-point correlation functions. For example,

$$\langle 0|T\{\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\}|0\rangle = D_F(x_1{-}x_2)D_F(x_3{-}x_4) + D_F(x_1{-}x_3)D_F(x_2{-}x_4) + D_F(x_1{-}x_4)D_F(x_2{-}x_3).$$

Since interacting correlation functions can be expressed in terms of free correlation functions, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory. This makes the Feynman propagator one of the most important quantities in quantum field theory.

Feynman diagram

Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the $\lambda^1$ term in the two-point correlation function in the $\phi^4$ theory is

$$\frac{-i\lambda}{4!}\int d^4z\,\langle 0|T\{\phi(x)\phi(y)\phi(z)\phi(z)\phi(z)\phi(z)\}|0\rangle.$$

After applying Wick's theorem, one of the terms is

$$12\cdot\frac{-i\lambda}{4!}\int d^4z\,D_F(x-z)\,D_F(y-z)\,D_F(z-z).$$

This term can instead be obtained from a single Feynman diagram. The diagram consists of

external vertices connected with one edge and represented by dots (here labelled $x$ and $y$),
internal vertices connected with four edges and represented by dots (here labelled $z$),
edges connecting the vertices and represented by lines.

Every vertex corresponds to a single $\phi$ field factor at the corresponding point in spacetime, while the edges correspond to the propagators between the spacetime points. The term in the perturbation series corresponding to the diagram is obtained by writing down the expression that follows from the Feynman rules:

For every internal vertex $z_i$, write down a factor $-i\lambda\int d^4z_i$.
For every edge that connects two vertices $z_i$ and $z_j$, write down a factor $D_F(z_i - z_j)$.
Divide by the symmetry factor of the diagram.

With the symmetry factor $2$, following these rules yields exactly the expression above. By Fourier transforming the propagator, the Feynman rules can be reformulated from position space into momentum space.
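Wick's theorem reduces free-theory correlators to sums over complete pairings, so a $2n$-point function has $(2n-1)!! = 3, 15, 105, \ldots$ terms. The following sketch (an illustration, not part of the original text) enumerates the pairings symbolically for the 4-point case above:

```python
def pairings(points):
    """Yield all complete pairings of an even-length list of labels."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

# 4-point function of the free scalar field: three Wick contractions
for p in pairings(['x1', 'x2', 'x3', 'x4']):
    print(' * '.join(f'D_F({a}-{b})' for a, b in p))
# D_F(x1-x2) * D_F(x3-x4)
# D_F(x1-x3) * D_F(x2-x4)
# D_F(x1-x4) * D_F(x2-x3)
```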
In order to compute the $n$-point correlation function to the $k$-th order, list all valid Feynman diagrams with $n$ external points and $k$ or fewer vertices, and then use Feynman rules to obtain the expression for each term. To be precise, $\langle\Omega|T\{\phi(x_1)\cdots\phi(x_n)\}|\Omega\rangle$ is equal to the sum of (expressions corresponding to) all connected diagrams with $n$ external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the $\phi^4$ interaction theory discussed above, every vertex must have four legs.

In realistic applications, the scattering amplitude of a certain interaction or the decay rate of a particle can be computed from the S-matrix, which itself can be found using the Feynman diagram method. Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing $n$ loops are referred to as $n$-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction. Lines whose end points are vertices can be thought of as the propagation of virtual particles.

Renormalization

Feynman rules can be used to directly evaluate tree-level diagrams. However, naïve computation of loop diagrams such as the one shown above will result in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities.

Parameters appearing in the Lagrangian, such as the mass $m$ and the coupling constant $\lambda$, have no physical meaning: $m$, $\lambda$, and the field strength $\phi$ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of divergent momentum integrals to be below some momentum cut-off $\Lambda$, obtain expressions for the physical quantities, and then take the limit $\Lambda\to\infty$. This is an example of regularisation, a class of methods to treat divergences in QFT, with $\Lambda$ being the regulator.

The approach illustrated above is called bare perturbation theory, as calculations involve only the bare quantities such as mass and coupling constant. A different approach, called renormalised perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of $\phi^4$ theory, the field strength is first redefined:

$$\phi = Z^{1/2}\phi_r,$$

where $\phi$ is the bare field, $\phi_r$ is the renormalised field, and $Z$ is a constant to be determined. The Lagrangian density becomes:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}m_r^2\phi_r^2 - \frac{\lambda_r}{4!}\phi_r^4 + \frac{1}{2}\delta_Z(\partial_\mu\phi_r)(\partial^\mu\phi_r) - \frac{1}{2}\delta_m\phi_r^2 - \frac{\delta_\lambda}{4!}\phi_r^4,$$

where $m_r$ and $\lambda_r$ are the experimentally measurable, renormalised, mass and coupling constant, respectively, and $\delta_Z$, $\delta_m$, and $\delta_\lambda$ are constants to be determined. The first three terms are the $\phi^4$ Lagrangian density written in terms of the renormalised quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, the Feynman diagrams should include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularisation scheme (such as the cut-off regularisation introduced above or dimensional regularization); call the regulator $\Lambda$. Compute Feynman diagrams, in which divergent terms will depend on $\Lambda$.
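The role of the regulator can be made tangible with a toy computation (an illustration, not from the original text): after Wick rotation, the radial part of a one-loop four-point integral in $\phi^4$ theory behaves like $\int_0^\Lambda k^3\,dk/(k^2+m^2)^2$, which grows logarithmically with $\Lambda$ instead of converging:

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # mass in arbitrary units

def loop_integral(cutoff):
    """Radial part of a Euclidean one-loop integral, regulated at `cutoff`."""
    integrand = lambda k: k**3 / (k**2 + m**2)**2
    value, _ = quad(integrand, 0.0, cutoff)
    return value

for L in [1e1, 1e2, 1e3, 1e4]:
    print(f"Lambda = {L:8.0f}   integral = {loop_integral(L):.4f}")
# Each decade in Lambda adds ~log(10) ~ 2.303: a logarithmic divergence,
# which renormalised perturbation theory cancels with a counterterm.
```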
Then, define $\delta_Z$, $\delta_m$, and $\delta_\lambda$ such that Feynman diagrams for the counterterms will exactly cancel the divergent terms in the normal Feynman diagrams when the limit $\Lambda\to\infty$ is taken. In this way, meaningful finite quantities are obtained.

It is only possible to eliminate all infinities to obtain a finite result in renormalisable theories, whereas in non-renormalisable theories infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalisable QFT, while quantum gravity is non-renormalisable.

Renormalisation group

The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales. The way in which each parameter changes with scale is described by its β function. Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan–Symanzik equation.

As an example, the coupling constant in QED, namely the elementary charge $e$, has the following β function:

$$\beta(e) \equiv \mu\frac{de}{d\mu} = \frac{e^3}{12\pi^2} + O(e^5),$$

where $\mu$ is the energy scale under which the measurement of $e$ is performed. This differential equation implies that the observed elementary charge increases as the scale increases. The renormalized coupling constant, which changes with the energy scale, is also called the running coupling constant.

The coupling constant $g$ in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group $SU(3)$, has the following β function:

$$\beta(g) \equiv \mu\frac{dg}{d\mu} = \frac{g^3}{16\pi^2}\left(-11 + \frac{2}{3}N_f\right) + O(g^5),$$

where $N_f$ is the number of quark flavours. In the case where $N_f \leq 16$ (the Standard Model has $N_f = 6$), the coupling constant $g$ decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.

Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have vanishing β function. (The converse is not true, however — the vanishing of all β functions does not imply conformal symmetry of the theory.) Examples include string theory and $\mathcal{N} = 4$ supersymmetric Yang–Mills theory.

According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off $\Lambda$, i.e. the theory is no longer valid at energies higher than $\Lambda$, and all degrees of freedom above the scale $\Lambda$ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalisable effective field theory. The difference between renormalisable and non-renormalisable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them. According to this view, non-renormalisable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off $\Lambda$ from calculations in such a theory merely indicates that new physical phenomena appear at scales above $\Lambda$, where a new theory is necessary.
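The qualitative content of these β functions can be made concrete by integrating them numerically. The sketch below (an illustration assuming the one-loop forms quoted above, with made-up starting values) shows the QED coupling creeping upward while the QCD coupling falls, i.e. asymptotic freedom:

```python
import numpy as np

def run_coupling(g0, beta, t_max, steps=10000):
    """Integrate dg/dt = beta(g), where t = ln(mu/mu0), with Euler steps."""
    g, dt = g0, t_max / steps
    for _ in range(steps):
        g += beta(g) * dt
    return g

beta_qed = lambda e: e**3 / (12 * np.pi**2)                   # one-loop QED
beta_qcd = lambda g: -(11 - 2 / 3 * 6) * g**3 / (16 * np.pi**2)  # N_f = 6

for t in [0, 2, 4, 6]:  # t = ln(mu / mu0)
    print(f"ln(mu/mu0) = {t}:  e = {run_coupling(0.3, beta_qed, t):.4f}"
          f"   g = {run_coupling(1.0, beta_qcd, t):.4f}")
# e grows slowly with energy (screening); g shrinks (asymptotic freedom).
```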
Other theories

The quantisation and renormalisation procedures outlined in the preceding sections are performed for the free theory and $\phi^4$ theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction.

As an example, quantum electrodynamics contains a Dirac field $\psi$ representing the electron field and a vector field $A^\mu$ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is:

$$\mathcal{L} = \bar\psi(i\gamma^\mu\partial_\mu - m)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - e\bar\psi\gamma^\mu\psi A_\mu,$$

where $\gamma^\mu$ are Dirac matrices, $\bar\psi = \psi^\dagger\gamma^0$, and $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass $m$ and the (bare) elementary charge $e$. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.

Shown above is an example of a tree-level Feynman diagram in QED. It describes an electron and a positron annihilating, creating an off-shell photon, and then decaying into a new pair of electron and positron. Time runs from left to right. Arrows pointing forward in time represent the propagation of positrons, while those pointing backward in time represent the propagation of electrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.

Gauge symmetry

If the following transformation to the fields is performed at every spacetime point $x$ (a local transformation), then the QED Lagrangian remains unchanged, or invariant:

$$\psi(x) \to e^{i\alpha(x)}\psi(x), \qquad A^\mu(x) \to A^\mu(x) + \frac{1}{e}\partial^\mu\alpha(x),$$

where $\alpha(x)$ is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory. Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations $e^{i\alpha(x)}$ and $e^{i\alpha'(x)}$ is yet another symmetry transformation $e^{i[\alpha(x)+\alpha'(x)]}$. For any $\alpha(x)$, $e^{i\alpha(x)}$ is an element of the $U(1)$ group, thus QED is said to have $U(1)$ gauge symmetry. The photon field $A_\mu$ may be referred to as the $U(1)$ gauge boson.

$U(1)$ is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang–Mills theories). Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an $SU(3)$ gauge symmetry. It contains three Dirac fields $\psi^i,\ i = 1, 2, 3$ representing quark fields as well as eight vector fields $A^{a,\mu},\ a = 1, \ldots, 8$ representing gluon fields, which are the $SU(3)$ gauge bosons. The QCD Lagrangian density is:

$$\mathcal{L} = i\bar\psi^i\gamma^\mu(D_\mu)^{ij}\psi^j - \frac{1}{4}F^a_{\mu\nu}F^{a,\mu\nu} - m\bar\psi^i\psi^i,$$

where $D_\mu$ is the gauge covariant derivative:

$$D_\mu = \partial_\mu - igA^a_\mu t^a,$$

where $g$ is the coupling constant, $t^a$ are the eight generators of $SU(3)$ in the fundamental representation ($3\times 3$ matrices),

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + gf^{abc}A^b_\mu A^c_\nu,$$

and $f^{abc}$ are the structure constants of $SU(3)$. Repeated indices $i$, $j$, $a$ are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation:

$$\psi^i(x) \to U^{ij}(x)\psi^j(x), \qquad A^a_\mu(x)\,t^a \to U(x)\left[A^a_\mu(x)\,t^a + \frac{i}{g}\partial_\mu\right]U^\dagger(x),$$

where $U(x)$ is an element of $SU(3)$ at every spacetime point $x$.
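The non-Abelian character that distinguishes Yang–Mills theories from QED is simply the non-commutativity of the group generators. A small numerical check (illustrative only; it uses $SU(2)$ with generators $t^a = \sigma^a/2$ rather than $SU(3)$, purely for brevity):

```python
import numpy as np

# su(2) generators in the fundamental representation: t^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [s / 2 for s in sigma]

def levi(a, b, c):
    """Levi-Civita symbol: the structure constants f^abc of su(2)."""
    return (a - b) * (b - c) * (c - a) / 2

# Verify [t^a, t^b] = i f^abc t^c -- the generators do not commute,
# which is what makes the gauge theory non-Abelian.
for a in range(3):
    for b in range(3):
        comm = t[a] @ t[b] - t[b] @ t[a]
        expected = sum(1j * levi(a, b, c) * t[c] for c in range(3))
        assert np.allclose(comm, expected)
print("su(2) commutation relations verified")
```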
The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantisation, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change. For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group $SU(3) \times SU(2) \times U(1)$, in which all anomalies exactly cancel.

The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.

Noether's theorem states that every continuous symmetry, i.e. the parameter in the symmetry transformation being continuous rather than discrete, leads to a corresponding conservation law. For example, the $U(1)$ symmetry of QED implies charge conservation.

Gauge transformations do not relate distinct quantum states. Rather, a gauge transformation relates two equivalent mathematical descriptions of the same quantum state. As an example, the photon field $A^\mu$, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarisation. The remaining two degrees of freedom are said to be "redundant" — apparently different ways of writing $A^\mu$ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.

To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev–Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally. A more rigorous generalisation of the Faddeev–Popov procedure is given by BRST quantization.

Spontaneous symmetry breaking

Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it. To illustrate the mechanism, consider a linear sigma model containing $N$ real scalar fields, described by the Lagrangian density:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\phi^i)(\partial^\mu\phi^i) + \frac{1}{2}\mu^2\phi^i\phi^i - \frac{\lambda}{4}(\phi^i\phi^i)^2,$$

where $\mu$ and $\lambda$ are real parameters. The theory admits an $O(N)$ global symmetry:

$$\phi^i \to R^{ij}\phi^j, \qquad R \in O(N).$$

The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field $\phi_0$ satisfying

$$\phi_0^i\phi_0^i = \frac{\mu^2}{\lambda}.$$

Without loss of generality, let the ground state be in the $N$-th direction:

$$\phi_0^i = \left(0, \ldots, 0, \frac{\mu}{\sqrt\lambda}\right).$$

The original $N$ fields can be rewritten as:

$$\phi^i(x) = \left(\pi^1(x), \ldots, \pi^{N-1}(x), \frac{\mu}{\sqrt\lambda} + \sigma(x)\right),$$

and the original Lagrangian density as:

$$\mathcal{L} = \frac{1}{2}(\partial_\mu\pi^k)(\partial^\mu\pi^k) + \frac{1}{2}(\partial_\mu\sigma)(\partial^\mu\sigma) - \frac{1}{2}(2\mu^2)\sigma^2 - \sqrt\lambda\,\mu\,\sigma^3 - \sqrt\lambda\,\mu\,(\pi^k\pi^k)\sigma - \frac{\lambda}{4}\sigma^4 - \frac{\lambda}{2}(\pi^k\pi^k)\sigma^2 - \frac{\lambda}{4}(\pi^k\pi^k)^2,$$

where $k = 1, \ldots, N-1$. The original $O(N)$ global symmetry is no longer manifest, leaving only the subgroup $O(N-1)$. The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.

Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous global symmetry leads to a massless field called the Goldstone boson. In the above example, $O(N)$ has $N(N-1)/2$ continuous symmetries (the dimension of its Lie algebra), while $O(N-1)$ has $(N-1)(N-2)/2$. The number of broken symmetries is their difference, $N-1$, which corresponds to the $N-1$ massless fields $\pi^k$.
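Goldstone's theorem can be checked numerically for this potential (a sketch under the assumptions above, not from the original text): the Hessian of $V(\phi) = -\tfrac{1}{2}\mu^2\phi^i\phi^i + \tfrac{\lambda}{4}(\phi^i\phi^i)^2$ at the minimum should have exactly $N-1$ zero eigenvalues (the $\pi^k$) and one positive eigenvalue $2\mu^2$ (the $\sigma$):

```python
import numpy as np

N, mu, lam = 4, 1.3, 0.7  # illustrative parameter values

def potential_hessian(phi):
    """Hessian of V = -mu^2/2 |phi|^2 + lam/4 |phi|^4 at the point phi."""
    r2 = phi @ phi
    return (-mu**2 + lam * r2) * np.eye(N) + 2 * lam * np.outer(phi, phi)

# Vacuum in the N-th direction, with |phi_0|^2 = mu^2 / lam
phi0 = np.zeros(N)
phi0[-1] = mu / np.sqrt(lam)

eigs = np.linalg.eigvalsh(potential_hessian(phi0))
print(np.round(eigs, 10))   # N-1 zeros (Goldstone modes) and 2*mu^2
print("2*mu^2 =", 2 * mu**2)
```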
On the other hand, when a gauge (as opposed to global) symmetry is spontaneously broken, the resulting Goldstone boson is "eaten" by the corresponding gauge boson by becoming an additional degree of freedom for the gauge boson. The Goldstone boson equivalence theorem states that at high energy, the amplitude for emission or absorption of a longitudinally polarised massive gauge boson becomes equal to the amplitude for emission or absorption of the Goldstone boson that was eaten by the gauge boson.

In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures. In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through spontaneous symmetry breaking of the Higgs field, a process called the Higgs mechanism.

Supersymmetry

All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesised the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.

The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations $P^\mu$ and the Lorentz transformations $J^{\mu\nu}$. In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators $Q_\alpha$, called supercharges, which themselves transform as Weyl fermions. The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, $Q_\alpha^I,\ I = 1, \ldots, \mathcal{N}$, which generate the corresponding $\mathcal{N} = 1$ supersymmetry, $\mathcal{N} = 2$ supersymmetry, and so on. Supersymmetry can also be constructed in other dimensions, most notably in (1+1) dimensions for its application in superstring theory.

The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group. Examples of such theories include: Minimal Supersymmetric Standard Model (MSSM),
it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping.

Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", E(A to D) × E(B to C) − E(A to C) × E(B to D), where we would expect, from our everyday idea of probabilities, that it would be a sum.

Propagators

Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows, where a shorthand symbol such as $x_A$ stands for the four real numbers that give the time and position in three dimensions of the point labeled A.

Mass renormalization

A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, is in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process".

Conclusions

Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem."
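The Fermi–Dirac exchange rule described above is simple arithmetic on complex amplitudes. A toy sketch (with made-up amplitude values, not from the original text) for two electrons going from A and B to C and D:

```python
# Toy amplitudes for single-electron propagation (hypothetical numbers).
E = {('A', 'C'): 0.6 + 0.2j, ('A', 'D'): 0.1 + 0.5j,
     ('B', 'C'): 0.3 - 0.4j, ('B', 'D'): 0.7 + 0.1j}

# Bosons would add the two ways; electrons (fermions) take the difference:
direct    = E[('A', 'D')] * E[('B', 'C')]
exchanged = E[('A', 'C')] * E[('B', 'D')]
amplitude = direct - exchanged        # Fermi-Dirac: exchange flips the sign

print("amplitude   =", amplitude)
print("probability =", abs(amplitude) ** 2)
```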
Mathematical formulation

Mathematically, QED is an abelian gauge theory with the symmetry group U(1). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field is given in natural units by the real part of

$$\mathcal{L} = \bar\psi(i\gamma^\mu D_\mu - m)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu},$$

where

$\gamma^\mu$ are Dirac matrices;
$\psi$ is a bispinor field of spin-1/2 particles (e.g. electron–positron field);
$\bar\psi \equiv \psi^\dagger\gamma^0$, called "psi-bar", is sometimes referred to as the Dirac adjoint;
$D_\mu \equiv \partial_\mu + ieA_\mu + ieB_\mu$ is the gauge covariant derivative;
$e$ is the coupling constant, equal to the electric charge of the bispinor field;
$m$ is the mass of the electron or positron;
$A_\mu$ is the covariant four-potential of the electromagnetic field generated by the electron itself;
$B_\mu$ is the external field imposed by external source;
$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field tensor.

Equations of motion

Substituting the definition of D into the Lagrangian gives

$$\mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi - e\bar\psi\gamma^\mu(A_\mu + B_\mu)\psi - m\bar\psi\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}.$$

From this Lagrangian, the equations of motion for the ψ and A fields can be obtained. Using the field-theoretic Euler–Lagrange equation for ψ, the derivatives of the Lagrangian concerning ψ are

$$\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)} = i\bar\psi\gamma^\mu, \qquad \frac{\partial\mathcal{L}}{\partial\psi} = -e\bar\psi\gamma^\mu(A_\mu + B_\mu) - m\bar\psi.$$

Inserting these into the Euler–Lagrange equation results in

$$i\partial_\mu\bar\psi\gamma^\mu + e\bar\psi\gamma^\mu(A_\mu + B_\mu) + m\bar\psi = 0,$$

with Hermitian conjugate

$$i\gamma^\mu\partial_\mu\psi - e\gamma^\mu(A_\mu + B_\mu)\psi - m\psi = 0.$$

Bringing the middle term to the right-hand side yields

$$i\gamma^\mu\partial_\mu\psi - m\psi = e\gamma^\mu(A_\mu + B_\mu)\psi.$$

The left-hand side is like the original Dirac equation, and the right-hand side is the interaction with the electromagnetic field. Using the Euler–Lagrange equation for the A field, the derivatives this time are

$$\frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)} = \partial^\mu A^\nu - \partial^\nu A^\mu, \qquad \frac{\partial\mathcal{L}}{\partial A_\mu} = -e\bar\psi\gamma^\mu\psi.$$

Substituting back into the Euler–Lagrange equation leads to

$$\partial_\nu F^{\nu\mu} = e\bar\psi\gamma^\mu\psi.$$

Now, if we impose the Lorenz gauge condition

$$\partial_\mu A^\mu = 0,$$

the equations reduce to

$$\Box A^\mu = e\bar\psi\gamma^\mu\psi,$$

which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the d'Alembert operator, $\Box = \partial_\mu\partial^\mu$.)

Interaction picture

This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator, which for a given initial state $|i\rangle$ will give a final state $\langle f|$ in such a way to have

$$M_{fi} = \langle f|U|i\rangle.$$

This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above:

$$V = e\int d^3x\,\bar\psi\gamma^\mu\psi A_\mu,$$

and so, one has

$$U = T\exp\left[-i\int_{t_0}^t dt'\,V(t')\right],$$

where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series is called the Dyson series.
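The Dyson series is the iterated-integral expansion of a time-ordered exponential. For a time-independent Hamiltonian it collapses to the ordinary exponential series, which a small sketch (illustrative, with made-up numbers, not from the original text) can compare against the exact evolution operator:

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-level "interaction Hamiltonian" (hypothetical values).
H = np.array([[0.0, 0.3], [0.3, 0.1]])
t = 1.0

exact = expm(-1j * H * t)

# Truncated series: sum_n (-i H t)^n / n!
U, term = np.zeros_like(exact), np.eye(2, dtype=complex)
for n in range(1, 12):
    U = U + term
    term = term @ (-1j * H * t) / n

print("max deviation from exact:", np.abs(U - exact).max())
# Each added order shrinks the error -- the analogue of including
# higher-order Feynman diagrams, with the coupling as small parameter.
```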
Feynman diagrams

Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta.

Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, each element of a diagram is drawn according to a fixed set of rules. To these rules we must add a further one for closed loops that implies an integration over the loop momenta $\int d^4p/(2\pi)^4$, since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. From the tree-level Feynman diagrams in this case, we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix, from which we can compute the cross section for this scattering.

Nonperturbative phenomena

The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics.

Renormalizability

Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case of quantum electrodynamics displaying just three diverging diagrams. This procedure gives observables in very close agreement with experiment as seen e.g. for the electron gyromagnetic ratio.

Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories.

Nonconvergence of series

An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel.
This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge and is at best an asymptotic series (see the numerical sketch after the list below). From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory.

See also

Abraham–Lorentz force
Anomalous magnetic moment
Bhabha scattering
Cavity quantum electrodynamics
Circuit quantum electrodynamics
Compton scattering
Euler–Heisenberg
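A standard toy model of this behaviour (an illustration, not from the original text) is the integral $Z(g) = \int_{-\infty}^{\infty} e^{-x^2 - gx^4}\,dx$, whose expansion in $g$ has zero radius of convergence because the integral diverges for any $g < 0$, mirroring Dyson's argument. Truncated sums first approach the true value and then blow up:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

g = 0.1
exact, _ = quad(lambda x: np.exp(-x**2 - g * x**4), -np.inf, np.inf)

# Perturbative coefficients: Z(g) = sum_n (-g)^n Gamma(2n + 1/2) / n!
partial = 0.0
for n in range(12):
    partial += (-g)**n * gamma(2 * n + 0.5) / gamma(n + 1)
    print(f"order {n:2d}: partial sum = {partial:12.6f}   exact = {exact:.6f}")
# The partial sums close in on the exact value, then diverge: the
# hallmark of an asymptotic series with zero radius of convergence.
```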
any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman.

The basic rules of probability amplitudes that will be used are:

a) if an event can happen in alternative ways, the probability amplitudes of all the alternatives are added;
b) if a process consists of a number of independent sub-processes, their probability amplitudes are multiplied.

Basic constructions

Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.

But there are other ways in which the end result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertices – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.

There is an infinite number of other intermediate "virtual" processes in which more and more photons are absorbed and/or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED.
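Rules a) and b) together already give a crude "sum over ways" calculator. The sketch below (a toy with made-up amplitudes, not from the original text) multiplies sub-amplitudes along each alternative history and then adds the alternatives:

```python
# Made-up sub-process amplitudes for the direct history.
E_A_to_C = 0.5 + 0.1j           # electron goes directly A -> C
P_B_to_D = 0.6 - 0.2j           # photon goes directly B -> D

# Alternative histories (electron absorbs at E, re-emits at F), with
# one pre-computed amplitude per sampled choice of E and F.
alternatives = [0.1 + 0.2j, 0.05 - 0.1j, -0.02 + 0.07j]

direct = E_A_to_C * P_B_to_D        # rule b: multiply sub-amplitudes
total = direct + sum(alternatives)  # rule a: add the alternatives
print("total amplitude:", total)
print("probability    :", abs(total) ** 2)
```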
To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.

That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.)

Probability amplitudes

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers.

Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, $v$ and $w$, are involved, the probability of the process will be given either by

$$P = |v + w|^2$$

or

$$P = |vw|^2.$$

The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers.

Addition and multiplication are common operations in the theory of complex numbers. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach.
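The "arrows" are exactly complex numbers, so the shrink-and-turn rules are ordinary complex addition and multiplication. A small sketch (illustrative values, not from the original text):

```python
import cmath

# Two amplitude "arrows" given as (length, angle): complex numbers.
v = cmath.rect(0.8, cmath.pi / 6)   # length 0.8, turned 30 degrees
w = cmath.rect(0.5, cmath.pi / 3)   # length 0.5, turned 60 degrees

# Arrow addition: tip-to-tail, i.e. complex addition.
s = v + w

# Arrow multiplication: lengths multiply, angles add.
p = v * w
print("product length:", abs(p))                            # 0.8 * 0.5 = 0.4
print("product angle :", cmath.phase(p) * 180 / cmath.pi)   # 30 + 60 = 90

# Probability of an event = squared length of its amplitude arrow.
print("probability of the summed process:", abs(s) ** 2)
```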
There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations; this is only of interest for the detailed bookkeeping. Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude
i <= 25; i++)", " cout << l[i] << endl;", " for(int i = 0; i <= 34; i++)", " cout << l[0] + q + l[i] + q + ',' << endl;", " for(int i = 26; i <= 34; i++)", " cout << l[i] << endl;", " return 0;", "}", "=============<<<<<<<< Java Code >>>>>>>>=============", "public class Quine", "{", " public static void main(String[] args)", " {", " char q = 34;", " String[] l = {", " };", " for(int i = 2; i <= 9; i++)", " System.out.println( l[i] );", " for(int i = 0; i < l.length; i++)", " System.out.println(l[0] + q + l[i] + q + ',');", " for(int i = 10; i <= 18; i++)", " System.out.println(l[i]);", " }", "}", }; for(int i = 20; i <= 25; i++) cout << l[i] << endl; for(int i = 0; i <= 34; i++) cout << l[0] + q + l[i] + q + ',' << endl; for(int i = 26; i <= 34; i++) cout << l[i] << endl; return 0; }public class Quine { public static void main(String[] args) { char q = 34; String[] l = { " ", "=============<<<<<<<< C++ Code >>>>>>>>=============", "#include <iostream>", "#include <string>", "using namespace std;", "", "int main(int argc, char* argv[])", "{", " char q = 34;", " string l[] = {", " };", " for(int i = 20; i <= 25; i++)", " cout << l[i] << endl;", " for(int i = 0; i <= 34; i++)", " cout << l[0] + q + l[i] + q + ',' << endl;", " for(int i = 26; i <= 34; i++)", " cout << l[i] << endl;", " return 0;", "}", "=============<<<<<<<< Java Code >>>>>>>>==========", "public class Quine", "{", " public static void main( String[] args )", " {", " char q = 34;", " String[] l = {", " };", " for(int i = 2; i <= 9; i++)", " System.out.println(l[i]);", " for(int i = 0; i < l.length; i++)", " System.out.println( l[0] + q + l[i] + q + ',' );", " for(int i = 10; i <= 18; i++))", " System.out.println(l[i]);", " }", "}", }; for(int i = 2; i <= 9; i++) System.out.println(l[i]); for(int i = 0; i < l.length; i++) System.out.println( l[0] + q + l[i] + q + ',' ); for(int i = 10; i <= 18; i++) System.out.println(l[i]); } } Such programs have been produced with various cycle lengths: Haskell → Python → Ruby Python → Bash → Perl C → Haskell → Python → Perl Haskell → Perl → Python → Ruby → C → Java Ruby → Java → C# → Python C → C++ → Ruby → Python → PHP → Perl Ruby → Python → Perl → Lua → OCaml → Haskell → C → Java → Brainfuck → Whitespace → Unlambda Ruby → Scala → Scheme → Scilab → Shell (bash) → S-Lang → Smalltalk → Squirrel3 → Standard ML → ... → Rexx (128 (and formerly 50) programming languages) Web application → C (web application source code consists of HTML, JavaScript, and CSS) Multiquines David Madore, creator of Unlambda, describes multiquines as follows: "A multiquine is a set of r different programs (in r different languages – without this condition we could take them all equal to a single quine), each of which is able to print any of the r programs (including itself) according to the command line argument it is passed. (Note that cheating is not allowed: the command line arguments must not be too long – passing the full text of a program is considered cheating)." A multiquine consisting of 2 languages (or biquine) would be a program which: When run, is a quine in language X. When supplied with a user-defined command line argument, would print a second program in language Y. Given the second program in language Y, when run normally, would also be a quine in language Y. Given the second program in language Y, and supplied with a user-defined command line argument, would produce the original program in language X. A biquine could
"self-reproducing programs", and "self-copying programs". A quine is a fixed point of an execution environment, when the execution environment is viewed as a function transforming programs into their outputs. Quines are possible in any Turing-complete programming language, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language. The name "quine" was coined by Douglas Hofstadter, in his popular science book Gödel, Escher, Bach, in honor of philosopher Willard Van Orman Quine (1908–2000), who made an extensive study of indirect self-reference, and in particular for the following paradox-producing expression, known as Quine's paradox: "Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation. History The idea of self-reproducing automata came from the dawn of computing, if not before. John von Neumann theorized about them in the 1940s. Later, Paul Bratley and Jean Millo's article "Computer Recreations: Self-Reproducing Automata" discussed them in 1972. Bratley first became interested in self-reproducing programs after seeing the first known such program written in Atlas Autocode at Edinburgh in the 1960s by the University of Edinburgh lecturer and researcher Hamish Dewar. The "download source" requirement of the Affero General Public License is based on the idea of a quine. Examples Constructive quines In general, the method used to create a quine in any programming language is to have, within the program, two pieces: (a) code used to do the actual printing and (b) data that represents the textual form of the code. The code functions by using the data to print the code (which makes sense since the data represents the textual form of the code), but it also uses the data, processed in a simple way, to print the textual representation of the data itself. Here are three small examples in Python3: a='a=%s%s%s;print(a%%(chr(39),a,chr(39)))';print(a%(chr(39),a,chr(39))) b='b={}{}{};print(b.format(chr(39),b,chr(39)))';print(b.format(chr(39),b,chr(39))) c='c=%r;print(c%%c)';print(c%c) #note that %r will quote automatically In Python 3.8: exec(s:='print("exec(s:=%r)"%s)') The following Java code demonstrates the basic structure of a quine. public class Quine { public static void main(String[] args) { char q = 34; // Quotation mark character String[] l = { // Array of source code "public class Quine", "{", " public static void main(String[] args)", " {", " char q = 34; // Quotation mark character", " String[] l = { // Array of source code", " ", " };", " for(int i = 0; i < 6; i++) // Print opening code", " System.out.println(l[i]);", " for(int i = 0; i < l.length; i++) // Print string array", " System.out.println(l[6] + q + l[i] + q + ',');", " for(int i = 7; i < l.length; i++) // Print this code", " System.out.println(l[i]);", " }", "}", }; for(int i = 0; i < 6; i++) // Print opening code System.out.println(l[i]); for(int i = 0; i < l.length; i++) // Print string array System.out.println(l[6] + q + l[i] + q + ','); for(int i = 7; i < l.length; i++) // Print this code System.out.println(l[i]); } } The source code contains a string array of itself, which is output twice, once inside quotation marks. This code was adapted from an
There is a categorical interpretation of this construction. Let $\mathbf{C}$ be the category of integral domains and injective ring maps. The functor from $\mathbf{C}$ to the category of fields which takes every integral domain to its fraction field and every homomorphism to the induced map on fields (which exists by the universal property) is the left adjoint of the inclusion functor from the category of fields to $\mathbf{C}$. Thus the category of fields (which is a full subcategory) is a reflective subcategory of $\mathbf{C}$. A multiplicative identity is not required for the role of the integral domain; this construction can be applied to any nonzero commutative rng with no nonzero zero divisors. The embedding is given by $n \mapsto \frac{nd}{d}$ for any nonzero $d$. Examples The field of fractions of the ring of integers is the field of rationals: $\operatorname{Frac}(\mathbb{Z}) = \mathbb{Q}$. Let $R = \mathbb{Z}[i]$ be the ring of Gaussian integers. Then $\operatorname{Frac}(R) = \mathbb{Q}(i)$, the field of Gaussian rationals. The field of fractions of a field is canonically isomorphic to the field itself. Given a field $K$, the field of fractions of the polynomial ring $K[X]$ in one indeterminate (which is an integral domain), is called the field of rational functions, field of rational fractions, or field of rational
The field of fractions of an integral domain $R$ consists of ratios between integral domain elements. The field of fractions of $R$ is sometimes denoted by $\operatorname{Frac}(R)$ or $\operatorname{Quot}(R)$, and the construction is sometimes also called the fraction field, field of quotients, or quotient field of $R$. All four are in common usage, but are not to be confused with the quotient of a ring by an ideal, which is a quite different concept. For a commutative ring which is not an integral domain, the analogous construction is called the localization or ring of quotients. Definition Given an integral domain $R$ and letting $R^* = R \setminus \{0\}$, we define an equivalence relation on $R \times R^*$ by letting $(n, d) \sim (m, b)$ whenever $nb = md$. We denote the equivalence class of $(n, d)$ by $\frac{n}{d}$. This notion of equivalence is motivated by the rational numbers $\mathbb{Q}$, which have the same property with respect to the underlying ring $\mathbb{Z}$ of integers. Then the field of fractions is the set $\operatorname{Frac}(R) = (R \times R^*)/\sim$ with addition given by $\frac{n}{d} + \frac{m}{b} = \frac{nb + md}{db}$ and multiplication given by $\frac{n}{d} \cdot \frac{m}{b} = \frac{nm}{db}$. One may check that these operations are well-defined and that, for any integral domain $R$, $\operatorname{Frac}(R)$ is indeed a field. In particular, for $n, d \neq 0$, the multiplicative inverse of $\frac{n}{d}$ is as expected: $\frac{d}{n}$. The embedding of $R$ in $\operatorname{Frac}(R)$ maps each $n$ in $R$ to the fraction $\frac{en}{e}$ for any nonzero $e$ (the equivalence class is independent of the choice of $e$). This is modeled on the identity $\frac{n}{1} = n$. The field of fractions of $R$ is characterized by the following universal property: if $h : R \to F$ is an injective ring homomorphism from $R$ into a field $F$, then there exists a unique ring homomorphism $g : \operatorname{Frac}(R) \to F$ which extends $h$. There is a categorical interpretation of this construction. Let $\mathbf{C}$ be the category of integral domains and injective ring maps. The functor from $\mathbf{C}$ to the category of fields which takes every integral domain to its fraction field and every homomorphism to the induced map on fields (which exists by the universal property) is the left adjoint of the inclusion functor from the category of fields to $\mathbf{C}$. Thus the category of fields is a reflective subcategory of the category of integral domains.
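As an illustration of the definition, here is a minimal Python sketch of the construction for the integral domain of the integers (not part of the article; Python's built-in fractions.Fraction is the production version of the same idea, and the toy class below only mirrors the formulas above):

from math import gcd

class Frac:
    def __init__(self, n, d):
        assert d != 0                    # denominators come from R* = R \ {0}
        g = gcd(n, d)
        self.n, self.d = n // g, d // g  # pick a smaller representative

    def __eq__(self, other):             # (n, d) ~ (m, b) whenever nb = md
        return self.n * other.d == other.n * self.d

    def __add__(self, other):            # n/d + m/b = (nb + md)/(db)
        return Frac(self.n * other.d + other.n * self.d, self.d * other.d)

    def __mul__(self, other):            # (n/d)(m/b) = nm/(db)
        return Frac(self.n * other.n, self.d * other.d)

    def inverse(self):                   # the inverse of n/d is d/n
        return Frac(self.d, self.n)

embed = lambda n: Frac(n, 1)             # the embedding of Z in Frac(Z)
x = Frac(2, 3)
print(x * x.inverse() == embed(1))       # True: every nonzero element is invertible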
A number of proofs, especially those based on Gauss's Lemma, explicitly calculate this formula. The supplementary laws using Legendre symbols The two supplements are $\left(\frac{-1}{p}\right) = (-1)^{\frac{p-1}{2}}$ and $\left(\frac{2}{p}\right) = (-1)^{\frac{p^2-1}{8}}$. From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows: For −2 to be a quadratic residue, −1 and 2 must be either both quadratic residues or both non-residues: $\left(\frac{-2}{p}\right) = \left(\frac{-1}{p}\right)\left(\frac{2}{p}\right) = (-1)^{\frac{p-1}{2} + \frac{p^2-1}{8}}$. So either $\frac{p-1}{2}$ and $\frac{p^2-1}{8}$ are both even, or they are both odd. The sum of these two expressions is $\frac{p-1}{2} + \frac{p^2-1}{8} = \frac{(p-1)(p+5)}{8}$, which is an integer. Therefore, $\left(\frac{-2}{p}\right) = (-1)^{\frac{(p-1)(p+5)}{8}}$. Legendre's attempt to prove reciprocity is based on a theorem of his: Legendre's Theorem. Let a, b and c be integers where any pair of the three are relatively prime. Moreover assume that at least one of ab, bc or ca is negative (i.e. they don't all have the same sign). If $u^2 \equiv -bc \pmod{a}$, $v^2 \equiv -ca \pmod{b}$ and $w^2 \equiv -ab \pmod{c}$ are solvable, then the following equation has a nontrivial solution in integers: $ax^2 + by^2 + cz^2 = 0$. Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that $\left(\frac{b}{a}\right) = 1$ and, contrary to the theorem, that $\left(\frac{a}{b}\right) = -1$. Then $x^2 + ay^2 - bz^2 = 0$ has a nontrivial solution, and taking congruences (mod 4) leads to a contradiction. This technique doesn't work for Theorem VIII. Let b ≡ B ≡ 3 (mod 4), and assume, contrary to the theorem, that $\left(\frac{B}{b}\right) = \left(\frac{b}{B}\right) = 1$. Then if there is another prime p ≡ 1 (mod 4) such that $\left(\frac{p}{b}\right) = \left(\frac{p}{B}\right) = -1$, the solvability of the corresponding Legendre equation leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is: Legendre's Lemma. If p is a prime that is congruent to 1 modulo 4 then there exists an odd prime q such that $\left(\frac{p}{q}\right) = -1$, but he couldn't prove that either. Hilbert symbol (below) discusses how techniques based on the existence of solutions to $ax^2 + by^2 + cz^2 = 0$ can be made to work. Gauss Gauss first proves the supplementary laws. He sets the basis for induction by proving the theorem for ±3 and ±5. Noting that it is easier to state for −3 and +5 than it is for +3 or −5, he states the general theorem in the form: If p is a prime of the form 4n + 1 then p, but if p is of the form 4n + 3 then −p, is a quadratic residue (resp. nonresidue) of every prime, which, with a positive sign, is a residue (resp. nonresidue) of p. In the next sentence, he christens it the "fundamental theorem" (Gauss never used the word "reciprocity"). Introducing the notation a R b (resp. a N b) to mean a is a quadratic residue (resp. nonresidue) (mod b), and letting a, a′, etc. represent positive primes ≡ 1 (mod 4) and b, b′, etc. positive primes ≡ 3 (mod 4), he breaks it out into the same 8 cases as Legendre: In the next Article he generalizes this to what are basically the rules for the Jacobi symbol (below). Letting A, A′, etc. represent any (prime or composite) positive numbers ≡ 1 (mod 4) and B, B′, etc. positive numbers ≡ 3 (mod 4): All of these cases take the form "if a prime is a residue (mod a composite), then the composite is a residue or nonresidue (mod the prime), depending on the congruences (mod 4)". He proves that these follow from cases 1) - 8). Gauss needed, and was able to prove, a lemma similar to the one Legendre needed: Gauss's Lemma. If p is a prime congruent to 1 modulo 8 then there exists an odd prime q such that $q < 2\sqrt{p} + 1$ and $\left(\frac{p}{q}\right) = -1$. The proof of quadratic reciprocity uses complete induction. Gauss's Version in Legendre Symbols. $\left(\frac{p}{q}\right) = \left(\frac{q}{p}\right)$ if $q \equiv 1 \pmod{4}$, and $\left(\frac{p}{q}\right) = \left(\frac{-q}{p}\right)$ if $q \equiv 3 \pmod{4}$. These can be combined: Gauss's Combined Version in Legendre Symbols. Let $q^* = (-1)^{\frac{q-1}{2}} q$. In other words: $|q^*| = q$ and $q^* \equiv 1 \pmod{4}$. Then: $\left(\frac{p}{q}\right) = \left(\frac{q^*}{p}\right)$. A number of proofs of the theorem, especially those based on Gauss sums or the splitting of primes in algebraic number fields, derive this formula. Other statements The statements in this section are equivalent to quadratic reciprocity: if, for example, Euler's version is assumed, the Legendre-Gauss version can be deduced from it, and vice versa.
Euler's Formulation of Quadratic Reciprocity. If $p \equiv \pm q \pmod{4a}$ then $\left(\frac{a}{p}\right) = \left(\frac{a}{q}\right)$. This can be proven using Gauss's lemma. Quadratic Reciprocity (Gauss; Fourth Proof). Let a, b, c, ... be unequal positive odd primes, whose product is n, and let m be the number of them that are ≡ 3 (mod 4); check whether n/a is a residue of a, whether n/b is a residue of b, .... The number of nonresidues found will be even when m ≡ 0, 1 (mod 4), and it will be odd if m ≡ 2, 3 (mod 4). Gauss's fourth proof consists of proving this theorem (by comparing two formulas for the value of Gauss sums) and then restricting it to two primes. He then gives an example: Let a = 3, b = 5, c = 7, and d = 11. Three of these, 3, 7, and 11 ≡ 3 (mod 4), so m ≡ 3 (mod 4). 5×7×11 R 3; 3×7×11 R 5; 3×5×11 R 7; and 3×5×7 N 11, so there are an odd number of nonresidues. Eisenstein's Formulation of Quadratic Reciprocity. Assume $p \neq q$, $p' \neq q'$, $p \equiv p' \pmod{4}$ and $q \equiv q' \pmod{4}$. Then $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = \left(\frac{p'}{q'}\right)\left(\frac{q'}{p'}\right)$. Mordell's Formulation of Quadratic Reciprocity. Let a, b and c be integers. For every prime p dividing abc, if the congruence $ax^2 + by^2 + cz^2 \equiv 0 \pmod{\frac{abc}{p}}$ has a nontrivial solution, then so does $ax^2 + by^2 + cz^2 \equiv 0 \pmod{abc}$. Zeta function formulation As mentioned in the article on Dedekind zeta functions, quadratic reciprocity is equivalent to the zeta function of a quadratic field being the product of the Riemann zeta function and a certain Dirichlet L-function. Jacobi symbol The Jacobi symbol is a generalization of the Legendre symbol; the main difference is that the bottom number has to be positive and odd, but does not have to be prime. If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular $\left(\frac{-1}{n}\right) = (-1)^{\frac{n-1}{2}}$ and $\left(\frac{2}{n}\right) = (-1)^{\frac{n^2-1}{8}}$, and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"): $\left(\frac{m}{n}\right)\left(\frac{n}{m}\right) = (-1)^{\frac{m-1}{2} \cdot \frac{n-1}{2}}$. However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator. Gauss's cases 9) - 14) above can be expressed in terms of Jacobi symbols: $\left(\frac{M}{p}\right) = (-1)^{\frac{p-1}{2} \cdot \frac{M-1}{2}} \left(\frac{p}{M}\right)$, and since p is prime the left hand side is a Legendre symbol, and we know whether M is a residue modulo p or not. The formulas listed in the preceding section are true for Jacobi symbols as long as the symbols are defined. Euler's formula may be written $\left(\frac{a}{m}\right) = \left(\frac{a}{m \pm 4an}\right)$ for any integer n with $m \pm 4an$ positive and odd. Example. 2 is a residue modulo the primes 7, 23 and 31: $\left(\frac{2}{7}\right) = \left(\frac{2}{23}\right) = \left(\frac{2}{31}\right) = 1$. But 2 is not a quadratic residue modulo 5, so it can't be one modulo 15. This is related to the problem Legendre had: if $\left(\frac{a}{m}\right) = -1$ then a is a non-residue modulo every prime in the arithmetic progression m + 4a, m + 8a, ..., if there are any primes in this series, but that wasn't proved until decades after Legendre. Eisenstein's formula requires relative primality conditions (which are true if the numbers are prime): Let $a$, $b$, $a'$, $b'$ be positive odd integers such that $\gcd(a, b) = \gcd(a', b') = 1$, $a \equiv a' \pmod{4}$ and $b \equiv b' \pmod{4}$. Then $\left(\frac{a}{b}\right)\left(\frac{b}{a}\right) = \left(\frac{a'}{b'}\right)\left(\frac{b'}{a'}\right)$. Hilbert symbol The quadratic reciprocity law can be formulated in terms of the Hilbert symbol $(a, b)_v$, where a and b are any two nonzero rational numbers and v runs over all the non-trivial absolute values of the rationals (the Archimedean one and the p-adic absolute values for primes p). The Hilbert symbol $(a, b)_v$ is 1 or −1. It is defined to be 1 if and only if the equation $z^2 = ax^2 + by^2$ has a solution in the completion of the rationals at v other than $x = y = z = 0$. The Hilbert reciprocity law states that $(a, b)_v$, for fixed a and b and varying v, is 1 for all but finitely many v and the product of $(a, b)_v$ over all v is 1. (This formally resembles the residue theorem from complex analysis.) The proof of Hilbert reciprocity reduces to checking a few special cases, and the non-trivial cases turn out to be equivalent to the main law and the two supplementary laws of quadratic reciprocity for the Legendre symbol.
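Because all of these statements reduce to the same law, they can be checked numerically. The following short Python script (a sketch, not part of the article) computes Legendre symbols via Euler's criterion and verifies the main law and both supplements for all pairs of small odd primes:

def legendre(a, p):
    # Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
    # a^((p-1)/2) mod p, mapped from {0, 1, p-1} to {0, 1, -1}.
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
for p in primes:
    assert legendre(-1, p) == (-1) ** ((p - 1) // 2)     # first supplement
    assert legendre(2, p) == (-1) ** ((p * p - 1) // 8)  # second supplement
    for q in primes:
        if p != q:                                       # the main law
            sign = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
            assert legendre(p, q) * legendre(q, p) == sign
print("quadratic reciprocity holds for all pairs tested")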
There is no kind of reciprocity in the Hilbert reciprocity law; its name simply indicates the historical source of the result in quadratic reciprocity. Unlike quadratic reciprocity, which requires sign conditions (namely positivity of the primes involved) and a special treatment of the prime 2, the Hilbert reciprocity law treats all absolute values of the rationals on an equal footing. Therefore, it is a more natural way of expressing quadratic reciprocity with a view towards generalization: the Hilbert reciprocity law extends with very few changes to all global fields and this extension can rightly be considered a generalization of quadratic reciprocity to all global fields. Connection with cyclotomic fields The early proofs of quadratic reciprocity are relatively unilluminating. The situation changed when Gauss used Gauss sums to show that quadratic fields are subfields of cyclotomic fields, and implicitly deduced quadratic reciprocity from a reciprocity theorem for cyclotomic fields. His proof was cast in modern form by later algebraic number theorists. This proof served as a template for class field theory, which can be viewed as a vast generalization of quadratic reciprocity. Robert Langlands formulated the Langlands program, which gives a conjectural vast generalization of class field theory. He wrote: I confess that, as a student unaware of the history of the subject and unaware of the connection with cyclotomy, I did not find the law or its so-called elementary proofs appealing. I suppose, although I would not have (and could not have) expressed myself in this way that I saw it as little more than a mathematical curiosity, fit more for amateurs than for the attention of the serious mathematician that I then hoped to become. It was only in Hermann Weyl's book on the algebraic theory of numbers that I appreciated it as anything more. Other rings There are also quadratic reciprocity laws in rings other than the integers. Gaussian integers In his second monograph on quartic reciprocity Gauss stated quadratic reciprocity for the ring $\mathbb{Z}[i]$ of Gaussian integers, saying that it is a corollary of the biquadratic law in $\mathbb{Z}[i]$, but did not provide a proof of either theorem. Dirichlet showed that the law in $\mathbb{Z}[i]$ can be deduced from
41, or 47. The former are ≡ 1 (mod 3) and the latter ≡ 2 (mod 3). Since the only residue (mod 3) is 1, we see that −3 is a quadratic residue modulo every prime which is a residue modulo 3. q = ±5 5 is in rows 11, 19, 29, 31, and 41 but not in rows 3, 7, 13, 17, 23, 37, 43, or 47. The former are ≡ ±1 (mod 5) and the latter are ≡ ±2 (mod 5). Since the only residues (mod 5) are ±1, we see that 5 is a quadratic residue modulo every prime which is a residue modulo 5. −5 is in rows 3, 7, 23, 29, 41, 43, and 47 but not in rows 11, 13, 17, 19, 31, or 37. The former are ≡ 1, 3, 7, 9 (mod 20) and the latter are ≡ 11, 13, 17, 19 (mod 20). Higher q The observations about −3 and 5 continue to hold: −7 is a residue modulo p if and only if p is a residue modulo 7, −11 is a residue modulo p if and only if p is a residue modulo 11, 13 is a residue (mod p) if and only if p is a residue modulo 13, etc. The more complicated-looking rules for the quadratic characters of 3 and −5, which depend upon congruences modulo 12 and 20 respectively, are simply the ones for −3 and 5 working with the first supplement. Example. For −5 to be a residue (mod p), either both 5 and −1 have to be residues (mod p) or they both have to be non-residues: i.e., p ≡ ±1 (mod 5) and p ≡ 1 (mod 4) or p ≡ ±2 (mod 5) and p ≡ 3 (mod 4). Using the Chinese remainder theorem these are equivalent to p ≡ 1, 9 (mod 20) or p ≡ 3, 7 (mod 20). The generalization of the rules for −3 and 5 is Gauss's statement of quadratic reciprocity. Statement of the theorem Quadratic Reciprocity (Gauss's statement). If $q \equiv 1 \pmod{4}$, then the congruence $x^2 \equiv p \pmod{q}$ is solvable if and only if $x^2 \equiv q \pmod{p}$ is solvable. If $q \equiv 3 \pmod{4}$ and $p \equiv 3 \pmod{4}$, then the congruence $x^2 \equiv p \pmod{q}$ is solvable if and only if $x^2 \equiv -q \pmod{p}$ is solvable. Quadratic Reciprocity (combined statement). Define $q^* = (-1)^{\frac{q-1}{2}} q$. Then the congruence $x^2 \equiv p \pmod{q}$ is solvable if and only if $x^2 \equiv q^* \pmod{p}$ is solvable. Quadratic Reciprocity (Legendre's statement). If p or q are congruent to 1 modulo 4, then: $x^2 \equiv q \pmod{p}$ is solvable if and only if $x^2 \equiv p \pmod{q}$ is solvable. If p and q are congruent to 3 modulo 4, then: $x^2 \equiv q \pmod{p}$ is solvable if and only if $x^2 \equiv p \pmod{q}$ is not solvable. The last is immediately equivalent to the modern form stated in the introduction above. It is a simple exercise to prove that Legendre's and Gauss's statements are equivalent – it requires no more than the first supplement and the facts about multiplying residues and nonresidues. Proof Apparently, the shortest known proof was published by B. Veklych in the American Mathematical Monthly. Proofs of the supplements The value of the Legendre symbol of $-1$ (used in the proof above) follows directly from Euler's criterion: $\left(\frac{-1}{p}\right) \equiv (-1)^{\frac{p-1}{2}} \pmod{p}$ by Euler's criterion, but both sides of this congruence are numbers of the form $\pm 1$, so they must be equal. Whether $2$ is a quadratic residue can be concluded if we know the number of solutions of the equation $x^2 + y^2 = 2$ with $x, y \in \mathbb{Z}_p$, which can be solved by standard methods. Namely, all its solutions where $xy \neq 0$ and $x \neq \pm y$ can be grouped into octuplets of the form $(\pm x, \pm y)$, $(\pm y, \pm x)$, and what is left are four solutions of the form $(\pm 1, \pm 1)$ and possibly four additional solutions where $x^2 = 2, y = 0$ and $x = 0, y^2 = 2$, which exist precisely if $2$ is a quadratic residue. That is, $2$ is a quadratic residue precisely if the number of solutions of this equation is divisible by $8$. And this equation can be solved in just the same way here as over the rational numbers: substitute $x = a + 1$ and $y = at + 1$, where we demand that $a \neq 0$ (leaving out the two solutions $(1, \pm 1)$), then the original equation transforms into $a = -\frac{2(t + 1)}{t^2 + 1}$. Here $t$ can have any value that does not make the denominator zero - for which there are $1 + \left(\frac{-1}{p}\right)$ excluded possibilities (i.e. two if $-1$ is a residue, none if not) - and also does not make $a$ zero, which excludes one more option, $t = -1$.
Thus there are $p - 2 - \left(\frac{-1}{p}\right)$ possibilities for $t$, and so together with the two excluded solutions there are $p - \left(\frac{-1}{p}\right)$ overall solutions of the original equation. Therefore, $2$ is a residue modulo $p$ if and only if $8$ divides $p - (-1)^{\frac{p-1}{2}}$. This is a reformulation of the condition stated above. History and alternative statements The theorem was formulated in many ways before its modern form: Euler and Legendre did not have Gauss's congruence notation, nor did Gauss have the Legendre symbol. In this article p and q always refer to distinct positive odd primes, and x and y to unspecified integers. Fermat Fermat proved (or claimed to have proved) a number of theorems about expressing a prime by a quadratic form: $p = x^2 + y^2$ if and only if $p \equiv 1 \pmod{4}$; $p = x^2 + 2y^2$ if and only if $p \equiv 1, 3 \pmod{8}$; $p = x^2 + 3y^2$ if and only if $p \equiv 1 \pmod{3}$. He did not state the law of quadratic reciprocity, although the cases −1, ±2, and ±3 are easy deductions from these and other of his theorems. He also claimed to have a proof that if the prime number p ends with 7 (in base 10) and the prime number q ends in 3, and p ≡ q ≡ 3 (mod 4), then $pq = x^2 + 5y^2$. Euler conjectured, and Lagrange proved, that $p \equiv 1, 9 \pmod{20}$ implies $p = x^2 + 5y^2$, and that if $p, q \equiv 3, 7 \pmod{20}$ then $pq = x^2 + 5y^2$. Proving these and other statements of Fermat was one of the things that led mathematicians to the reciprocity theorem. Euler Translated into modern notation, Euler stated that for distinct odd primes p and q: If q ≡ 1 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b such that p ≡ b² (mod q). If q ≡ 3 (mod 4) then q is a quadratic residue (mod p) if and only if there exists some integer b which is odd and not divisible by q such that p ≡ ±b² (mod 4q). This is equivalent to quadratic reciprocity. He could not prove it, but he did prove the second supplement. Legendre and his symbol Fermat proved that if p is a prime number and a is an integer, $a^p \equiv a \pmod{p}$. Thus if p does not divide a, $a^{\frac{p-1}{2}} \equiv \pm 1 \pmod{p}$, using the non-obvious fact (see for example Ireland and Rosen below) that the residues modulo p form a field and therefore in particular the multiplicative group is cyclic, hence there can be at most two solutions to a quadratic equation. Legendre lets a and A represent positive primes ≡ 1 (mod 4) and b and B positive primes ≡ 3 (mod 4), and sets out a table of eight theorems that together are equivalent to quadratic reciprocity: He says that since expressions of the form $a^{\frac{N-1}{2}} \pmod{N}$ will come up so often he will abbreviate them as $\left(\frac{a}{N}\right)$. This is now known as the Legendre symbol, and an equivalent definition is used today: for all integers a and all odd primes p, $\left(\frac{a}{p}\right)$ is $0$ if p divides a, $1$ if p does not divide a and a is a quadratic residue (mod p), and $-1$ if p does not divide a and a is a nonresidue (mod p). Legendre's version of quadratic reciprocity $\left(\frac{p}{q}\right) = \left(\frac{q}{p}\right)$ if $p \equiv 1 \pmod{4}$ or $q \equiv 1 \pmod{4}$, and $\left(\frac{p}{q}\right) = -\left(\frac{q}{p}\right)$ if $p \equiv q \equiv 3 \pmod{4}$. He notes that these can be combined: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \cdot \frac{q-1}{2}}$. A number of proofs, especially those based on Gauss's Lemma, explicitly calculate this formula. The supplementary laws using Legendre symbols The two supplements are $\left(\frac{-1}{p}\right) = (-1)^{\frac{p-1}{2}}$ and $\left(\frac{2}{p}\right) = (-1)^{\frac{p^2-1}{8}}$. From these two supplements, we can obtain a third reciprocity law for the quadratic character −2 as follows: For −2 to be a quadratic residue, −1 and 2 must be either both quadratic residues or both non-residues: $\left(\frac{-2}{p}\right) = \left(\frac{-1}{p}\right)\left(\frac{2}{p}\right) = (-1)^{\frac{p-1}{2} + \frac{p^2-1}{8}}$. So either $\frac{p-1}{2}$ and $\frac{p^2-1}{8}$ are both even, or they are both odd. The sum of these two expressions is $\frac{p-1}{2} + \frac{p^2-1}{8} = \frac{(p-1)(p+5)}{8}$, which is an integer. Therefore, $\left(\frac{-2}{p}\right) = (-1)^{\frac{(p-1)(p+5)}{8}}$. Legendre's attempt to prove reciprocity is based on a theorem of his: Legendre's Theorem. Let a, b and c be integers where any pair of the three are relatively prime. Moreover assume that at least one of ab, bc or ca is negative (i.e. they don't all have the same sign). If $u^2 \equiv -bc \pmod{a}$, $v^2 \equiv -ca \pmod{b}$ and $w^2 \equiv -ab \pmod{c}$ are solvable, then the following equation has a nontrivial solution in integers: $ax^2 + by^2 + cz^2 = 0$. Example. Theorem I is handled by letting a ≡ 1 and b ≡ 3 (mod 4) be primes and assuming that $\left(\frac{b}{a}\right) = 1$ and, contrary to the theorem, that $\left(\frac{a}{b}\right) = -1$. Then $x^2 + ay^2 - bz^2 = 0$ has a nontrivial solution, and taking congruences (mod 4) leads to a contradiction. This technique doesn't work for Theorem VIII.
Let b ≡ B ≡ 3 (mod 4), and assume, contrary to the theorem, that $\left(\frac{B}{b}\right) = \left(\frac{b}{B}\right) = 1$. Then if there is another prime p ≡ 1 (mod 4) such that $\left(\frac{p}{b}\right) = \left(\frac{p}{B}\right) = -1$, the solvability of the corresponding Legendre equation leads to a contradiction (mod 4). But Legendre was unable to prove there has to be such a prime p; he was later able to show that all that is required is: Legendre's Lemma. If p is a prime that is congruent to 1 modulo 4 then there exists an odd prime q such that $\left(\frac{p}{q}\right) = -1$, but he couldn't prove that either. Hilbert symbol (below) discusses how techniques based on the existence of solutions to $ax^2 + by^2 + cz^2 = 0$ can be made to work. Gauss Gauss first proves the supplementary laws. He sets the basis for induction by proving the theorem for ±3 and ±5. Noting that it is easier to state for −3 and +5 than it is for +3 or −5, he states the general theorem in the form: If p is a prime of the form 4n + 1 then p, but if p is of the form 4n + 3 then −p, is a quadratic residue (resp. nonresidue) of every prime, which, with a positive sign, is a residue (resp. nonresidue) of p. In the next sentence, he christens it the "fundamental theorem" (Gauss never used the word "reciprocity"). Introducing the notation a R b (resp. a N b) to mean a is a quadratic residue (resp. nonresidue) (mod b), and letting a,
conditional quantum entropy. Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information, and despite the qubit state being continuous-valued, it is impossible to measure the value precisely. Five famous theorems describe the limits on manipulation of quantum information:
the no-teleportation theorem, which states that a qubit cannot be (wholly) converted into classical bits; that is, it cannot be fully "read";
the no-cloning theorem, which prevents an arbitrary qubit from being copied;
the no-deleting theorem, which prevents an arbitrary qubit from being deleted;
the no-broadcast theorem, which prevents an arbitrary qubit from being delivered to multiple recipients, although it can be transported from place to place (e.g. via quantum teleportation);
the no-hiding theorem, which demonstrates the conservation of quantum information.
These theorems prove that quantum information within the universe is conserved. They open up possibilities in quantum information processing. Quantum information processing The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations, or quantum gates, to it. These unitary transformations are described as rotations on the Bloch sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators. Due to the volatility of quantum systems and the impossibility of copying states, the storing of quantum information is much more difficult than storing classical information. Nevertheless, with the use of quantum error correction, quantum information can still be reliably stored in principle. The existence of quantum error correcting codes has also led to the possibility of fault-tolerant quantum computation. Classical bits can be encoded into and subsequently retrieved from configurations of qubits, through the use of quantum gates. By itself, a single qubit can convey no more than one bit of accessible classical information about its preparation. This is Holevo's theorem. However, in superdense coding a sender, by acting on one of two entangled qubits, can convey two bits of accessible information about their joint state to a receiver. Quantum information can be moved about, in a quantum channel, analogous to the concept of a classical communications channel. Quantum messages have a finite size, measured in qubits; quantum channels have a finite channel capacity, measured in qubits per second. Quantum information, and changes in quantum information, can be quantitatively measured by using an analogue of Shannon entropy, called the von Neumann entropy. In some cases quantum algorithms can be used to perform computations faster than any known classical algorithm. The most famous example of this is Shor's algorithm, which can factor numbers in polynomial time, compared to the best classical algorithms that take sub-exponential time. As factorization is an important part of the safety of RSA encryption, Shor's algorithm sparked the new field of post-quantum cryptography that tries to find encryption schemes that remain safe even when quantum computers are in play.
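As a small concrete illustration of the von Neumann entropy mentioned above, the following Python sketch (using numpy; not part of the original text) computes it for two single-qubit density matrices, taking the base-2 logarithm so the answer is in bits:

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho.
    evals = np.linalg.eigvalsh(rho)  # rho is Hermitian
    evals = evals[evals > 1e-12]     # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # the state |0><0|
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit

print(von_neumann_entropy(pure))   # 0.0 bits: a pure state carries no uncertainty
print(von_neumann_entropy(mixed))  # 1.0 bit: the quantum analogue of a fair coin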
Other examples of algorithms that demonstrate quantum supremacy include Grover's search algorithm, where the quantum algorithm gives a quadratic speed-up over the best possible classical algorithm. The complexity class of problems efficiently solvable by a quantum computer is known as BQP. Quantum key distribution (QKD) allows unconditionally secure transmission of classical information, unlike classical encryption, which can always be broken in principle, if not in practice. Note that certain subtle points regarding the security of QKD are still hotly debated. The study of all of the above topics and differences comprises quantum information theory. Relation to quantum mechanics Quantum mechanics is the study of how microscopic physical systems change dynamically in nature. In the field of quantum information theory, the quantum systems studied are abstracted away from any real world counterpart. A qubit might for instance physically be a photon in a linear optical quantum computer, an ion in a trapped ion quantum computer, or it might be a large collection of atoms as in a superconducting quantum computer. Regardless of the physical implementation, the limits and features of qubits implied by quantum information theory hold as all these systems are mathematically described by the same apparatus of density matrices over the complex numbers. Another important difference from quantum mechanics is that, while quantum mechanics often studies infinite-dimensional systems such as a harmonic oscillator, quantum information theory is concerned with both continuous-variable systems and finite-dimensional systems. Entropy and information Entropy measures the uncertainty in the state of a physical system. Entropy can be studied from the point of view of both the classical and quantum information theories. Classical information theory Classical information is based on the concepts of information laid out by Claude Shannon. Classical information, in principle, can be stored as bits, i.e. in binary strings. Any system having two distinguishable states can serve as a bit. Shannon entropy Shannon entropy is the quantification of the information gained by measuring the value of a random variable. Another way of thinking about it is by looking at the uncertainty of a system prior to measurement. As a result, entropy, as pictured by Shannon, can be seen either as a measure of the uncertainty prior to making a measurement or as a measure of information gained after making said measurement. Shannon entropy, written as a functional of a discrete probability distribution $P = (p_1, \ldots, p_n)$ associated with events $x_1, \ldots, x_n$, can be seen as the average information associated with this set of events, in units of bits: $H(P) = -\sum_{i=1}^{n} p_i \log_2 p_i$. This definition of entropy can be used to quantify the physical resources required to store the output of an information source. The ways
processing techniques. Quantum information refers to both the technical definition in terms of von Neumann entropy and the general computational term. It is an interdisciplinary field that involves quantum mechanics, computer science, information theory, philosophy and cryptography among other fields. Its study is also relevant to disciplines such as cognitive science, psychology and neuroscience. Its main focus is extracting information from matter at the microscopic scale. Observation in science is one of the most important ways of acquiring information, and measurement is required in order to quantify the observation, making this crucial to the scientific method. In quantum mechanics, due to the uncertainty principle, non-commuting observables cannot be precisely measured simultaneously, as an eigenstate in one basis is not an eigenstate in the other basis. As both variables are not simultaneously well defined, a quantum state can never contain definitive information about both variables. Information is something that is encoded in the state of a quantum system; it is physical. While quantum mechanics deals with examining properties of matter at the microscopic level, quantum information science focuses on extracting information from those properties, and quantum computation manipulates and processes information – performs logical operations – using quantum information processing techniques. Quantum information, like classical information, can be processed using digital computers, transmitted from one location to another, manipulated with algorithms, and analyzed with computer science and mathematics. Just like the basic unit of classical information is the bit, quantum information deals with qubits. Quantum information can be measured using von Neumann entropy. Recently, the field of quantum computing has become an active research area because of its potential to disrupt modern computation, communication, and cryptography. History and development Development from fundamental quantum mechanics The history of quantum information theory began at the turn of the 20th century when classical physics was revolutionized into quantum physics. The theories of classical physics were predicting absurdities such as the ultraviolet catastrophe, or electrons spiraling into the nucleus. At first these problems were brushed aside by adding ad hoc hypotheses to classical physics. Soon, it became apparent that a new theory must be created in order to make sense of these absurdities, and the theory of quantum mechanics was born. Quantum mechanics was formulated by Schrödinger using wave mechanics and Heisenberg using matrix mechanics. The equivalence of these methods was proven later. Their formulations described the dynamics of microscopic systems but had several unsatisfactory aspects in describing measurement processes. Von Neumann formulated quantum theory using operator algebra in a way that it described measurement as well as dynamics. These studies emphasized the philosophical aspects of measurement rather than a quantitative approach to extracting information via measurements. See: Dynamical Pictures Development from communication In the 1960s, Stratonovich, Helstrom and Gordon proposed a formulation of optical communications using quantum mechanics. This was the first historical appearance of quantum information theory. They mainly studied error probabilities and channel capacities for communication.
Later, Holevo obtained an upper bound of communication speed in the transmission of a classical message via a quantum channel. Development from atomic physics and relativity In the 1970s, techniques for manipulating single-atom quantum states, such as the atom trap and the scanning tunneling microscope, began to be developed, making it possible to isolate single atoms and arrange them in arrays. Prior to these developments, precise control over single quantum systems was not possible, and experiments utilized coarser, simultaneous control over a large number of quantum systems. The development of viable single-state manipulation techniques led to increased interest in the field of quantum information and computation. In the 1980s, interest arose in whether it might be possible to use quantum effects to transmit information faster than light, which would violate Einstein's theory of relativity. If it were possible to clone an unknown quantum state, it would be possible to use entangled quantum states to transmit information faster than the speed of light, in violation of Einstein's theory. However, the no-cloning theorem showed that such cloning is impossible. The theorem was one of the earliest results of quantum information theory. Development from cryptography Despite all the excitement and interest over studying isolated quantum systems and trying to find a way to circumvent the theory of relativity, research in quantum information theory became stagnant in the 1980s. However, around the same time another avenue began to engage with quantum information and computation: cryptography. In a general sense, cryptography is the problem of doing communication or computation involving two or more parties who may not trust one another. Bennett and Brassard developed a communication channel on which it is impossible to eavesdrop without being detected, a way of communicating secretly at long distances using the BB84 quantum cryptographic protocol. The key idea was the use of the fundamental principle of quantum mechanics that observation disturbs the observed, so the introduction of an eavesdropper in a secure communication line will immediately let the two parties trying to communicate know of the presence of the eavesdropper. Development from computer science and mathematics Alan Turing's revolutionary idea of a programmable computer, the Turing machine, showed that any real-world computation can be translated into an equivalent computation involving a Turing machine. This is known as the Church–Turing thesis. Soon enough, the first computers were made and computer hardware grew at such a fast pace that the growth, through experience in production, was codified into an empirical relationship called Moore's law. This 'law' is a projective trend that states that the number of transistors in an integrated circuit doubles every two years. As transistors began to become smaller and smaller in order to pack more power per surface area, quantum effects started to show up in the electronics resulting in inadvertent interference. This led to the advent of quantum computing, which used quantum mechanics to design algorithms. At this point, quantum computers showed promise of being much faster than classical computers for certain specific problems. One such example problem was developed by David Deutsch and Richard Jozsa, known as the Deutsch–Jozsa algorithm. This problem however held little to no practical applications.
In 1994, Peter Shor found a quantum algorithm for a very important and practical problem: finding the prime factors of an integer. This problem, like the related discrete logarithm problem, can be solved efficiently on a quantum computer, while no efficient classical algorithm is known, showing that quantum computers can outperform the best known classical methods for certain problems. Development from information theory Around the time computer science was making a revolution, so was information theory and communication, through Claude Shannon. Shannon developed two fundamental theorems of information theory: the noiseless channel coding theorem and the noisy channel coding theorem. He also showed that error correcting codes could be used to protect information being sent. Quantum information theory also followed a similar trajectory: in 1995, Ben Schumacher made an analogue to Shannon's noiseless coding theorem using the qubit. A theory of error correction also developed, which allows quantum computers to make efficient computations regardless of noise, and to make reliable communication over noisy quantum channels. Qubits and information theory Quantum information differs strongly from classical information, epitomized by the bit, in many striking and unfamiliar ways. While the fundamental unit of classical information is the bit, the most basic unit of quantum information is the qubit. Classical information is measured using Shannon entropy, while the quantum mechanical analogue is von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix $\rho$, it is given by $S(\rho) = -\operatorname{Tr}(\rho \log_2 \rho)$. Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and the conditional quantum entropy. Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information, and despite the qubit state being continuous-valued, it is impossible to measure the value precisely. Five famous theorems describe the limits on manipulation of quantum information:
the no-teleportation theorem, which states that a qubit cannot be (wholly) converted into classical bits; that is, it cannot be fully "read";
the no-cloning theorem, which prevents an arbitrary qubit from being copied;
the no-deleting theorem, which prevents an arbitrary qubit from being deleted;
the no-broadcast theorem, which prevents an arbitrary qubit from being delivered to multiple recipients, although it can be transported from place to place (e.g. via quantum teleportation);
the no-hiding theorem, which demonstrates the conservation of quantum information.
These theorems prove that quantum information within the universe is conserved. They open up possibilities in quantum information processing. Quantum information processing The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations, or quantum gates, to it. These unitary transformations are described as rotations on the Bloch sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators. Due to the volatility of quantum systems and the impossibility of copying states, the storing of quantum information is much more difficult than storing classical information.
Nevertheless, with the use of quantum error correction, quantum information can still be reliably stored in principle. The existence of quantum error correcting codes has also led to the possibility of fault-tolerant quantum computation. Classical bits can be encoded into and subsequently retrieved from configurations of qubits, through the use of quantum gates. By itself, a single qubit can convey no more than one bit of accessible classical information about its preparation. This is Holevo's theorem. However, in superdense coding a sender, by acting on one of two entangled qubits, can convey two bits of accessible information about their joint state to a receiver. Quantum information can be moved about, in a quantum channel, analogous to the concept of a classical communications channel. Quantum messages have a finite size, measured in qubits; quantum channels have a finite channel capacity, measured in qubits per second. Quantum information, and changes in quantum information, can be quantitatively measured by using
sending in the quarterbacks with the play call from the sideline; Morton started in Super Bowl V, which his team lost, while Staubach started in Super Bowl VI the following year and won. Although Morton played most of the 1972 season due to an injury to Staubach, Staubach took back the starting job when he rallied the Cowboys in a come-from-behind win in the playoffs and Morton was subsequently traded; Staubach and Morton faced each other in Super Bowl XII. Another notable quarterback controversy involved the San Francisco 49ers, who had three capable starters: Joe Montana, Steve Young and Steve Bono. Montana suffered a season-ending injury that cost him the 1991 NFL season and was supplanted by Young. Young was injured midway through the season, but Bono held the starting job (despite Young's recovery) until Bono's own injury let Young reclaim it. Montana also missed most of the 1992 NFL season, making only one appearance, then was traded away at his request to take over as the starter for the Kansas City Chiefs; upon retirement, he was succeeded by Bono as the Chiefs' starting quarterback. Teams will often bring in a capable backup quarterback via the draft or a trade as competition or a potential replacement, threatening the starting quarterback's place on the team (see Two-quarterback system below). For instance, Drew Brees began his career with the San Diego Chargers but the team also drafted Philip Rivers; despite Brees initially retaining his starting job and being named Comeback Player of the Year, he was not re-signed after an injury and joined the New Orleans Saints as a free agent. Brees and Rivers both retired in 2021, each having been a starter for the Saints and Chargers, respectively, for over a decade. Aaron Rodgers was drafted by the Green Bay Packers as the eventual successor to Brett Favre, though Rodgers served in a backup role for a few years of development before the team gave him the starting job; as of the 2021 offseason, Rodgers had become disgruntled with the Packers, feeling disrespected that they had not informed him of their decision to draft quarterback Jordan Love. Similarly, Patrick Mahomes was selected by the Kansas City Chiefs to eventually supplant Alex Smith, with the latter willingly serving as a mentor. Trends and other roles In addition to their main role, quarterbacks are occasionally used in other roles. Most teams utilize a backup quarterback as their holder on placekicks. A benefit of using quarterbacks as holders is that it would be easier to pull off a fake field goal attempt, but many coaches prefer to use punters as holders because a punter will have far more time in practice sessions to work with the kicker than any quarterback would. In the Wildcat formation, where a halfback lines up behind the center and the quarterback lines up out wide, the quarterback can be used as a receiving target or a blocker. A more rare use for a quarterback is to punt the ball himself, a play known as a quick kick. Denver Broncos quarterback John Elway was known to perform quick kicks occasionally, typically when the Broncos were facing a third-and-long situation. Philadelphia Eagles quarterback Randall Cunningham, an All-America punter in college, was also known to punt the ball occasionally, and was assigned as the team's default punter for certain situations, such as when the team was backed up inside their own five-yard line.
As Roger Staubach's backup, Dallas Cowboys quarterback Danny White was also the team's punter, opening strategic possibilities for coach Tom Landry. Ascending to the starting role upon Staubach's retirement, White held his position as the team's punter for several seasons—a double duty he had performed to All-American standard at Arizona State University. White also had two touchdown receptions as a Dallas Cowboy, both from the halfback option. Special tactics If quarterbacks are uncomfortable with the formation the defense is using, they may call an audible change to their play. For example, if a quarterback receives the call to execute a running play, but he notices that the defense is ready to blitz—that is, to send additional defenders across the line of scrimmage in an attempt to tackle the quarterback or disrupt his ability to pass—the quarterback may want to change the play. To do this, the quarterback yells a special code, like "Blue 42" or "Texas 29", which tells the offense to switch to a specific play or formation. Quarterbacks can also "spike" (throw the football at the ground) to stop the official game clock. For example, if a team is down by a field goal with only seconds remaining, a quarterback may spike the ball to prevent the game clock from running out. This usually allows the field goal unit to come onto the field, or to attempt a final "Hail Mary" pass. However, if a team is winning, a quarterback can keep the clock running by kneeling after the snap. This is normally done when the opposing team has no timeouts and there is little time left in the game, as it allows a team to burn up the remaining time on the clock without risking a turnover or injury. Dual-threat quarterbacks A dual-threat quarterback possesses the skills and physique to run with the ball if necessary. With the rise of several blitz-heavy defensive schemes and increasingly faster defensive players, the importance of a mobile quarterback has been redefined. While arm power, accuracy, and pocket presence—the ability to successfully operate from within the "pocket" formed by his blockers—are still the most important quarterback virtues, the ability to elude or run past defenders creates an additional threat that allows greater flexibility in a team's passing and running game. Dual-threat quarterbacks have historically been more prolific at the college level. Typically, a quarterback with exceptional quickness is used in an option offense, which allows the quarterback to hand the ball off, run it himself or pitch it to a running back shadowing him to the outside. This type of offense forces defenders to commit to the running back up the middle, the quarterback around the end or the running back trailing the quarterback. It is then that the quarterback has the "option" to identify which matchup is most favorable to the offense as the play unfolds and exploit that defensive weakness. In the college game, many schools employ several plays that are designed for the quarterback to run with the ball. This is much less common in professional football, except for a quarterback sneak, a play that involves the quarterback diving forward behind the offensive line to gain a small amount of yardage, but there is still an emphasis on being mobile enough to escape a heavy pass rush.
Historically, high-profile dual-threat quarterbacks in the NFL were uncommon—among the notable exceptions were Steve Young and John Elway, who led their teams to one and five Super Bowl appearances respectively; and Michael Vick, whose rushing ability was a rarity in the early 2000s, although he never led his team to a Super Bowl. In the 2010s, quarterbacks with dual-threat capabilities have become more popular. Current NFL quarterbacks considered to be dual-threats include Russell Wilson, Lamar Jackson, and Josh Allen. Two-quarterback system Some teams employ a strategy that involves the use of more than one quarterback during the course of a game. This is more common at lower levels of football, such as high school or small college, but rare in major college or professional football. There are four circumstances in which a two-quarterback system may be used. The first is when a team is in the process of determining which quarterback will eventually be the starter, and may choose to use each quarterback for part of the game in order to compare the performances. For instance, the Seattle Seahawks' Pete Carroll used the preseason games in 2012 to select Russell Wilson as the starting quarterback over Matt Flynn and Tarvaris Jackson. The second is a starter–reliever system, in which the starting quarterback splits the regular season playing time with the backup quarterback, although the former will start playoff games. This strategy is rare, and was last seen in the NFL in the "WoodStrock" combination of Don Strock and David Woodley, which took the Miami Dolphins to the Epic in Miami in 1982 and Super Bowl XVII the following year. The starter–reliever system is distinct from a one-off situation in which a starter is benched in favor of the backup because the switch is part of the game plan (usually if the starter is playing poorly for that game), and the expectation is that the two players will assume the same roles game after game. The third is if a coach decides that the team has two quarterbacks who are equally effective and proceeds to rotate the quarterbacks at predetermined intervals, such as after each quarter or after each series. Southern California high school football team Corona Centennial operated this model during the 2014 football season, rotating quarterbacks after every series. In a game against the Chicago Bears in week 7 of the 1971 season, Dallas Cowboys head coach Tom Landry alternated Roger Staubach and Craig Morton on each play, sending in the quarterbacks with the playcall from the sideline. The fourth, still occasionally seen in major-college football, is the use of different quarterbacks in different game or down-and-distance situations. Generally this involves a running quarterback and a passing quarterback in an option or wishbone offense. In Canadian football, quarterback sneaks or other runs in short-yardage situations tend to be successful as a result of the distance between the offensive and defensive lines being one yard. Drew Tate, a quarterback for the Calgary Stampeders, was primarily used in short-yardage situations and led the CFL in rushing touchdowns during the 2014 season with 10 scores as the backup to Bo Levi Mitchell. This strategy had all but disappeared from professional American football, but returned to some extent with the advent of the "wildcat" offense. There is debate within football circles as to the effectiveness of the so-called "two-quarterback system". Many coaches and media personnel remain skeptical of the model. 
Teams such as USC (Southern California), OSU (Oklahoma State), Northwestern and the smaller West Georgia have utilized the two-quarterback system; West Georgia, for example, uses the system due to the skillsets of its quarterbacks. As recently as 2020, Oregon, who had two quarterbacks capable of starting (Boston College transfer Anthony Brown and sophomore Tyler Shough), utilized a similar tactic in the 2020 Pac-12 Football Championship Game, giving Shough the start but inserting the dual-threat Brown on short-yardage plays, red zone situations and the final drive of the game. Teams use this approach for the advantages it gives them against opposing defenses, which cannot easily adjust to their game plan.
A similar set of changes was later adopted in Canadian football as part of the Burnside rules, a set of rules proposed by John Meldrum "Thrift" Burnside, the captain of the University of Toronto's football team. The change from a scrummage to a "scrimmage" made it easier for teams to decide what plays they would run before the snap. At first, the captains of college teams were put in charge of play calling, indicating with shouted codes which players would run with the ball and how the men on the line were supposed to block. Yale later used visual signals, including adjustments of the captain's knit hat, to call plays. Centers could also signal plays based on the alignment of the ball before the snap. In 1888, however, Princeton University began to have its quarterback call plays using number signals. That system caught on, and quarterbacks began to act as directors and organizers of offensive play. Early on, quarterbacks were used in a variety of formations. Harvard's team put seven men on the line of scrimmage, with three halfbacks who alternated at quarterback and a lone fullback. Princeton put six men on the line and had one designated quarterback, while Yale used seven linemen, one quarterback and two halfbacks who lined up on either side of the fullback. This was the origin of the T-formation, an offensive set that remained in use for many decades afterward and gained popularity in professional football starting in the 1930s. In 1906, the forward pass was legalized in American football; Canadian football did not adopt the forward pass until 1929. Despite the legalization of the forward pass, the most popular formations of the early 20th century focused mostly on the rushing game. The single-wing formation, a run-oriented offensive set, was invented by football coach Glenn "Pop" Warner around 1908. In the single-wing, the quarterback was positioned behind the line of scrimmage and was flanked by a tailback, fullback and wingback. He served largely as a blocking back; the tailback typically took the snap, either running forward with the ball or making a lateral pass to one of the other players in the backfield. The quarterback's job was usually to make blocks upfield to help the tailback or fullback gain yards. Passing plays were rare in the single-wing, an unbalanced power formation in which four linemen lined up to one side of the center and two lined up to the other. The tailback was the focus of the offense, and was often a triple-threat man who would either pass, run or kick the ball. Offensive playcalling continued to focus on rushing up through the 1920s, when professional leagues began to challenge the popularity of college football. In the early days of the professional National Football League (NFL), which was founded in 1920, games were largely low-scoring affairs. Two-thirds of all games in the 1920s were shutouts, and quarterbacks/tailbacks usually passed only out of desperation. In addition to a reluctance to risk turnovers by passing, various rules existed that limited the effectiveness of the forward pass: passers were required to drop back five yards behind the line of scrimmage before they could attempt a pass, and incomplete passes in the end zone resulted in a change of possession and a touchback. Additionally, the rules required the ball to be snapped from the location on the field where it was ruled dead; if a play ended with a player going out of bounds, the center had to snap the ball from the sideline, an awkward place to start a play.
Despite these constraints, player-coach Curly Lambeau of the Green Bay Packers, along with several other NFL figures of his era, was a consistent proponent of the forward pass.
season in which every game is vitally important". Most consistently successful NFL teams (for instance, those with multiple Super Bowl appearances within a short period of time) have been centered around a single starting quarterback; the one exception was the Washington Redskins under head coach Joe Gibbs, who won three Super Bowls with three different starting quarterbacks from 1982 to 1991. Many of these NFL dynasties ended with the departure of their starting quarterback. On a team's defense, the middle linebacker is regarded as the "quarterback of the defense" and is often the defensive leader, since he must be as smart as he is athletic. The middle linebacker (MLB), sometimes known as the "Mike", is the only inside linebacker in the 4–3 scheme. Backup Compared to other positions in gridiron football, the backup quarterback gets considerably less playing time than the starting quarterback. While players at many other positions may rotate in and out during a game, and even a starter at most other positions rarely plays every snap, a team's starting quarterback often remains in the game for every play, often to give the team consistent leadership. That means that even a team's primary backup may go an entire season without a meaningful offensive snap. While their primary role may be to be available in case of injury to the starter, the backup quarterback may also have additional roles, such as serving as a holder on placekicks or as a punter, and will often play a key role in practice, serving as the upcoming opponent's quarterback during the preceding week's practices. A backup quarterback may also be put in during "garbage time" (when the score is so lopsided and the time left in the game is so short that the final outcome cannot realistically be changed), or start a meaningless late-season game (either the team has been eliminated from the postseason, or the playoff seeding cannot be affected), in order to ensure the starting quarterback does not needlessly risk an injury. Backup quarterbacks typically have the career of a journeyman quarterback and have short stints with multiple teams, a notable exception being Frank Reich, who backed up Jim Kelly for nine years at the Buffalo Bills. A quarterback controversy results when a team has two capable quarterbacks competing for the starting position. In the early 1970s, the Dallas Cowboys had such a controversy between Roger Staubach and Craig Morton (at one point, as noted above, head coach Tom Landry alternated the two on each play); Morton started in Super Bowl V, which his team lost, while Staubach started in Super Bowl VI the following year and won. Although Morton played most of the 1972 season due to an injury to Staubach, Staubach took back the starting job when he rallied the Cowboys in a come-from-behind win in the playoffs, and Morton was subsequently traded; Staubach and Morton faced each other in Super Bowl XII. Another notable quarterback controversy involved the San Francisco 49ers, who had three capable starters: Joe Montana, Steve Young and Steve Bono. Montana suffered a season-ending injury that cost him the 1991 NFL season and was supplanted by Young. Young was injured midway through the season, but Bono held the starting job (despite Young's recovery) until Bono's own injury let Young reclaim it. Montana also missed most of the 1992 NFL season, making only one appearance, then was traded away at his request to take over as the starter for the Kansas City Chiefs; upon retirement, he was succeeded by Bono as the Chiefs' starting quarterback.
Teams will often bring in a capable backup quarterback via the draft or a trade as competition for, or a potential replacement of, the starting quarterback, a move that can threaten the starter's place on the team (see the two-quarterback system above). For instance, Drew Brees began his career with the San Diego Chargers, but the team also drafted Philip Rivers; despite Brees initially retaining his starting job and being the Comeback Player of the Year, he was not re-signed after an injury and joined the New Orleans Saints as a free agent. Brees and Rivers both retired in 2021, each having been a starter for the Saints and Chargers, respectively, for over a decade. Aaron Rodgers was drafted by the Green Bay Packers as the eventual successor to Brett Favre, though Rodgers served in a backup role for a few years to develop sufficiently before the team gave him the starting job; as of the 2021 offseason, Rodgers had reportedly become disgruntled with the Packers, feeling disrespected that he was not informed before the team drafted quarterback Jordan Love. Similarly, Patrick Mahomes was selected by the Kansas City Chiefs to eventually supplant Alex Smith, with the latter willingly serving as a mentor. Trends and other roles In addition to their main role, quarterbacks are occasionally used in other roles. Most teams utilize a backup quarterback as their holder on placekicks. A benefit of using quarterbacks as holders is that it would be easier to pull off a fake field goal attempt, but many coaches prefer to use punters as holders because a punter will have far more time in practice sessions to work with the kicker than any quarterback would. In the Wildcat formation, where a halfback lines up behind the center and the quarterback lines up out wide, the quarterback can be used as a receiving target or a blocker. A rarer use for a quarterback is to punt the ball himself, a play known as a quick kick. Denver Broncos quarterback John Elway was known to perform quick kicks occasionally, typically when the Broncos were facing a third-and-long situation. Philadelphia Eagles quarterback Randall Cunningham, an All-America punter in college, was also known to punt the ball occasionally, and was assigned as the team's default punter for certain situations, such as when the team was backed up inside its own five-yard line. As Roger Staubach's backup, Dallas Cowboys quarterback Danny White was also the team's punter, opening strategic possibilities for coach Tom Landry. Ascending to the starting role upon Staubach's retirement, White held his position as the team's punter for several seasons—a double duty he had performed to All-American standard at Arizona State University. White also had two touchdown receptions as a Dallas Cowboy, both from the halfback option. Special tactics If quarterbacks are uncomfortable with the formation the defense is using, they may call an audible to change the play. For example, if a quarterback receives the call to execute a running play, but he notices that the defense is ready to blitz—that is, to send additional defenders across the line of scrimmage in an attempt to tackle the quarterback or disrupt his ability to pass—the quarterback may want to change the play. To do this, the quarterback yells a special code, like "Blue 42" or "Texas 29", which tells the offense to switch to a specific play or formation. Quarterbacks can also "spike" (throw the football at the ground) to stop the official game clock.
For example, if a team is down by a field goal with only seconds remaining, a quarterback may spike the ball to prevent the game clock from running out. This usually allows the field goal unit to come onto the field, or attempt a final "Hail Mary pass". However, if a team is winning, a quarterback can keep the clock running by kneeling after the snap. This is normally done when the opposing team has no timeouts and there is little time left in the game, as it allows a team to burn up the remaining time on the clock without risking a turnover or injury. Dual-threat quarterbacks A dual-threat quarterback possesses the skills and physique to run with the ball if necessary. With the rise of several blitz-heavy defensive schemes and increasingly faster defensive players, the importance of a mobile quarterback has been redefined. While arm power, accuracy, and pocket presence—the ability to successfully operate from within the "pocket" formed by his blockers—are still the most important quarterback virtues, the ability to elude or run past defenders creates an additional threat that allows greater flexibility in a team's passing and running game. Dual-threat quarterbacks have historically been more prolific at the college level. Typically, a quarterback with exceptional quickness is used in an option offense, which allows the quarterback to hand the ball off, run it himself or pitch it to a running back shadowing him to the outside. This type of offense forces defenders to commit to the running back up the middle, the quarterback around the end or the running back trailing the quarterback. It is then that the quarterback has the "option" to identify which matchup is most favorable to the offense as the play unfolds and exploit that defensive weakness. In the college game, many schools employ several plays that are designed for the quarterback to run with the ball. This is much less common in professional football, except for a quarterback sneak, a play that involves the quarterback diving forward behind the offensive line to gain a small amount of yardage, but there is still an emphasis on being mobile enough to escape a heavy pass rush.
In any convex quadrilateral, the sum of the squares of the four sides equals the sum of the squares of the two diagonals plus four times the square of the line segment connecting the midpoints of the diagonals. This is sometimes known as Euler's quadrilateral theorem and is a generalization of the parallelogram law. The German mathematician Carl Anton Bretschneider derived in 1842 the following generalization of Ptolemy's theorem, regarding the product of the diagonals in a convex quadrilateral This relation can be considered to be a law of cosines for a quadrilateral. In a cyclic quadrilateral, where A + C = 180°, it reduces to pq = ac + bd. Since cos (A + C) ≥ −1, it also gives a proof of Ptolemy's inequality. Other metric relations If X and Y are the feet of the normals from B and D to the diagonal AC = p in a convex quadrilateral ABCD with sides a = AB, b = BC, c = CD, d = DA, then In a convex quadrilateral ABCD with sides a = AB, b = BC, c = CD, d = DA, and where the diagonals intersect at E, where e = AE, f = BE, g = CE, and h = DE. The shape and size of a convex quadrilateral are fully determined by the lengths of its sides in sequence and of one diagonal between two specified vertices. The two diagonals p, q and the four side lengths a, b, c, d of a quadrilateral are related by the Cayley–Menger determinant, as follows: Angle bisectors The internal angle bisectors of a convex quadrilateral either form a cyclic quadrilateral (that is, the four intersection points of adjacent angle bisectors are concyclic) or they are concurrent. In the latter case the quadrilateral is a tangential quadrilateral. In quadrilateral ABCD, if the angle bisectors of A and C meet on diagonal BD, then the angle bisectors of B and D meet on diagonal AC. Bimedians The bimedians of a quadrilateral are the line segments connecting the midpoints of the opposite sides. The intersection of the bimedians is the centroid of the vertices of the quadrilateral. The midpoints of the sides of any quadrilateral (convex, concave or crossed) are the vertices of a parallelogram called the Varignon parallelogram. It has the following properties: Each pair of opposite sides of the Varignon parallelogram are parallel to a diagonal in the original quadrilateral. A side of the Varignon parallelogram is half as long as the diagonal in the original quadrilateral it is parallel to. The area of the Varignon parallelogram equals half the area of the original quadrilateral. This is true in convex, concave and crossed quadrilaterals provided the area of the latter is defined to be the difference of the areas of the two triangles it is composed of. The perimeter of the Varignon parallelogram equals the sum of the diagonals of the original quadrilateral. The diagonals of the Varignon parallelogram are the bimedians of the original quadrilateral. The two bimedians in a quadrilateral and the line segment joining the midpoints of the diagonals in that quadrilateral are concurrent and are all bisected by their point of intersection. In a convex quadrilateral with sides a, b, c and d, the length of the bimedian that connects the midpoints of the sides a and c is where p and q are the lengths of the diagonals. The length of the bimedian that connects the midpoints of the sides b and d is Hence This is also a corollary to the parallelogram law applied in the Varignon parallelogram. The lengths of the bimedians can also be expressed in terms of two opposite sides and the distance x between the midpoints of the diagonals. This is possible when using Euler's quadrilateral theorem in the above formulas. Whence and Note that the two opposite sides in these formulas are not the two that the bimedian connects.
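Written out in standard notation (sides a, b, c, d in sequence, diagonals p and q, bimedians m and n, and x the distance between the midpoints of the diagonals; this labeling is the conventional one and is assumed here), the relations just described read:

\[
a^2 + b^2 + c^2 + d^2 = p^2 + q^2 + 4x^2 \qquad \text{(Euler's quadrilateral theorem)},
\]
\[
m = \tfrac{1}{2}\sqrt{-a^2 + b^2 - c^2 + d^2 + p^2 + q^2}, \qquad
n = \tfrac{1}{2}\sqrt{a^2 - b^2 + c^2 - d^2 + p^2 + q^2},
\]

whence p² + q² = 2(m² + n²), and, eliminating p and q with Euler's theorem,

\[
m = \tfrac{1}{2}\sqrt{2(b^2 + d^2) - 4x^2}, \qquad
n = \tfrac{1}{2}\sqrt{2(a^2 + c^2) - 4x^2}.
\]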
In a convex quadrilateral, there is the following dual connection between the bimedians and the diagonals: The two bimedians have equal length if and only if the two diagonals are perpendicular. The two bimedians are perpendicular if and only if the two diagonals have equal length. Trigonometric identities The four angles of a simple quadrilateral ABCD satisfy the following identities: and Also, In the last two formulas, no angle is allowed to be a right angle, since tan 90° is not defined. Let a, b, c, d be the sides of a non-crossing quadrilateral, s the semiperimeter, and A and C a pair of opposite angles; then and . These identities can be used to derive Bretschneider's formula. Inequalities Area If a convex quadrilateral has the consecutive sides a, b, c, d and the diagonals p, q, then its area K satisfies with equality only for a rectangle. with equality only for a square. with equality only if the diagonals are perpendicular and equal. with equality only for a rectangle. From Bretschneider's formula it directly follows that the area of a quadrilateral satisfies with equality if and only if the quadrilateral is cyclic or degenerate such that one side is equal to the sum of the other three (it has collapsed into a line segment, so the area is zero). The area of any quadrilateral also satisfies the inequality Denoting the perimeter as L, we have with equality only in the case of a square. The area of a convex quadrilateral also satisfies for diagonal lengths p and q, with equality if and only if the diagonals are perpendicular. Let a, b, c, d be the lengths of the sides of a convex quadrilateral ABCD with the area K and diagonals AC = p, BD = q. Then with equality only for a square. Let a, b, c, d be the lengths of the sides of a convex quadrilateral ABCD with the area K, then the following inequality holds: with equality only for a square. Diagonals and bimedians A corollary to Euler's quadrilateral theorem is the inequality where equality holds if and only if the quadrilateral is a parallelogram. Euler also generalized Ptolemy's theorem, which is an equality in a cyclic quadrilateral, into an inequality for a convex quadrilateral. It states that where there is equality if and only if the quadrilateral is cyclic. This is often called Ptolemy's inequality. In any convex quadrilateral the bimedians m, n and the diagonals p, q are related by the inequality with equality holding if and only if the diagonals are equal. This follows directly from the quadrilateral identity Sides The sides a, b, c, and d of any quadrilateral satisfy and Maximum and minimum properties Among all quadrilaterals with a given perimeter, the one with the largest area is the square. This is called the isoperimetric theorem for quadrilaterals. It is a direct consequence of the area inequality where K is the area of a convex quadrilateral with perimeter L. Equality holds if and only if the quadrilateral is a square. The dual theorem states that of all quadrilaterals with a given area, the square has the shortest perimeter. The quadrilateral with given side lengths that has the maximum area is the cyclic quadrilateral. Of all convex quadrilaterals with given diagonals, the orthodiagonal quadrilateral has the largest area. This is a direct consequence of the fact that the area of a convex quadrilateral satisfies where θ is the angle between the diagonals p and q. Equality holds if and only if θ = 90°.
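For reference, a few of the area bounds discussed above, written out explicitly (standard statements, with K the area, s the semiperimeter and L the perimeter; the notation is assumed as before):

\[
K \le \tfrac{1}{4}(a + c)(b + d), \qquad
K \le \sqrt{(s-a)(s-b)(s-c)(s-d)}, \qquad
K \le \tfrac{1}{2}\,pq, \qquad
K \le \tfrac{L^2}{16},
\]

with equality, respectively, only for a rectangle; exactly when the quadrilateral is cyclic (or degenerate); exactly when the diagonals are perpendicular; and only for a square.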
If P is an interior point in a convex quadrilateral ABCD, then From this inequality it follows that the point inside a quadrilateral that minimizes the sum of distances to the vertices is the intersection of the diagonals. Hence that point is the Fermat point of a convex quadrilateral. Remarkable points and lines in a convex quadrilateral The centre of a quadrilateral can be defined in several different ways. The "vertex centroid" comes from considering the quadrilateral as being empty but having equal masses at its vertices. The "side centroid" comes from considering the sides to have constant mass per unit length. The usual centre, called just centroid (centre of area), comes from considering the surface of the quadrilateral as having constant density. These three points are in general not all the same point; a numerical comparison of the vertex and area centroids is sketched below, after the taxonomy. The "vertex centroid" is the intersection of the two bimedians. As with any polygon, the x and y coordinates of the vertex centroid are the arithmetic means of the x and y coordinates of the vertices. The "area centroid" of quadrilateral ABCD can be constructed in the following way. Let Ga, Gb, Gc, Gd be the centroids of triangles BCD, ACD, ABD, ABC respectively. Then the "area centroid" is the intersection of the lines GaGc and GbGd. In a general convex quadrilateral ABCD, there are no natural analogies to the circumcenter and orthocenter of a triangle. But two such points can be constructed in the following way. Let Oa, Ob, Oc, Od be the circumcenters of triangles BCD, ACD, ABD, ABC respectively; and denote by Ha, Hb, Hc, Hd the orthocenters in the same triangles. Then the intersection of the lines OaOc and ObOd is called the quasicircumcenter, and the intersection of the lines HaHc and HbHd is called the quasiorthocenter of the convex quadrilateral. These points can be used to define an Euler line of a quadrilateral. In a convex quadrilateral, the quasiorthocenter H, the "area centroid" G, and the quasicircumcenter O are collinear in this order, and HG = 2GO. A quasi-nine-point center E can also be defined as the intersection of the lines EaEc and EbEd, where Ea, Eb, Ec, Ed are the nine-point centers of triangles BCD, ACD, ABD, ABC respectively. Then E is the midpoint of OH. Another remarkable line in a convex non-parallelogram quadrilateral is the Newton line, which connects the midpoints of the diagonals, the segment connecting these points being bisected by the vertex centroid. One more interesting line (in some sense dual to the Newton line) is the line connecting the point of intersection of the diagonals with the vertex centroid. This line is remarkable in that it contains the (area) centroid. The vertex centroid divides the segment connecting the intersection of diagonals and the (area) centroid in the ratio 3:1. For any quadrilateral ABCD with P and Q the intersections of AD and BC, and of AB and CD, respectively, the circles (PAB), (PCD), (QAD), and (QBC) pass through a common point M, called a Miquel point. For a convex quadrilateral ABCD in which E is the point of intersection of the diagonals and F is the point of intersection of the extensions of sides BC and AD, let ω be a circle through E and F which meets CB internally at M and DA internally at N. Let CA meet ω again at L and let DB meet ω again at K. Then there holds: the straight lines NK and ML intersect at point P that is located on the side AB; the straight lines NL and KM intersect at point Q that is located on the side CD.
Points P and Q are called "Pascal points" formed by circle ω on sides AB and CD. Other properties of convex quadrilaterals Let exterior squares be drawn on all sides of a quadrilateral. The segments connecting the centers of opposite squares are (a) equal in length, and (b) perpendicular. Thus these centers are the vertices of an orthodiagonal quadrilateral. This is called Van Aubel's theorem. For any simple quadrilateral with given edge lengths, there is a cyclic quadrilateral with the same edge lengths. The four smaller triangles formed by the diagonals and sides of a convex quadrilateral have the property that the product of the areas of two opposite triangles equals the product of the areas of the other two triangles. Taxonomy A hierarchical taxonomy of quadrilaterals is illustrated by the figure to the right. Lower classes are special cases of the higher classes they are connected to.
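To make the distinction between the vertex centroid and the area centroid described earlier concrete, here is a small Python sketch (function names are illustrative; the area centroid is computed by splitting the quadrilateral along a diagonal and area-weighting the two triangle centroids, which for a convex quadrilateral agrees with the GaGc/GbGd construction):

import numpy as np

def vertex_centroid(P):
    """Mean of the four vertices (equal point masses at the corners)."""
    return P.mean(axis=0)

def tri_area(a, b, c):
    """Signed area of triangle abc via the 2D cross product."""
    u, v = b - a, c - a
    return 0.5 * (u[0] * v[1] - u[1] * v[0])

def area_centroid(P):
    """Centroid of the surface of convex quadrilateral ABCD: split along
    diagonal AC and take the area-weighted mean of the triangle centroids."""
    A, B, C, D = P
    k1, k2 = tri_area(A, B, C), tri_area(A, C, D)
    g1, g2 = (A + B + C) / 3.0, (A + C + D) / 3.0
    return (k1 * g1 + k2 * g2) / (k1 + k2)

# A convex quadrilateral where the two centroids differ:
P = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 1.0], [0.0, 3.0]])
print(vertex_centroid(P))  # [2. 1.]
print(area_centroid(P))    # a slightly different point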
The area of a convex quadrilateral can be expressed in trigonometric terms as where the lengths of the diagonals are p and q and the angle between them is θ. In the case of an orthodiagonal quadrilateral (e.g. rhombus, square, and kite), this formula reduces to since θ is 90°. The area can also be expressed in terms of bimedians as where the lengths of the bimedians are m and n and the angle between them is φ. Bretschneider's formula expresses the area in terms of the sides and two opposite angles: where the sides in sequence are a, b, c, d, where s is the semiperimeter, and A and C are two (in fact, any two) opposite angles. This reduces to Brahmagupta's formula for the area of a cyclic quadrilateral—when A + C = 180°. Another area formula in terms of the sides and angles, with angle C being between sides b and c, and A being between sides a and d, is In the case of a cyclic quadrilateral, the latter formula becomes In a parallelogram, where both pairs of opposite sides and angles are equal, this formula reduces to Alternatively, we can write the area in terms of the sides and the intersection angle θ of the diagonals, as long as θ is not 90°: In the case of a parallelogram, the latter formula becomes Another area formula including the sides a, b, c, d is where x is the distance between the midpoints of the diagonals, and φ is the angle between the bimedians. The last trigonometric area formula including the sides a, b, c, d and the angle A (between a and d) is: which can also be used for the area of a concave quadrilateral (having the concave part opposite to angle A), by just changing the first sign + to −. Non-trigonometric formulas The following two formulas express the area in terms of the sides a, b, c and d, the semiperimeter s, and the diagonals p, q: The first reduces to Brahmagupta's formula in the cyclic quadrilateral case, since then pq = ac + bd. The area can also be expressed in terms of the bimedians m, n and the diagonals p, q: In fact, any three of the four values m, n, p, and q suffice for determination of the area, since in any quadrilateral the four values are related by The corresponding expressions are: if the lengths of two bimedians and one diagonal are given, and if the lengths of two diagonals and one bimedian are given. Vector formulas The area of a quadrilateral can be calculated using vectors. Let vectors AC and BD form the diagonals from A to C and from B to D. The area of the quadrilateral is then which is half the magnitude of the cross product of vectors AC and BD. In two-dimensional Euclidean space, expressing vector AC as a free vector in Cartesian space equal to (x₁, y₁) and BD as (x₂, y₂), this can be rewritten as: Diagonals Properties of the diagonals in quadrilaterals In the following table it is listed whether the diagonals in some of the most basic quadrilaterals bisect each other, whether their diagonals are perpendicular, and whether their diagonals have equal length. The list applies to the most general cases, and excludes named subsets. Note 1: The most general trapezoids and isosceles trapezoids do not have perpendicular diagonals, but there are infinitely many (non-similar) trapezoids and isosceles trapezoids that do have perpendicular diagonals and are not any other named quadrilateral. Note 2: In a kite, one diagonal bisects the other. The most general kite has unequal diagonals, but there are infinitely many (non-similar) kites in which the diagonals are equal in length (and the kites are not any other named quadrilateral). Lengths of the diagonals The lengths of the diagonals in a convex quadrilateral ABCD can be calculated using the law of cosines on each triangle formed by one diagonal and two sides of the quadrilateral.
Thus and Other, more symmetric formulas for the lengths of the diagonals, are and
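Written out in the conventional notation (an assumption, since the displayed formulas are not reproduced here), the law-of-cosines expressions for the diagonals and their more symmetric variants are:

\[
p = AC = \sqrt{a^2 + b^2 - 2ab\cos B} = \sqrt{c^2 + d^2 - 2cd\cos D},
\]
\[
q = BD = \sqrt{a^2 + d^2 - 2ad\cos A} = \sqrt{b^2 + c^2 - 2bc\cos C}.
\]

The vector area formula from the preceding passage is likewise easy to check numerically; a minimal Python sketch (vertex labels are the conventional ones):

import numpy as np

def quad_area(A, B, C, D):
    """Area of a convex quadrilateral ABCD as half the magnitude of the
    cross product of its diagonal vectors AC and BD."""
    x1, y1 = C - A   # diagonal from A to C as a free vector
    x2, y2 = D - B   # diagonal from B to D as a free vector
    return 0.5 * abs(x1 * y2 - x2 * y1)

# Unit square: diagonals of length sqrt(2) meeting at 90 degrees, so the
# trigonometric form (1/2) p q sin(theta) also gives 1.
A, B, C, D = map(np.array, ([0, 0], [1, 0], [1, 1], [0, 1]))
print(quad_area(A, B, C, D))  # 1.0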
The sender does not have to know the particular quantum state being transferred. Moreover, the location of the recipient can be unknown, but classical information needs to be sent from sender to receiver to complete the teleportation. Because classical information needs to be sent, teleportation cannot occur faster than the speed of light. One of the first scientific articles to investigate quantum teleportation is "Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels" published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters in 1993, in which they used dual communication methods to send/receive quantum information. It was experimentally realized in 1997 by two research groups, led by Sandu Popescu and Anton Zeilinger, respectively. Quantum teleportation has since been demonstrated with a variety of information carriers, including photons, atoms, electrons, and superconducting circuits, and over increasingly long distances, the record being 1,400 km (870 mi), achieved by the group of Jian-Wei Pan using the Micius satellite for space-based quantum teleportation. Non-technical summary In matters relating to quantum information theory, it is convenient to work with the simplest possible unit of information: the two-state system of the qubit. The qubit functions as the quantum analog of the classical unit of computation, the bit: a qubit can exist in a superposition of 0 and 1, whereas a classical bit can only be a 0 or a 1. Quantum teleportation seeks to transfer quantum information from one location to another without losing the information or degrading its quality. This process moves the information between carriers, not the carriers themselves, much like traditional communication, in which two parties remain stationary while the information (digital media, voice, text, etc.) is transferred, contrary to the implications of the word "teleport." The main components needed for teleportation include a sender, the information (a qubit), a traditional channel, a quantum channel, and a receiver. Notably, the sender does not need to know the exact contents of the information that is being sent. The measurement postulate of quantum mechanics—when a measurement is made upon a quantum state, the state "collapses" and the original superposition is lost—creates a constraint on teleportation: if the sender were simply to measure their qubit and transmit the result, the original state would be destroyed and the receiver could not reconstruct it faithfully. For actual teleportation, it is required that an entangled quantum state or Bell state be created for the qubit to be transferred. Entanglement imposes statistical correlations between otherwise distinct physical systems by creating or placing two or more separate particles into a single, shared quantum state. This intermediate state contains two particles whose quantum states are interdependent: the pair must be described as a single quantum state rather than as two independent ones, and the outcomes of measurements on the two particles are correlated no matter how far apart they are.
These correlations hold even when measurements are chosen and performed independently, out of causal contact from one another, as verified in Bell test experiments. Thus, an observation resulting from a measurement choice made at one point in spacetime seems to instantaneously affect outcomes in another region, even though light has not yet had time to travel the distance; a conclusion seemingly at odds with special relativity. This is known as the EPR paradox. However, such correlations can never be used to transmit any information faster than the speed of light, a statement encapsulated in the no-communication theorem. Thus, teleportation as a whole can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical information arrives. The sender will then prepare the particle (or information) in the qubit and combine it with one of the entangled particles of the intermediate state, causing a change of the entangled quantum state. The changed state of the entangled particle is then sent to an analyzer that will measure this change of the entangled state. The "change" measurement will allow the receiver to recreate the original information that the sender had, resulting in the information being teleported between two parties at different locations. Since the initial quantum information is "destroyed" as it becomes part of the entanglement state, the no-cloning theorem is maintained as the information is recreated from the entangled state and not copied during teleportation. The quantum channel is the communication mechanism that is used for all quantum information transmission and is the channel used for teleportation (the relationship of the quantum channel to the traditional communication channel is akin to that of the qubit to the classical bit). However, in addition to the quantum channel, a traditional channel must also be used to accompany a qubit to "preserve" the quantum information. When the change measurement between the original qubit and the entangled particle is made, the measurement result must be carried by a traditional channel so that the quantum information can be reconstructed and the receiver can get the original information. Because of this need for the traditional channel, the speed of teleportation can be no faster than the speed of light (hence the no-communication theorem is not violated). The main advantage of this is that Bell states can be shared using photons from lasers, making teleportation achievable through open space, with no need to send information through physical cables or optical fibers. Quantum states can be encoded in various degrees of freedom of atoms. For example, qubits can be encoded in the degrees of freedom of electrons surrounding the atomic nucleus or in the degrees of freedom of the nucleus itself. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The quantum states of single photons, photon modes, single atoms, atomic ensembles, defect centers in solids, single electrons, and superconducting circuits have been employed as information bearers. Understanding quantum teleportation requires a good grounding in finite-dimensional linear algebra, Hilbert spaces and projection matrices. A qubit is described using a two-dimensional complex-valued vector space (a Hilbert space), which is the primary setting for the formal manipulations given below.
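As a concrete illustration of this two-dimensional description, a minimal Python/numpy sketch (purely illustrative; the example amplitudes are made up):

import numpy as np

# Computational basis states |0> and |1> as vectors in C^2.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A generic qubit alpha|0> + beta|1>, normalized so |alpha|^2 + |beta|^2 = 1.
alpha, beta = 3 / 5, 4j / 5
psi = alpha * ket0 + beta * ket1
assert np.isclose(np.linalg.norm(psi), 1.0)

# Born rule: probabilities of the two measurement outcomes.
p0 = abs(np.vdot(ket0, psi)) ** 2   # 0.36
p1 = abs(np.vdot(ket1, psi)) ** 2   # 0.64
print(p0, p1)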
A working knowledge of quantum mechanics is not absolutely required to understand the mathematics of quantum teleportation, although without such acquaintance, the deeper meaning of the equations may remain quite mysterious. Protocol The resources required for quantum teleportation are a communication channel capable of transmitting two classical bits, a means of generating an entangled Bell state of qubits and distributing them to two different locations, a way of performing a Bell measurement on one of the Bell state qubits, and a way of manipulating the quantum state of the other qubit of the pair. Of course, there must also be some input qubit (in the quantum state |ψ⟩) to be teleported. The protocol is then as follows: A Bell state is generated with one qubit sent to location A and the other sent to location B. A Bell measurement of the Bell state qubit and the qubit to be teleported (|ψ⟩) is performed at location A. This yields one of four measurement outcomes which can be encoded in two classical bits of information. Both qubits at location A are then discarded. Using the classical channel, the two bits are sent from A to B. (This is the only potentially time-consuming step after step 1, since information transfer is limited by the speed of light.) As a result of the measurement performed at location A, the Bell state qubit at location B is in one of four possible states. Of these four possible states, one is identical to the original quantum state |ψ⟩, and the other three are closely related. The identity of the state actually obtained is encoded in two classical bits and sent to location B. The Bell state qubit at location B is then modified in one of three ways, or not at all, which results in a qubit identical to |ψ⟩, the state of the qubit that was chosen for teleportation. It is worth noting that the above protocol assumes that the qubits are individually addressable, meaning that the qubits are distinguishable and physically labeled. However, there can be situations where two identical qubits are indistinguishable due to the spatial overlap of their wave functions. Under this condition, the qubits cannot be individually controlled or measured. Nevertheless, a teleportation protocol analogous to that described above can still be (conditionally) implemented by exploiting two independently prepared qubits, with no need of an initial Bell state. This can be done by addressing the internal degrees of freedom of the qubits (e.g., spins or polarizations) through spatially localized measurements performed in separated regions A and B shared by the wave functions of the two indistinguishable qubits. Experimental results and records Work in 1998 verified the initial predictions, and the distance of teleportation was increased in August 2004 to 600 meters, using optical fiber. Subsequently, the record distance for quantum teleportation has been gradually increased and now stands at 143 km, set in open-air experiments between the two astronomical observatories of the Instituto de Astrofísica de Canarias in the Canary Islands. A record over optical fiber has also been set using superconducting nanowire detectors. For material systems, the record distance is far shorter. A variant of teleportation called "open-destination" teleportation, with receivers located at multiple locations, was demonstrated in 2004 using five-photon entanglement. Teleportation of a composite state of two single qubits has also been realized.
In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states. In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported. On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods. On 26 February 2015, scientists at the University of Science and Technology of China in Hefei, led by Chao-yang Lu and Jian-Wei Pan, carried out the first experiment teleporting multiple degrees of freedom of a quantum particle. They managed to teleport the quantum information from one ensemble of rubidium atoms to another ensemble of rubidium atoms using entangled photons. In 2016, researchers demonstrated quantum teleportation with two independent sources in the Hefei optical fiber network. In September 2016, researchers at the University of Calgary demonstrated quantum teleportation over the Calgary metropolitan fiber network. In December 2020, as part of the INQNET collaboration, researchers achieved quantum teleportation over a total distance of 44 km (27.3 mi) with fidelities exceeding 90%. Researchers have also successfully used quantum teleportation to transmit information between clouds of gas atoms, notable because the clouds of gas are macroscopic atomic ensembles. It is also possible to teleport logical operations; see quantum gate teleportation. In 2018, physicists at Yale demonstrated a deterministic teleported CNOT operation between logically encoded qubits. First proposed theoretically in 1993, quantum teleportation has since been demonstrated in many different guises. It has been carried out using two-level states of a single photon, a single atom and a trapped ion – among other quantum objects – and also using two photons. In 1997, two groups experimentally achieved quantum teleportation. The first group, led by Sandu Popescu, was based in Italy. An experimental group led by Anton Zeilinger followed a few months later. The results obtained from experiments done by Popescu's group concluded that classical channels alone could not replicate the teleportation of a linearly polarized state and an elliptically polarized state. The Bell state measurement distinguished between the four Bell states, which in the ideal case allows a 100% success rate of teleportation. Zeilinger's group produced a pair of entangled photons by implementing the process of parametric down-conversion. In order to ensure that the two photons cannot be distinguished by their arrival times, the photons were generated using a pulsed pump beam. The photons were then sent through narrow-bandwidth filters to produce a coherence time that is much longer than the length of the pump pulse. They then used two-photon interferometry to analyze the entanglement so that the quantum property could be recognized when it is transferred from one photon to the other. Photon 1 was polarized at 45° in the first experiment conducted by Zeilinger's group. Quantum teleportation is verified when both photons are detected in the state, which has a probability of 25%. Two detectors, f1 and f2, are placed behind the beam splitter, and recording the coincidence will identify the state.
If there is a coincidence between detectors f1 and f2, then photon 3 is predicted to be polarized at a 45° angle. Photon 3 is passed through a polarizing beam splitter that selects +45° and -45° polarization. If quantum teleportation has happened, only detector d2, which is at the +45° output, will register a detection. Detector d1, located at the -45° output, will not detect a photon. If there is a coincidence between d2f1f2, with the 45° analysis, and a lack of a d1f1f2 coincidence, with -45° analysis, it is proof that the information from the polarized photon 1 has been teleported to photon 3 using quantum
teleportation. Quantum teleportation over 143 km Zeilinger's group developed an experiment using active feed-forward in real time and two free-space optical links, quantum and classical, between the Canary Islands of La Palma and Tenerife, a distance of over 143 kilometers. In order to achieve teleportation, a frequency-uncorrelated polarization-entangled photon pair source, ultra-low-noise single-photon detectors and entanglement assisted clock synchronization were implemented. The two locations were entangled to share the auxiliary state: La Palma and Tenerife can be compared to the quantum characters Alice and Bob. Alice and Bob share the entangled state above, with photon 2 being with Alice and photon 3 being with Bob. A third party, Charlie, provides photon 1 (the input photon) which will be teleported to Alice in the generalized polarization state: where the complex numbers and are unknown to Alice or Bob. Alice will perform a Bell-state measurement (BSM) that randomly projects the two photons onto one of the four Bell states with each one having a probability of 25%. Photon 3 will be projected onto , the input state. Alice transmits the outcome of the BSM to Bob, via the classical channel, where Bob is able to apply the corresponding unitary operation to obtain photon 3 in the initial state of photon 1. Bob will not have to do anything if he detects the state. Bob will need to apply a phase shift to photon 3 between the horizontal and vertical component if the state is detected. The results of Zeilinger's group concluded that the average fidelity (overlap of the ideal teleported state with the measured density matrix) was 0.863 with a standard deviation of 0.038. The link attenuation during their experiments varied between 28.1 dB and 39.0 dB, which was a result of strong winds and rapid temperature changes. Despite the high loss in the quantum free-space channel, the average fidelity surpassed the classical limit of 2/3. Therefore, Zeilinger's group successfully demonstrated quantum teleportation over a distance of 143 km. Quantum teleportation across the Danube River In 2004, a quantum teleportation experiment was conducted across the Danube River in Vienna, a total of 600 meters.
An 800-meter-long optical fiber was installed in a public sewer system underneath the Danube River, and it was exposed to temperature changes and other environmental influences. Alice must perform a joint Bell state measurement (BSM) on photon b, the input photon, and photon c, her part of the entangled photon pair (photons c and d). Photon d, Bob's receiver photon, will contain all of the information on the input photon b, except for a phase rotation that depends on the state that Alice observed. This experiment implemented an active feed-forward system that sends Alice's measurement results via a classical microwave channel with a fast electro-optical modulator in order to exactly replicate Alice's input photon. The teleportation fidelity obtained from the linear polarization state at 45° varied between 0.84 and 0.90, which is well above the classical fidelity limit of 0.66. Deterministic quantum teleportation with atoms Three qubits are required for this process: the source qubit from the sender, the ancillary qubit, and the receiver's target qubit, which is maximally entangled with the ancillary qubit. For this experiment, calcium-40 (⁴⁰Ca⁺) ions were used as the qubits. Ions 2 and 3 are prepared in the Bell state . The state of ion 1 is prepared arbitrarily. The quantum states of ions 1 and 2 are measured by illuminating them with light at a specific wavelength. The obtained fidelities for this experiment ranged between 73% and 76%. This is larger than the maximum possible average fidelity of 66.7% that can be obtained using completely classical resources. Ground-to-satellite quantum teleportation The quantum state being teleported in this experiment is α|H⟩ + β|V⟩, where α and β are unknown complex numbers, |H⟩ represents the horizontal polarization state, and |V⟩ represents the vertical polarization state. The qubit prepared in this state is generated in a laboratory in Ngari, Tibet. The goal was to teleport the quantum information of the qubit to the Micius satellite, which was launched on August 16, 2016 at an altitude of around 500 km. When a Bell state measurement is conducted on photons 1 and 2 and the resulting state is |Φ⁺⟩, photon 3 carries this desired state. If the Bell state detected is |Φ⁻⟩, then a phase shift of π is applied to the state to get the desired quantum state. The distance between the ground station and the satellite changes from as little as 500 km to as large as 1,400 km. Because of the changing distance, the channel loss of the uplink varies between 41 dB and 52 dB. The average fidelity obtained from this experiment was 0.80 with a standard deviation of 0.01. Therefore, this experiment successfully established a ground-to-satellite uplink over a distance of 500–1,400 km using quantum teleportation. This is an essential step towards creating a global-scale quantum internet. Formal presentation There are a variety of ways in which the teleportation protocol can be written mathematically. Some are very compact but abstract, and some are verbose but straightforward and concrete. The presentation below is of the latter form: verbose, but has the benefit of showing each quantum state simply and directly. Later sections review more compact notations. The teleportation protocol begins with a quantum state or qubit |ψ⟩, in Alice's possession, that she wants to convey to Bob. This qubit can be written generally, in bra–ket notation, as |ψ⟩_C = α|0⟩_C + β|1⟩_C. The subscript C above is used only to distinguish this state from A and B, below. Next, the protocol requires that Alice and Bob share a maximally entangled state.
This state is fixed in advance, by mutual agreement between Alice and Bob, and can be any one of the four Bell states; it does not matter which one: |Φ⁺⟩ = (|00⟩ + |11⟩)/√2, |Φ⁻⟩ = (|00⟩ − |11⟩)/√2, |Ψ⁺⟩ = (|01⟩ + |10⟩)/√2, |Ψ⁻⟩ = (|01⟩ − |10⟩)/√2. In the following, assume that Alice and Bob share the state |Φ⁺⟩. Alice obtains one of the particles in the pair, with the other going to Bob. (This is implemented by preparing the particles together and shooting them to Alice and Bob from a common source.) The subscripts A and B in the entangled state refer to Alice's or Bob's particle. At this point, Alice has two particles (C, the one she wants to teleport, and A, one of the entangled pair), and Bob has one particle, B. In the total system, the state of these three particles is given by |ψ⟩_C ⊗ |Φ⁺⟩_AB. Alice will then make a local measurement in the Bell basis (i.e. the four Bell states) on the two particles in her possession. To make the result of her measurement clear, it is best to write the state of Alice's two qubits as superpositions of the Bell basis. This is done by using the following general identities, which are easily verified: |00⟩ = (|Φ⁺⟩ + |Φ⁻⟩)/√2, |01⟩ = (|Ψ⁺⟩ + |Ψ⁻⟩)/√2, |10⟩ = (|Ψ⁺⟩ − |Ψ⁻⟩)/√2, and |11⟩ = (|Φ⁺⟩ − |Φ⁻⟩)/√2. After expanding the expression for the three-particle state, one applies these identities to the qubits with A and C subscripts. In particular, |0⟩_C|0⟩_A = (|Φ⁺⟩_CA + |Φ⁻⟩_CA)/√2, and the other terms follow similarly. Combining similar terms, the total three-particle state of A, B and C together becomes the following four-term superposition: ½[ |Φ⁺⟩_CA ⊗ (α|0⟩ + β|1⟩)_B + |Φ⁻⟩_CA ⊗ (α|0⟩ − β|1⟩)_B + |Ψ⁺⟩_CA ⊗ (α|1⟩ + β|0⟩)_B + |Ψ⁻⟩_CA ⊗ (α|1⟩ − β|0⟩)_B ]. Note that all three particles are still in the same total state, since no operations have been performed. Rather, the above is just a change of basis on Alice's part of the system. The actual teleportation occurs when Alice measures her two qubits A, C, in the Bell basis. Equivalently, the measurement may be done in the computational basis {|0⟩, |1⟩}, by mapping each Bell state uniquely to one of |00⟩, |01⟩, |10⟩, |11⟩ with the quantum circuit in the figure to the right. Given the above expression, evidently the result of Alice's (local) measurement is that the three-particle state would collapse to one of the four terms above (with equal probability of obtaining each). Alice's two particles are now entangled to each other, in one of the four Bell states, and the entanglement originally shared between Alice's and Bob's particles is now broken. Bob's particle takes on one of the four superposition states shown above. Note how Bob's qubit is now in a state that resembles the state to be teleported. The four possible states for Bob's qubit are unitary images of the state to be teleported. The result of Alice's Bell measurement tells her which of the above four states the system is in. She can now send her result to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained. After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state |ψ⟩: If Alice indicates her result is |Φ⁺⟩, Bob knows his qubit is already in the desired state and does nothing. This amounts to the trivial unitary operation, the identity operator. If the message indicates |Ψ⁺⟩, Bob would send his qubit through the unitary quantum gate given by the Pauli matrix X to recover the state. If Alice's message corresponds to |Φ⁻⟩, Bob applies the Pauli Z gate to his qubit. Finally, for the remaining case |Ψ⁻⟩, the appropriate gate is given by ZX. Teleportation is thus achieved. The above-mentioned three gates correspond to rotations of π radians (180°) about appropriate axes (X, Y and Z) in the Bloch sphere picture of a qubit.
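The algebra above can be checked end to end with a short classical simulation of the state vectors. The following numpy sketch assumes the shared pair |Φ⁺⟩ and the correction table just derived (the random input state and the seed are arbitrary):

import numpy as np

rng = np.random.default_rng(7)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The input qubit |psi> = alpha|0> + beta|1> (random, normalized).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Alice and Bob share the Bell pair |Phi+> = (|00> + |11>)/sqrt(2).
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Qubit ordering (C, A, B): C is the input, A is Alice's half, B is Bob's.
state = np.kron(psi, phi_plus)

# Bell basis on (C, A) and Bob's corresponding corrections.
s = 1 / np.sqrt(2)
bell = [s * np.array([1, 0, 0, 1]),    # Phi+ -> I
        s * np.array([1, 0, 0, -1]),   # Phi- -> Z
        s * np.array([0, 1, 1, 0]),    # Psi+ -> X
        s * np.array([0, 1, -1, 0])]   # Psi- -> Z @ X
corrections = [I2, Z, X, Z @ X]

# Alice's Bell measurement: project (C, A) onto each Bell state; the
# leftover factor is Bob's (unnormalized) qubit.
probs, bob_states = [], []
for b in bell:
    bra = np.kron(b.conj().reshape(1, 4), I2)   # <b| on (C, A), identity on B
    bob = bra @ state
    probs.append(float(np.vdot(bob, bob).real))
    bob_states.append(bob)
print(np.round(probs, 3))                       # [0.25 0.25 0.25 0.25]

# Two classical bits select the outcome; Bob applies the matching fix-up.
k = rng.choice(4, p=probs)
out = corrections[k] @ (bob_states[k] / np.sqrt(probs[k]))

# The teleported qubit matches the input (fidelity 1, up to rounding).
print(abs(np.vdot(psi, out)) ** 2)              # 1.0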
Some remarks: After this operation, Bob's qubit will take on the state $|\psi\rangle = \alpha|0\rangle_B + \beta|1\rangle_B$, and Alice's qubit becomes an (undefined) part of an entangled state. Teleportation does not result in the copying of qubits, and hence is consistent with the no-cloning theorem. There is no transfer of matter or energy involved. Alice's particle has not been physically moved to Bob; only its state has been transferred. The term "teleportation", coined by Bennett, Brassard, Crépeau, Jozsa, Peres and Wootters, reflects the indistinguishability of quantum mechanical particles. For every qubit teleported, Alice needs to send Bob two classical bits of information. These two classical bits do not carry complete information about the qubit being teleported. If an eavesdropper intercepts the two bits, she may know exactly what Bob needs to do to recover the desired state; however, this information is useless if she cannot interact with the entangled particle in Bob's possession.
requires $2^n - 1$ complex numbers (or a single point in a $2^n$-dimensional vector space).

Standard representation In quantum mechanics, the general quantum state of a qubit can be represented by a linear superposition of its two orthonormal basis states (or basis vectors). These vectors are usually denoted as $|0\rangle$ and $|1\rangle$. They are written in the conventional Dirac—or "bra–ket"—notation; $|0\rangle$ and $|1\rangle$ are pronounced "ket 0" and "ket 1", respectively. These two orthonormal basis states, $\{|0\rangle, |1\rangle\}$, together called the computational basis, are said to span the two-dimensional linear vector (Hilbert) space of the qubit. Qubit basis states can also be combined to form product basis states. A set of qubits taken together is called a quantum register. For example, two qubits could be represented in a four-dimensional linear vector space spanned by the following product basis states: $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$. In general, n qubits are represented by a superposition state vector in $2^n$-dimensional Hilbert space.

Qubit states A pure qubit state is a coherent superposition of the basis states. This means that a single qubit can be described by a linear combination of $|0\rangle$ and $|1\rangle$:
$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,$
where α and β are the probability amplitudes, both complex numbers. When we measure this qubit in the standard basis, according to the Born rule, the probability of the outcome with value "0" is $|\alpha|^2$ and the probability of the outcome with value "1" is $|\beta|^2$. Because the absolute squares of the amplitudes equate to probabilities, it follows that α and β must be constrained according to the second axiom of probability theory by the equation
$|\alpha|^2 + |\beta|^2 = 1.$
The probability amplitudes α and β encode more than just the probabilities of the outcomes of a measurement; the relative phase between α and β is, for example, responsible for quantum interference, as seen in the two-slit experiment.

Bloch sphere representation It might, at first sight, seem that there should be four degrees of freedom in $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, as α and β are complex numbers with two degrees of freedom each. However, one degree of freedom is removed by the normalization constraint $|\alpha|^2 + |\beta|^2 = 1$. This means that, with a suitable change of coordinates, one can eliminate one of the degrees of freedom. One possible choice is that of Hopf coordinates:
$\alpha = e^{i\delta}\cos\frac{\theta}{2}, \qquad \beta = e^{i(\delta+\varphi)}\sin\frac{\theta}{2}.$
Additionally, for a single qubit the global phase $e^{i\delta}$ of the state has no physically observable consequences, so we can arbitrarily choose α to be real (or β in the case that α is zero), leaving just two degrees of freedom:
$|\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle,$
where $\varphi$ is the physically significant relative phase. The possible quantum states for a single qubit can be visualised using a Bloch sphere. Represented on such a 2-sphere, a classical bit could only be at the "North Pole" or the "South Pole", in the locations where $|0\rangle$ and $|1\rangle$ are respectively. This particular choice of the polar axis is arbitrary, however. The rest of the surface of the Bloch sphere is inaccessible to a classical bit, but a pure qubit state can be represented by any point on the surface. For example, the pure qubit state $(|0\rangle + |1\rangle)/\sqrt{2}$ would lie on the equator of the sphere at the positive X-axis. In the classical limit, a qubit, which can have quantum states anywhere on the Bloch sphere, reduces to the classical bit, which can be found only at either pole. The surface of the Bloch sphere is a two-dimensional space, which represents the observable state space of the pure qubit states. This state space has two local degrees of freedom, which can be represented by the two angles $\theta$ and $\varphi$.
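As an illustrative aside, not drawn from the original text, the reduction from four real degrees of freedom to the two Bloch angles can be made concrete in code. The function below is a hypothetical sketch: it normalizes an arbitrary amplitude pair (α, β), removes the unobservable global phase, and returns θ and φ as defined above.

```python
# Hedged sketch: recover the Bloch angles (theta, phi) from arbitrary
# complex amplitudes (alpha, beta), following the parameterization
# |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> given above.
import cmath
import math

def bloch_angles(alpha: complex, beta: complex) -> tuple[float, float]:
    # Enforce the normalization constraint |alpha|^2 + |beta|^2 = 1.
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    # Remove the physically unobservable global phase by making alpha real
    # (or beta, in the case that alpha is zero).
    phase = cmath.phase(alpha) if abs(alpha) > 1e-12 else cmath.phase(beta)
    alpha *= cmath.exp(-1j * phase)
    beta *= cmath.exp(-1j * phase)
    theta = 2 * math.acos(min(1.0, abs(alpha)))
    phi = cmath.phase(beta) if abs(beta) > 1e-12 else 0.0
    return theta, phi

# The state (|0> + |1>)/sqrt(2) lies on the equator at the positive X-axis:
print(bloch_angles(1, 1))  # -> (pi/2, 0.0), i.e. theta = 90 deg, phi = 0
```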
Mixed state A pure state is fully specified by a single ket, a coherent superposition, represented by a point on the Bloch sphere as described above. Coherence is essential for a qubit to be in a superposition state. With interactions and decoherence, it is possible to put the qubit in a mixed state, a statistical combination or "incoherent mixture" of different pure states. Mixed states can be represented by points inside the Bloch sphere (that is, in the Bloch ball). A mixed qubit state has three degrees of freedom: the angles $\theta$ and $\varphi$, as well as the length of the vector that represents the mixed state.

Operations on qubits There are various kinds of physical operations that can be performed on qubits. Quantum logic gates, building blocks for a quantum circuit in a quantum computer, operate on a set of qubits (a register); mathematically, the qubits undergo a (reversible) unitary transformation described by multiplying the quantum gate's unitary matrix with the quantum state vector. The result of this multiplication is a new quantum state. Quantum measurement is an irreversible operation in which information is gained about the state of a single qubit, and coherence is lost. The result of the measurement of a single qubit with the state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ will be either $|0\rangle$ with probability $|\alpha|^2$ or $|1\rangle$ with probability $|\beta|^2$. Measurement of the state of the qubit alters the magnitudes of α and β. For instance, if the result of the measurement is $|1\rangle$, α is changed to 0 and β is changed to a phase factor of unit modulus that is no longer experimentally accessible. If measurement is performed on a qubit that is entangled, the measurement may collapse the state of the other entangled qubits. Initialization or re-initialization to a known value, often $|0\rangle$, collapses the quantum state (exactly like measurement). Initialization to $|0\rangle$ may be implemented logically or physically: logically as a measurement, followed by the application of the Pauli-X gate if the result of the measurement was $|1\rangle$; physically, for example if it is a superconducting phase qubit, by lowering the energy of the quantum system to its ground state. Another operation is sending the qubit through a quantum channel to a remote system or machine (an I/O operation), potentially as part of a quantum network.

Quantum entanglement An important distinguishing feature between qubits and classical bits is that multiple qubits can exhibit quantum entanglement. Quantum entanglement is a nonlocal property of two or more
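A small sketch may help make the measurement rule above concrete. The following is illustrative only and assumes nothing beyond the Born rule as stated: the outcome is sampled with probabilities |α|² and |β|², after which the amplitudes collapse and only an experimentally inaccessible phase factor survives on the measured branch.

```python
# Illustrative sketch of single-qubit measurement in the standard basis:
# outcome "0" with probability |alpha|^2, "1" with probability |beta|^2,
# after which the state collapses and only a phase factor survives.
import random

def measure(alpha: complex, beta: complex) -> tuple[int, complex, complex]:
    p0 = abs(alpha) ** 2
    if random.random() < p0:
        # Collapse to |0>: beta becomes 0; alpha keeps only its phase.
        return 0, alpha / abs(alpha), 0j
    # Collapse to |1>: alpha becomes 0; beta keeps only its phase,
    # which is no longer experimentally accessible.
    return 1, 0j, beta / abs(beta)

outcome, a, b = measure(3 / 5, (4 / 5) * 1j)   # p0 = 0.36, p1 = 0.64
print(outcome, a, b)   # outcome varies from run to run, as it should
```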
qubits that allows a set of qubits to express higher correlation than is possible in classical systems. The simplest system to display quantum entanglement is the system of two qubits. Consider, for example, two entangled qubits in the Bell state
$|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle).$
In this state, called an equal superposition, there are equal probabilities of measuring either product state $|00\rangle$ or $|11\rangle$, as $|1/\sqrt{2}|^2 = 1/2$. In other words, there is no way to tell if the first qubit has value "0" or "1", and likewise for the second qubit. Imagine that these two entangled qubits are separated, with one each given to Alice and Bob. Alice makes a measurement of her qubit, obtaining—with equal probabilities—either $|0\rangle$ or $|1\rangle$, i.e., she can now tell if her qubit has value "0" or "1". Because of the qubits' entanglement, Bob must now get exactly the same measurement as Alice. For example, if she measures a $|0\rangle$, Bob must measure the same, as $|00\rangle$ is the only state where Alice's qubit is a $|0\rangle$. In short, for these two entangled qubits, whatever Alice measures, so would Bob, with perfect correlation, in any basis, however far apart they may be and even though neither can tell if their qubit has value "0" or "1" — a most surprising circumstance that cannot be explained by classical physics.

Controlled gate to construct the Bell state Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some specified operation. In particular, the controlled NOT gate (or CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is $|1\rangle$, and otherwise leaves it unchanged. With respect to the unentangled product basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$, it maps the basis states as follows:
$|00\rangle \mapsto |00\rangle$, $|01\rangle \mapsto |01\rangle$, $|10\rangle \mapsto |11\rangle$, $|11\rangle \mapsto |10\rangle$.
A common application of the CNOT gate is to maximally entangle two qubits into the $|\Phi^+\rangle$ Bell state. To construct $|\Phi^+\rangle$, the inputs A (control) and B (target) to the CNOT gate are $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)_A$ and $|0\rangle_B$. After applying CNOT, the output is the Bell state $\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.

Applications The $|\Phi^+\rangle$ Bell state forms part of the setup of the superdense coding, quantum teleportation, and entangled quantum cryptography algorithms. Quantum entanglement also allows multiple states (such as the Bell state mentioned above) to be acted on simultaneously, unlike classical bits that can only have one value at a time. Entanglement is a necessary ingredient of any quantum computation that cannot be done efficiently on a classical computer. Many of the successes of quantum computation and communication, such as quantum teleportation and superdense coding, make use of entanglement, suggesting that entanglement is a resource that is unique to quantum computation. A major hurdle facing quantum computing, as of 2018, in its quest to surpass classical digital computing, is noise in quantum gates that limits the size of quantum circuits that can be executed reliably.

Quantum register A number of qubits taken together is a qubit register. Quantum computers perform calculations by manipulating qubits within a register.

Qudits and qutrits The term qudit denotes the unit of quantum information that can be realized in suitable d-level quantum systems. A quantum register that can be measured to N states is identical to an N-level qudit.
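The CNOT construction just described can be checked directly with matrices. The sketch below is illustrative (the basis ordering and variable names are choices made here, not mandated by the text): a Hadamard gate puts the control qubit A into (|0⟩ + |1⟩)/√2, and applying CNOT to A ⊗ B yields the Bell state.

```python
# Illustrative check of the Bell-state construction described above:
# put the control qubit in (|0> + |1>)/sqrt(2), the target in |0>, apply
# CNOT, and recover (|00> + |11>)/sqrt(2). Basis order: 00, 01, 10, 11.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips target when control is |1>

ket0 = np.array([1, 0])
control = H @ ket0                  # (|0> + |1>)/sqrt(2)
state = np.kron(control, ket0)      # two-qubit input: A (control) tensor B
bell = CNOT @ state

print(bell)   # [0.707..., 0, 0, 0.707...] = (|00> + |11>)/sqrt(2)
```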
Qudits are similar to the integer types in classical computing, and may be mapped to (or realized by) arrays of qubits. Qudits where the d-level system is not a power of 2 cannot be mapped exactly to arrays of qubits. It is, for example, possible to have 5-level qudits. In 2017, scientists at the National Institute of Scientific Research constructed a pair of qudits with 10 different states each, giving more computational power than 6 qubits. Similar to the qubit, the qutrit is the unit of quantum information that can be realized in suitable 3-level quantum systems. This is analogous to the trit, the unit of classical information in ternary computers.

Physical implementations Any two-level quantum-mechanical system can be used as a qubit. Multilevel systems can be used as well, if they possess two states that can be effectively decoupled from the rest (e.g., the ground state and first excited state of a nonlinear oscillator). There are various proposals. Several physical implementations that approximate two-level systems to various degrees have been successfully realized. Similarly to a classical bit, where the state of a transistor in a processor, the magnetization of a surface in a hard disk and the presence of current in a cable can all be used to represent bits in the same computer, an eventual quantum computer is likely to use various combinations of qubits in its design. The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only.

Qubit storage In 2008 a team of scientists from the U.K. and U.S. reported the first relatively long (1.75 seconds) and coherent transfer of a superposition state from an electron spin "processing" qubit to a nuclear spin "memory" qubit. This event can be considered the first relatively consistent quantum data storage, a vital step towards the development of quantum computing. In 2013, a modification of similar systems (using charged rather than neutral donors) dramatically extended this time, to 3 hours at very low temperatures and 39 minutes at room temperature. Room temperature preparation of a qubit based on electron spins instead of nuclear spin was also demonstrated by a team of scientists from Switzerland and Australia. Increased coherence of qubits is being explored by researchers who are testing the limitations of a Ge hole spin-orbit qubit structure.

See also Ancilla bit Bell state, W state and GHZ state Bloch sphere Physical and logical qubits Two-state quantum system
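As a closing illustration of the qudit-to-qubit mapping discussed above (an aside, not from the original text), the dimension counting can be sketched in a few lines: ⌈log₂ d⌉ qubits are required, and the embedding is exact only when d is a power of two.

```python
# Illustrative dimension count for mapping a d-level qudit onto qubits:
# n = ceil(log2(d)) qubits give 2**n basis states, so the embedding is
# exact only when d is a power of two.
import math

def qubits_needed(d: int) -> int:
    return math.ceil(math.log2(d))

for d in (2, 3, 5, 10):
    n = qubits_needed(d)
    print(f"d={d}: {n} qubit(s), {2**n - d} unused basis state(s)")
# d=5 needs 3 qubits and leaves 3 of the 8 basis states unused, which is
# why a 5-level qudit cannot be mapped exactly to an array of qubits.
```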
and Yauyos–Chincha) have features of both Quechua I and Quechua II, and so are difficult to assign to either. Torero classifies them as the following: Quechua I or Quechua B, aka Central Quechua or Waywash, spoken in Peru's central highlands and coast. The most widely spoken varieties are Huaylas, Huaylla Wanca, and Conchucos. Quechua II or Quechua A or Peripheral Quechua or Wanp'una, divided into Yungay (Yunkay) Quechua or Quechua II A, spoken in the northern mountains of Peru; the most widely spoken dialect is Cajamarca. Northern Quechua or Quechua II B, spoken in Ecuador (Kichwa), northern Peru, and Colombia (Inga Kichwa). The most widely spoken varieties in this group are Chimborazo Highland Quichua and Imbabura Highland Quichua. Southern Quechua or Quechua II C, spoken in Bolivia, Chile, southern Peru and Northwest Argentina. The most widely spoken varieties are South Bolivian, Cusco, Ayacucho, and Puno (Collao). Willem Adelaar adheres to the Quechua I / Quechua II (central/peripheral) bifurcation. But, partially following later modifications by Torero, he reassigns part of Quechua II-A to Quechua I. Landerman (1991) does not believe a truly genetic classification is possible and divides Quechua II so that the family has four geographical–typological branches: Northern, North Peruvian, Central, and Southern. He includes Chachapoyas and Lamas in North Peruvian Quechua, so Ecuadorian is synonymous with Northern Quechua.

Geographical distribution Quechua I (Central Quechua, Waywash) is spoken in Peru's central highlands, from the Ancash Region to Huancayo. It is the most diverse branch of Quechua, to the extent that its divisions are commonly considered different languages. Quechua II (Peripheral Quechua, Wamp'una "Traveler") II-A: Yunkay Quechua (North Peruvian Quechua) is scattered in Peru's occidental highlands. II-B: Northern Quechua (also known as Runashimi or, especially in Ecuador, Kichwa) is mainly spoken in Colombia and Ecuador. It is also spoken in the Amazonian lowlands of Colombia and Ecuador, and in pockets in Peru. II-C: Southern Quechua, in the highlands further south, from Huancavelica through the Ayacucho, Cusco, and Puno regions of Peru, across much of Bolivia, and in pockets in north-western Argentina. It is the most influential branch, with the largest number of speakers and the most important cultural and literary legacy.

Cognates This is a sampling of words in several Quechuan languages:

Quechua and Aymara Quechua shares a large amount of vocabulary, and some striking structural parallels, with Aymara, and the two families have sometimes been grouped together as a "Quechumaran family". That hypothesis is generally rejected by specialists, however. The parallels are better explained by mutual influence and borrowing through intensive and long-term contact. Many Quechua–Aymara cognates are close, often closer than intra-Quechua cognates, and there is little relationship in the affixal system. The Puquina language of the Tiwanaku Empire is a possible source for some of the shared vocabulary between Quechua and Aymara.

Language contact Jolkesky (2016) notes that there are lexical similarities with the Kunza, Leko, Mapudungun, Mochika, Uru-Chipaya, Zaparo, Arawak, Kandoshi, Muniche, Pukina, Pano, Barbakoa, Cholon-Hibito, Jaqi, Jivaro, and Kawapana language families due to contact.
Vocabulary Quechua has borrowed a large number of Spanish words, such as piru (from pero, "but"), bwenu (from bueno, "good"), iskwila (from escuela, "school"), waka (from vaca, "cow") and wuru (from burro, "donkey"). A number of Quechua words have entered English and French via Spanish, including coca, condor, guano, jerky, llama, pampa, poncho, puma, quinine, quinoa, vicuña (vigogne in French), and, possibly, gaucho. The word lagniappe comes from the Quechuan word yapay "to increase, to add". The word first came into Spanish, then into Louisiana French, with the French or Spanish article la in front of it: la ñapa in Louisiana French or Creole, or la yapa in Spanish. A rare instance of a Quechua word being taken into general Spanish use is given by carpa for "tent" (Quechua karpa). The Quechua influence on Latin American Spanish includes such borrowings as papa "potato", chuchaqui "hangover" in Ecuador, and diverse borrowings for "altitude sickness": suruqch'i in Bolivia, sorojchi in Ecuador, and soroche in Peru. In Bolivia, particularly, Quechua words are used extensively even by non-Quechua speakers. These include wawa "baby, infant", ch'aki "hangover", misi "cat", juk'ucho "mouse", q'omer uchu "green pepper", jacu "let's go", chhiri and chhurco "curly haired", among many others. Quechua grammar also enters Bolivian Spanish, such as the use of the suffix -ri. In Bolivian Quechua, -ri is added to verbs to signify that an action is performed with affection or, in the imperative, as a rough equivalent to "please". In Bolivia, -ri is often included in the Spanish imperative to imply "please" or to soften commands. For example, the standard pásame "pass me [something]" becomes pasarime.

Etymology of Quechua At first, Spaniards referred to the language of the Inca empire as the lengua general, the general language. The name quichua was first used in 1560 by Domingo de Santo Tomás in his Grammatica o arte de la lengua general de los indios de los reynos del Perú. It is not known what name the native speakers gave to their language before colonial times and whether it was Spaniards who called it quechua. There are two possible etymologies of Quechua as the name of the language. There is a possibility that the name Quechua was derived from *qiĉ.wa, the native word which originally meant the "temperate valley" altitude ecological zone in the Andes (suitable for maize cultivation) and its inhabitants. Alternatively, Pedro Cieza de León and Inca Garcilaso de la Vega, the early Spanish chroniclers, mention the existence of a people called Quichua in the present Apurímac Region, and it could be inferred that their name was given to the entire language. The Hispanicised spellings Quechua and Quichua have been used in Peru and Bolivia since the 17th century, especially after the Third Council of Lima. Today, the various local pronunciations of "Quechua Simi" include , , , and . Another name that native speakers give to their own language is runa simi, "language of man/people"; it also seems to have emerged during the colonial period.

Phonology The description below applies to Cusco Quechua; there are significant differences in other varieties of Quechua.

Vowels Quechua has only three vowel phonemes: /a/, /i/ and /u/, as in Aymara (including Jaqaru). Monolingual speakers pronounce them as [æ], [ɪ], [ʊ] respectively, but Spanish realizations may also be found. When the vowels appear adjacent to the uvular consonants /q/, /qʼ/, and /qʰ/, they are rendered more like [ɑ], [ɛ], [ɔ], respectively.

Consonants Gemination of the tap /ɾ/ results in a trill [r].
About 30% of the modern Quechua vocabulary is borrowed from Spanish, and some Spanish sounds (such as /b/, /d/, /ɡ/ and /f/) may have become phonemic even among monolingual Quechua speakers. Voicing is not phonemic in Cusco Quechua. Cusco Quechua, North Bolivian Quechua, and South Bolivian Quechua are the only varieties to have glottalized consonants. They, along with certain kinds of Ecuadorian Kichwa, are the only varieties with aspirated consonants. Because reflexes of a given Proto-Quechua word may have different stops in neighboring dialects (Proto-Quechua *čaki 'foot' becomes č'aki, while čaka 'bridge' remains čaka), the glottalized and aspirated stops are thought to be innovations in Quechua, borrowed independently from Aymara after the varieties branched off from Proto-Quechua.

Stress Stress is penultimate in most dialects of Quechua. In some varieties, factors such as apocope of word-final vowels may cause exceptional final stress.

Orthography Quechua has been written using the Roman alphabet since the Spanish conquest of Peru. However, written Quechua is rarely used by Quechua speakers because of the limited amount of printed material in the language. Until the 20th century, Quechua was written with a Spanish-based orthography, for example Inca, Huayna Cápac, Collasuyo, Mama Ocllo, Viracocha, quipu, tambo, condor. This orthography is the most familiar to Spanish speakers, and so it has been used for most borrowings into English, which essentially always happen through Spanish. In 1975, the Peruvian government of Juan Velasco Alvarado adopted a new orthography for Quechua. This is the system preferred by the Academia Mayor de la Lengua Quechua, which results in the following spellings of the examples listed above: Inka, Wayna Qhapaq, Qollasuyu, Mama Oqllo, Wiraqocha, khipu, tampu, kuntur. This orthography has the following features: It uses w instead of hu for /w/. It distinguishes velar k from uvular q, both of which were spelled c or qu in the traditional system. It distinguishes simple, ejective, and aspirated stops in dialects that make these distinctions, such as that of the Cusco Region, e.g. the aspirated khipu 'knot'. It continues to use the Spanish five-vowel system. In 1985, a variation of this system was adopted by the Peruvian government that uses the Quechuan three-vowel system, resulting in the following spellings: Inka, Wayna Qhapaq, Qullasuyu, Mama Uqllu, Wiraqucha, khipu, tampu, kuntur. The different orthographies are still highly controversial in Peru. Advocates of the traditional system believe that the new orthographies look too foreign and that they make Quechua harder to learn for people who have first been exposed to written Spanish. Those who prefer the new system maintain that it better matches the phonology of Quechua, and they point to studies showing that teaching the five-vowel system to children later causes reading difficulties in Spanish. For more on this, see Quechuan and Aymaran spelling shift. Writers differ in the treatment of Spanish loanwords. These are sometimes adapted to the modern orthography and sometimes left as in Spanish. For instance, "I am Roberto" could be written Robertom kani or Ruwirtum kani. (The -m is not part of the name; it is an evidential suffix, showing how the information is known: firsthand, in this case.) The Peruvian linguist Rodolfo Cerrón Palomino has proposed an orthographic norm for all of Southern Quechua: this Standard Quechua (el Quechua estándar or Hanan Runasimi) conservatively integrates features of the two widespread dialects Ayacucho Quechua and Cusco Quechua.
The Spanish-based orthography is now in conflict with Peruvian law. According to article 20 of the decree Decreto Supremo No 004-2016-MC, which approves regulations relative to Law 29735, published in the official newspaper El Peruano on July 22, 2016, adequate spellings of toponyms in the normalized alphabets of the indigenous languages must progressively be proposed, with the aim of standardizing the spellings used by the National Geographic Institute (Instituto Geográfico Nacional, IGN). The IGN implements the necessary changes on the official maps of Peru.

Grammar

Morphological type
Quechua is an agglutinating language, meaning that words are built up from basic roots followed by several suffixes, each of which carries one meaning. The large number of suffixes changes both the overall meaning of words and their subtle shades of meaning. All varieties of Quechua are very regular agglutinative languages, as opposed to isolating or fusional ones [Thompson]. Their normal sentence order is SOV (subject–object–verb). Notable grammatical features include bipersonal conjugation (verbs agree with both subject and object), evidentiality (indication of the source and veracity of knowledge), a set of topic particles, and suffixes indicating who benefits from an action and the speaker's attitude toward it, although some varieties may lack some of these characteristics.

Pronouns In Quechua, there are seven pronouns. First-person plural pronouns (equivalent to "we") may be inclusive or exclusive, meaning, respectively, that the addressee ("you") is or is not part of the "we". Quechua also adds the suffix -kuna to the second and third person singular pronouns qam and pay to create the plural forms qam-kuna and pay-kuna. In Quechua IIB, or "Kichwa", the exclusive first-person plural pronoun ñuqayku is generally obsolete.

Adjectives Adjectives in Quechua are always placed before nouns. They lack gender and number and are not declined to agree with nouns.

Numbers Cardinal numbers: ch'usaq (0), huk (1), iskay (2), kimsa (3), tawa (4), pichqa (5), suqta (6), qanchis (7), pusaq (8), isqun (9), chunka (10), chunka hukniyuq (11), chunka iskayniyuq (12), iskay chunka (20), pachak (100), waranqa (1,000), hunu (1,000,000), lluna (1,000,000,000,000). Ordinal numbers: to form ordinal numbers, the word ñiqin is put after the appropriate cardinal number (iskay ñiqin = "second"). The only exception is that, in addition to huk ñiqin ("first"), the phrase ñawpaq is also used in the somewhat more restricted sense of "the initial, primordial, the oldest" (a toy sketch of this rule appears after this section).

Nouns Noun roots accept suffixes that indicate person (defining possession, not identity), number, and case. In general, the personal suffix precedes that of number. In the Santiago del Estero variety, however, the order is reversed. From variety to variety, suffixes may change.
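As a toy illustration of the ordinal rule described above (purely illustrative; the orthography follows the examples in the text, and the function names are invented):

```python
# Illustrative sketch only: a toy model of Quechua ordinal formation as
# described above. The rule and its single exception are taken from the
# passage; this is not a complete morphological model of the language.

CARDINALS = {1: "huk", 2: "iskay", 3: "kimsa", 4: "tawa", 5: "pichqa",
             6: "suqta", 7: "qanchis", 8: "pusaq", 9: "isqun", 10: "chunka"}

def ordinal(n: int) -> str:
    """Form an ordinal by placing 'ñiqin' after the cardinal number."""
    return f"{CARDINALS[n]} ñiqin"

assert ordinal(2) == "iskay ñiqin"   # "second", as in the example above
# 'huk ñiqin' is "first"; per the passage, 'ñawpaq' may also be used for
# "first" in the restricted sense of "initial, primordial, the oldest".
```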
Adverbs Adverbs can be formed by adding -ta or, in some cases, -lla to an adjective: allin – allinta ("good – well"), utqay – utqaylla ("quick – quickly"). They are also formed by adding suffixes to demonstratives: chay ("that") – chaypi ("there"), kay ("this") – kayman ("hither"). There are several original adverbs. For Europeans, it is striking that the adverb qhipa means both "behind" and "future", and ñawpa means both "ahead, in front" and "past". Local and temporal concepts of adverbs in Quechua (as well as in Aymara) are associated with each other in reverse, compared to European languages. For the speakers of Quechua, we are moving backwards into the future (we cannot see it: it is unknown), facing the past (we can see it: it is remembered).

Verbs The infinitive forms have the suffix -y (e.g., much'a 'kiss'; much'a-y 'to kiss'). These are the typical endings for the indicative in a Southern Quechua (IIC) dialect: The suffixes shown in the table above usually indicate the subject; the person of the object is also indicated by a suffix, which precedes the suffixes in the table. For the second person, it is -su-, and for the first person, it is -wa- in most Quechua II dialects. In such cases, the plural suffixes from the table (-chik and -ku) can be used to express the number of the object rather than the subject. There is a lot of variation between the dialects in the exact rules that determine this. In Central Quechua, however, the verbal morphology differs in a number of respects: most notably, the verbal plural suffixes -chik and -ku are not used, and plurality is expressed by different suffixes that are located before rather than after the personal suffixes. Furthermore, the first person singular object suffix is -ma-, rather than -wa-.

Grammatical particles Particles are indeclinable: they do not accept suffixes. They are relatively rare, but the most common are arí 'yes' and mana 'no', although mana can take some suffixes, such as -n/-m (manan/manam), -raq (manaraq 'not yet') and -chu (manachu? 'or not?'), to intensify the meaning. Other particles are yaw 'hey, hi', and certain loan words from Spanish, such as piru (from Spanish pero 'but') and sinuqa (from sino 'rather').

Evidentiality The Quechuan languages have three different morphemes that mark evidentiality. Evidentiality refers to a morpheme whose primary purpose is to indicate the source of information. In Quechuan languages, evidentiality is a three-term system: there are three evidential morphemes that mark varying levels of source information. The markers can apply to first, second, and third persons. The chart below depicts an example of these morphemes from Wanka Quechua (DIR: direct evidence; CONJ: conjecture). The parentheses around the vowels indicate that the vowel can be dropped when following an open vowel. For the sake of cohesiveness, the above forms are used to discuss the evidential morphemes. There are dialectal variations of these forms, and the variations are noted in the following descriptions. The following sentences provide examples of the three evidentials and further discuss the meaning behind each of them.

-m(i): Direct evidence and commitment Regional variations: In Cusco Quechua, the direct evidential presents itself as –mi and –n. The evidential –mi indicates that the speaker has a "strong personal conviction about the veracity of the circumstance expressed". It has its basis in direct personal experience.
Wanka Quechua

-chr(a): Inference and attenuation In Quechuan languages (the source does not specify which), the inference morpheme appears as -ch(i), -ch(a), or -chr(a). The -chr(a) evidential indicates that the utterance is an inference or a form of conjecture. That inference relays the speaker's non-commitment to the truth-value of the statement. It also appears in cases such as acquiescence, irony, interrogative constructions, and first person inferences. These uses constitute nonprototypical use and will be discussed later in the changes in meaning and other uses section.

Wanka Quechua

-sh(i): Hearsay Regional variations: It can appear as –sh(i) or –s(i) depending on the dialect. With the use of this morpheme, the speaker "serves as a conduit through which information from another source passes". The information being related is hearsay or revelatory in nature. It also works to express the uncertainty of the speaker regarding the situation. However, it also appears in other constructions that are discussed in the changes in meaning section.

Wanka Quechua

Hintz discusses an interesting case of evidential behavior found in the Sihuas dialect of Ancash Quechua. The author postulates that, instead of three single evidential markers, that variety of Quechua contains three pairs of evidential markers.

Affix or clitic The evidential morphemes have been referred to as markers or morphemes. The literature seems to differ on whether the evidential morphemes act as affixes or as clitics (in some cases, such as Wanka Quechua, enclitics). Lefebvre and Muysken (1988) discuss this issue in terms of case but remark that the line between affix and clitic is not clear. Both terms are used interchangeably throughout these sections.

Position in the sentence Evidentials in the Quechuan languages are "second position enclitics", which usually attach to the first constituent in the sentence, as shown in this example. They can, however, also occur on a focused constituent. Sometimes, the affix is described as attaching to the focus, particularly in the Tarma dialect of Yaru Quechua, but this does not hold true for all varieties of Quechua. In Huanuco Quechua, the evidentials may follow any number of topics, marked by the topic marker –qa, and the element carrying the evidential must precede the main verb or be the main verb itself. However, there are exceptions to that rule, and the more topics there are in a sentence, the more likely the sentence is to deviate from the usual pattern.

Changes in meaning and other uses Evidentials can be used to relay different meanings depending on the context, and they can perform other functions. The following examples are restricted to Wanka Quechua.

The direct evidential, -mi The direct evidential appears in wh-questions and yes/no questions. Considering the direct evidential in terms of prototypical semantics, it seems somewhat counterintuitive to have a direct evidential, basically an evidential that confirms the speaker's certainty about a topic, in a question. However, if one focuses less on the structure and more on the situation, some sense can be made. The speaker is asking the addressee for information, so the speaker assumes the addressee knows the answer. That assumption is where the direct evidential comes into play. The speaker holds a certain amount of certainty that the addressee will know the answer.
The speaker interprets the addressee as being in "direct relation" to the proposed content; the situation is the same as when, in regular sentences, the speaker assumes direct relation to the proposed information. The direct evidential affix is also seen in yes/no questions, similar to the situation with wh-questions. Floyd describes yes/no questions as being "characterized as instructions to the addressee to assert one of the propositions of a disjunction". Once again, the burden of direct evidence is placed on the addressee, not on the speaker. The question marker in Wanka Quechua, -chun, is derived from the negative –chu marker and the direct evidential (realized as –n in some dialects).

Inferential evidential, -chr(a) While –chr(a) is usually used in an inferential context, it has some non-prototypical uses.

Mild Exhortation In these constructions the evidential works to reaffirm and encourage the addressee's actions or thoughts. This example comes from a conversation between husband and wife, discussing the reactions of their family and friends after they have been gone for a while. The husband says he plans to stretch the truth and tell them about distant places to which he has gone, and his wife (in the example above) echoes and encourages his thoughts.

Acquiescence With these, the evidential is used to highlight the speaker's assessment of the inevitability of an event and acceptance of it. There is a sense of resistance, diminished enthusiasm, and disinclination in these constructions. This example comes from a discourse in which a woman demands compensation from the man (the speaker in the example) whose pigs ruined her potatoes. He denies that the pigs are his but finally realizes he may be responsible, and produces the above example.

Interrogative Somewhat similar to the –m(i) evidential, the inferential evidential can be found in content questions. However, the salient difference between the uses of the evidentials in questions is that in –m(i) marked questions an answer is expected. That is not the case with –chr(a) marked questions.

Irony Irony in language can be a somewhat complicated topic, as it functions differently across languages, and by its semantic nature it is already somewhat vague. For these purposes, it suffices to say that when irony takes place in Wanka Quechua, the –chr(a) marker is used. This example comes from discourse between a father and daughter about her refusal to attend school. It can be interpreted as a genuine statement (perhaps one can learn by resisting school) or as an ironic statement (that is an absurd idea).

Hearsay evidential, -sh(i) Aside from being used to express hearsay and revelation, this affix also has other uses.

Folktales, myths, and legends Because folktales, myths, and legends are, in essence, reported speech, it follows that the hearsay marker would be used with them. Many of these types of stories are passed down through generations, furthering this aspect of reported speech. A difference between simple hearsay and folktales can be seen in the frequency of the –sh(i) marker. In normal conversation using reported speech, the marker is used less, to avoid redundancy.

Riddles Riddles are somewhat similar to myths and folktales in that their nature is to be passed by word of mouth.

Omission and overuse of evidential affixes In certain grammatical structures, the evidential marker does not appear at all. In all Quechuan languages, the evidential does not appear in a dependent clause.
No example was given to depict this omission. When an evidential is omitted, the sentence is understood to have the same evidentiality as the other sentences in the context. Quechuan speakers vary as to how much they omit evidentials, but omissions occur only in connected speech. An interesting contrast to the omission of evidentials is the overuse of evidentials. If a speaker uses evidentials excessively and without reason, competence is brought into question. For example, the overuse of –m(i) could lead others to believe that the speaker is not a native speaker or, in some extreme cases, that one is mentally ill.

Cultural aspect In using evidentials, Quechua culture makes certain assumptions about the information being relayed. Those who do not abide by the cultural customs should not be trusted. A passage from Weber (1986) summarizes them nicely below: (Only) one's experience is reliable. Avoid unnecessary risk by assuming responsibility for information of which one is not absolutely certain. Do not be gullible. There are many folktales in which the villain is foiled by his gullibility. Assume responsibility only if it is safe to do so. Successful assumption of responsibility builds stature in the community. Evidentials also show that being precise and stating the source of one's information is extremely important in the language and the culture. Failure to use them correctly can lead to diminished standing in the community. Speakers are aware of the evidentials and even use proverbs to teach children the importance of being precise and truthful. Precision and information source are of the utmost importance. They are a powerful and resourceful method of human communication.

Literature As in the case of pre-Columbian Mesoamerica, there are a number of Andean texts in the local language which were written down in Latin characters after the European conquest, but which express, to a great extent, the culture of pre-Conquest times. For example, Quechua poems thought to date from Inca times are preserved as quotations within some Spanish-language chronicles dealing with the pre-Conquest period. However, the most important specimen of Quechua literature of this type is the so-called Huarochirí Manuscript (1598), which describes the mythology and religion of the valley of Huarochirí and has been compared to "an Andean Bible" and to the Mayan Popol Vuh. From the post-conquest period (starting from the middle of the 17th century), there are a number of anonymous or signed Quechua dramas, some of which deal with the Inca era, while most are on religious topics and of European inspiration. The most famous dramas are Ollantay and the plays describing the death of Atahualpa. Juan de Espinosa Medrano wrote several dramas in the language. Poems in Quechua were also composed during the colonial period. A notable example is the works of Juan Wallparrimachi, a participant in the Bolivian War of Independence. As for Christian literature, as early as 1583 the Third Provincial Church Council of Lima published a number of texts dealing with Christian doctrine and rituals, including a trilingual catechism in Spanish, Quechua and Aymara, and a number of other similar texts in 1584 and 1585. More texts of this type were published until the middle of the 17th century, mostly adhering to a Quechua literary standard that had been codified by the Third Council for this purpose. There is at least one Quechuan version of the Bible.
Dramas and poems continued to be written in the 19th and especially the 20th century; in addition, in the 20th century and more recently, more prose has been published. However, few new literary forms appeared in the 19th century, as European influences limited literary criticism. While some of that literature consists of original compositions (poems and dramas), the bulk of 20th century Quechua literature consists of traditional folk stories and oral narratives. Johnny Payne has translated two sets of Quechua oral short stories, one into Spanish and the other into English. Demetrio Túpac Yupanqui wrote a Quechuan version of Don Quixote, under the title Yachay sapa wiraqucha dun Qvixote Manchamantan.

Media A news broadcast in Quechua, "Ñuqanchik" ("all of us"), began in Peru in 2016. Many Andean musicians write and sing in their native languages, including Quechua and Aymara. Notable musical groups are Los Kjarkas, Kala Marka, J'acha Mallku, Savia Andina, Wayna Picchu, Wara, Alborada, Uchpa and many others. There are several Quechua and Quechua-Spanish bloggers, as well as a Quechua language podcast. The 1961 Peruvian film Kukuli was the first film spoken in the Quechua language.

See also

References

Sources Rolph, Karen Sue. Ecologically Meaningful Toponyms: Linking a Lexical Domain to Production Ecology in the Peruvian Andes. Doctoral dissertation, Stanford University, 2007. Adelaar, Willem. The Languages of the Andes. With the collaboration of P. C. Muysken. Cambridge Language Survey. Cambridge University Press, 2007. Cerrón-Palomino, Rodolfo. Lingüística Quechua. Centro de Estudios Rurales Andinos "Bartolomé de las Casas", 2nd ed., 2003. Cole, Peter. "Imbabura Quechua." North-Holland (Lingua Descriptive Studies 5), Amsterdam, 1982. Cusihuamán, Antonio. Diccionario Quechua Cuzco-Collao. Centro de Estudios Regionales Andinos "Bartolomé de Las Casas", 2001. Cusihuamán, Antonio. Gramática Quechua Cuzco-Collao. Centro de Estudios Regionales Andinos "Bartolomé de Las Casas", 2001. Mannheim, Bruce. The Language of the Inka since the European Invasion. University of Texas Press, 1991. Rodríguez Champi, Albino. (2006). Quechua de Cusco. Ilustraciones fonéticas de lenguas amerindias, ed. Stephen A. Marlett. Lima: SIL International y Universidad Ricardo Palma. Lengamer.org. Aikhenvald, Alexandra. Evidentiality. Oxford: Oxford UP, 2004. Print. Floyd, Rick. The Structure of Evidential Categories in Wanka Quechua. Dallas, TX: Summer Institute of Linguistics, 1999. Print. Hintz, Diane. "The evidential system in Sihuas Quechua: personal vs. shared knowledge." The Nature of Evidentiality Conference, The Netherlands, 14–16 June 2012. SIL International. Internet. 13 April 2014. Lefebvre, Claire, and Pieter Muysken. Mixed Categories: Nominalizations in Quechua. Dordrecht, Holland: Kluwer Academic, 1988. Print. Weber, David. "Information Perspective, Profile, and Patterns in Quechua." Evidentiality: The Linguistic Coding of Epistemology. Ed. Wallace L. Chafe and Johanna Nichols. Norwood, NJ: Ablex Pub, 1986. 137–55. Print.

Further reading Adelaar, Willem F. H. Modeling Convergence: Towards a Reconstruction of the History of Quechuan–Aymaran Interaction. About the origin of Quechua, and its relation with Aymara, 2011. Adelaar, Willem F. H. Tarma Quechua: Grammar, Texts, Dictionary. Lisse: Peter de Ridder Press, 1977. Bills, Garland D., Bernardo Vallejo C., and Rudolph C. Troike. An Introduction to Spoken Bolivian Quechua.
Special publication of the Institute of Latin American Studies, the University of Texas at Austin. Austin: Published for the Institute of Latin American Studies by the University of Texas Press, 1969. Coronel-Molina, Serafín M. Quechua Phrasebook. Lonely Planet, 2002. Curl, John. Ancient American Poets. Tempe, AZ: Bilingual Press,
complexes that then assemble into even larger complexes. In such cases, one uses the nomenclature, e.g., "dimer of dimers" or "trimer of dimers", to suggest that the complex might dissociate into smaller sub-complexes before dissociating into monomers. Another distinction often made when referring to oligomers is whether they are homomeric or heteromeric, referring to whether the smaller protein subunits that come together to make the protein complex are the same (homomeric) or different (heteromeric) from each other. For example, two identical protein monomers would come together to form a homo-dimer, whereas two different protein monomers would create a hetero-dimer.

Structure Determination Protein quaternary structure can be determined using a variety of experimental techniques that require a sample of protein in a variety of experimental conditions. The experiments often provide an estimate of the mass of the native protein and, together with knowledge of the masses and/or stoichiometry of the subunits, allow the quaternary structure to be predicted with a given accuracy. It is not always possible to obtain a precise determination of the subunit composition, for a variety of reasons. The number of subunits in a protein complex can often be determined by measuring the hydrodynamic molecular volume or mass of the intact complex, which requires native solution conditions. For folded proteins, the mass can be inferred from the volume using the partial specific volume of 0.73 ml/g. However, volume measurements are less certain than mass measurements, since unfolded proteins appear to have a much larger volume than folded proteins; additional experiments are required to determine whether a protein is unfolded or has formed an oligomer.

Common techniques used to study protein quaternary structure include ultracentrifugation, surface-induced dissociation mass spectrometry, coimmunoprecipitation, FRET, and nuclear magnetic resonance (NMR).
Direct mass measurement of intact complexes: sedimentation-equilibrium analytical ultracentrifugation, electrospray mass spectrometry, mass spectrometric immunoassay (MSIA).
Direct size measurement of intact complexes: static light scattering, size exclusion chromatography (requires calibration), dual polarisation interferometry.
Indirect size measurement of intact complexes: sedimentation-velocity analytical ultracentrifugation (measures the translational diffusion constant), dynamic light scattering (measures the translational diffusion constant), pulsed-gradient protein nuclear magnetic resonance (measures the translational diffusion constant), fluorescence polarization (measures the rotational diffusion constant), dielectric relaxation (measures the rotational diffusion constant), dual polarisation interferometry (measures the size and the density of the complex).

Methods that measure the mass or volume under unfolding conditions (such as MALDI-TOF mass spectrometry and SDS-PAGE) are generally not useful, since non-native conditions usually cause the complex to dissociate into monomers. However, these may sometimes be applicable; for example, the experimenter may apply SDS-PAGE after first treating the intact complex with chemical cross-linking reagents.

Structure Prediction Some bioinformatics methods have been developed for predicting the quaternary structural attributes of proteins based on their sequence information, using various modes of pseudo amino acid composition.
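Since the determination methods above turn on the arithmetic linking native mass, volume, and subunit stoichiometry, a short illustrative calculation may help. The sketch below is not from the original article; the function names and example values are invented, and it uses only the 0.73 ml/g partial specific volume quoted above.

```python
# Illustrative sketch: estimating oligomeric state from a measured
# native mass, and inferring mass from hydrodynamic volume via the
# partial specific volume quoted above (0.73 ml/g). Names and example
# values are hypothetical.

AVOGADRO = 6.022e23             # molecules per mole
PARTIAL_SPECIFIC_VOLUME = 0.73  # ml/g, typical for folded proteins

def subunit_count(native_mass_kda: float, subunit_mass_kda: float) -> int:
    """Round the native/subunit mass ratio to the nearest stoichiometry."""
    return round(native_mass_kda / subunit_mass_kda)

def mass_from_volume(volume_nm3: float) -> float:
    """Infer molecular mass (kDa) from molecular volume (nm^3)."""
    volume_ml = volume_nm3 * 1e-21           # 1 nm^3 = 1e-21 ml
    grams = volume_ml / PARTIAL_SPECIFIC_VOLUME
    return grams * AVOGADRO / 1000.0         # g per molecule -> kDa

# A complex measured at ~205 kDa built from ~51 kDa subunits is most
# consistent with a tetramer (possibly a "dimer of dimers"):
print(subunit_count(205.0, 51.0))            # -> 4
print(round(mass_from_volume(100.0), 1))     # ~82.5 kDa for 100 nm^3
```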
Protein folding prediction programs used to predict protein tertiary structure have also been expanding to better predict protein quaternary structure. One such development is AlphaFold-Multimer, built upon the AlphaFold model for predicting protein tertiary structure.

Role in Cell Signaling Protein quaternary structure also plays an important role in certain cell signaling pathways. The G-protein coupled receptor pathway involves a heterotrimeric protein known as a G-protein. G-proteins contain three distinct subunits known as the G-alpha, G-beta, and G-gamma subunits. When the G-protein is activated, it binds to the G-protein coupled receptor protein and the cell signaling pathway is initiated. Another example is the receptor tyrosine kinase (RTK) pathway, which is initiated by the dimerization of two receptor tyrosine kinase monomers. When the dimer is formed, the two kinases can phosphorylate each other and initiate a cell signaling pathway.

Protein–protein interactions Proteins are capable of forming very tight complexes. For example, ribonuclease inhibitor binds to ribonuclease A with a roughly 20 fM dissociation constant. Other proteins have evolved to bind specifically to
legends, puns, and memorable characters, creating a 5-part series in the Sierra stable. The series was originally titled Hero's Quest. However, Sierra failed to trademark the name. The Milton Bradley Company successfully trademarked an electronic version of HeroQuest, its unrelated board game produced jointly with Games Workshop, which forced Sierra to change the series' title to Quest for Glory. This decision meant that all future games in the series (as well as newer releases of Hero's Quest I) used the new name.

Series Lori Cole pitched Quest for Glory to Sierra as a "rich, narrative-driven, role-playing experience". The series consisted of five games, each of which followed directly upon the events of the last. New games frequently referred to previous entries in the series, often in the form of cameos by recurring characters. The objective of the series is to transform the player character from an average adventurer into a hero by completing non-linear quests. The series was also revolutionary in its character import system, which allowed players to import their individual character, including the skills and wealth they had acquired, from one game to the next. Hybrids in both gameplay and theme, the games feature serious stories leavened with humor throughout. There are real dangers to face, and true heroic feats to perform, but silly details and overtones creep in (when the drama of adventuring does not force them out). Cheap word play is particularly frequent, to the point that the second game's ending refers to itself as the hero's "latest set of adventures and miserable puns." The games have recurring story elements. For example, each installment in the series requires the player to create a dispel potion. The games include a number of Easter eggs, including a number of allusions to other Sierra games. For example, if a player types "pick nose" in the first game (or clicks the lockpick icon on the player character in the remake) and their lock-picking skill is high enough, the game responds: "Success! You now have an open nose". If the skill is too low, the player could insert the lock pick too far, killing himself. Another example is Dr. Cranium, an allusion to The Castle of Dr. Brain, in the fourth game. Each game draws its inspiration from a different culture and mythology (in order: Germanic/fairy tale; Middle Eastern/Arabian Nights; Egyptian/African; Slavic folklore; and finally Greco-Mediterranean), with the hero facing increasingly powerful opponents with help from characters who become more familiar from game to game. Each game varies somewhat from the tradition it is derived from; for example, Baba Yaga, a character borrowed from Slavic folklore, appears in the first game, which is based on Germanic mythology. The second game, which uses Middle Eastern folklore, introduces several Arab and African-themed characters who reappear in the third game, based on Egyptian mythology. Characters from every game and genre in the series reappear in the fourth and fifth games. In addition to deviating from the player's expectations of the culture represented in each game, the series also includes a number of intentional anachronisms, such as the pizza-loving mad scientists in the later games. Many CRPG enthusiasts consider the Quest for Glory series to be among the best in the genre, and the series is lauded for its non-linearity.
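The character import system described above can be pictured as a small record of the hero's class, skills, and wealth written out by one game and read back by the next. The sketch below is purely illustrative: the file format and field names are invented for the example and bear no relation to Sierra's actual save data.

```python
# Purely illustrative sketch of a character import system like the one
# described above: the hero's class, skills, and wealth are written out
# by one game and read back by the next. The format is invented here.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Hero:
    name: str
    char_class: str                      # "Fighter", "Magic User", "Thief", "Paladin"
    skills: dict = field(default_factory=dict)
    gold: int = 0
    inventory: list = field(default_factory=list)

def export_hero(hero: Hero, path: str) -> None:
    """Write the hero to disk at the end of a game."""
    with open(path, "w") as f:
        json.dump(asdict(hero), f)

def import_hero(path: str) -> Hero:
    """Read the hero back at the start of the next game, keeping the
    character's statistics and parts of its inventory."""
    with open(path) as f:
        data = json.load(f)
    # Only some items carry over between games; per the series rules
    # described below, a Paladin keeps the paladin sword.
    keep = [item for item in data["inventory"]
            if item == "paladin sword" and data["char_class"] == "Paladin"]
    return Hero(data["name"], data["char_class"], data["skills"],
                data["gold"], keep)
```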
The games are notable for blending the mechanics of adventure video games and role-playing video games, for their unique tone, which combines pathos and humour, and for game systems which were ahead of their time, such as day-night cycles, non-playable characters who adhered to their own schedules within the games, and character improvement through both skill practice and point investment. The website Polygon and the Kotaku blog have characterised the series as a precursor to modern-day RPGs. Fraser Brown of the Destructoid blog considers the games "one of the greatest adventure series of all time". Rowan Kaiser of Engadget credits the games' hybrid adventure and roleplaying systems for the series' success. "The binary succeed/fail form of adventure game puzzles tended to either make those games too easy or too hard," he wrote, "But most puzzles in Quest For Glory involved some kind of skill check for your hero. This meant that you could succeed at most challenges by practicing or exploring, instead of getting stuck on bizarre item-combination puzzles".

Gameplay The first four games are hybrid adventure/role-playing video games with real-time combat, while the fifth game switches to the action RPG genre. The gameplay standards established in earlier Sierra adventure games are enhanced by the player's ability to choose the character's career path from among the three traditional role-playing game backgrounds: fighter, magic-user/wizard and thief. Further variation is added by the ability to customize the Hero's abilities, including the option of selecting skills normally reserved for another character class, leading to unique combinations often referred to as "hybrid characters". In the second or third game, a character can be initiated as a Paladin by performing honorable actions, changing his class and abilities and earning a unique sword. This carries over when the character is exported into later games. Any character that finishes any game in the series (except Dragon Fire, the last in the series) can be exported to a later game (Shadows of Darkness has a glitch which allows one to import characters from the same game), keeping the character's statistics and parts of its inventory. If the character received the paladin sword, he keeps the magic sword (Soulforge or Piotyr's sword) and the special paladin magic abilities. A character imported into a later game in the series from any other game can be assigned any character class, including Paladin. Each career path has its own strengths and weaknesses, and scenarios unique to the class because of the skills associated with it. Each class also has its own distinct way to solve various in-game puzzles, which encourages replay: some puzzles have up to four different solutions. For instance, if a door is closed, instead of lockpicking or casting an open spell, the fighter can simply knock down the door. The magic user and the thief are both non-confrontational characters, as they lack the close-range ability of the fighter, but are better able to attack from a distance, using daggers or spells. An example of these separate paths can be seen early in the first game. A gold ring belonging to the healer rests in a nest on top of a tree; fighters might make it fall by hurling rocks, thieves may want to climb the
rest or risking injury. Mana is only required by characters with skill in magic, and is calculated according to the character's intelligence and magic attributes. Puzzle and Experience points only show the development of the player and his progress in the game, though in the first game they also affect the kind of random encounters a player faces, as some monsters only appear after a certain level of experience is reached. Games Quest for Glory: So You Want to Be a Hero In the valley barony of Spielburg, the evil ogress Baba Yaga has cursed the land and the baron who tried to drive her off. His children have disappeared, while the land is ravaged by monsters and brigands. The Valley of Spielburg is in need of a Hero able to solve these problems. The original game was released in 1989, while a VGA remake was released in 1992. Quest for Glory II: Trial by Fire Quest for Glory II: Trial by Fire takes place in the land of Shapeir, in the world of Gloriana. Directly following the events of the first game, the newly proclaimed Hero of Spielburg travels by flying carpet with his friends Abdulla Doo, Shameen and Shema to the desert city of Shapeir. The city is threatened by magical elementals, while the Emir Arus al-Din of Shapeir's sister city Raseir is missing and his city has fallen under tyranny. Quest for Glory II is the only game in the series that Sierra neither released nor remade beyond the EGA graphics engine, but AGD Interactive released a VGA fan remake of the game using the Adventure Game Studio engine on 24 August 2008. Quest for Glory III: Wages of War Rakeesh the Paladin brings the Hero (and Prince of Shapeir), along with Uhura and her son Simba, to his homeland, the town of Tarna in a jungle and savannah country called Fricana that resembles central African ecosystems. Tarna is on the brink of war; the Simbani, the tribe of Uhura, are ready to do battle with the Leopardmen. Each tribe has stolen a sacred relic from the other, and both refuse to return it until the other side does. The Hero must prevent the war and then thwart a demon who may be loosed upon the world. Quest for Glory: Shadows of Darkness Drawn without warning from his victory in Fricana, the Hero arrives without equipment or explanation in the middle of the hazardous Dark One Caves in the distant land of Mordavia. While struggling to survive in this land plagued with undead, the Hero must prevent a dark power from summoning eternal darkness into the world. Quest for Glory V: Dragon Fire Erasmus introduces the player character, the Hero, to the Greece-like kingdom of Silmaria, whose king was recently assassinated. Thus, the traditional Rites of Rulership are due to commence, and the victor will be crowned king. The Hero enters the contest with the assistance of Erasmus, Rakeesh, and many old friends from previous entries in the series. He competes against several rivals, including the Silmarian guard Kokeeno Pookameeso, the warlord Magnum Opus, the hulking Gort, and the warrior Elsa Von Spielburg. Collections Quest for Glory Anthology (1996), a package that includes the first four games, with the fully patched CD version of QFG IV; game copy protection codes (a feature of Quest for Glory IV) are included in the manual and on CD, while game saves are included in the save folder of the CD and the VGA version of Quest for Glory I. Quest for Glory Collection Series (1997), a re-release of Anthology with a Dragon Fire demo and sample soundtrack.
Quest for Glory 1–5 (2012), a digital collection on GOG.com and Steam that includes all five games in the series (including the EGA version and VGA remake of QFG1). Original concept Originally, the series was to be a tetralogy, consisting of four games, with the following themes and cycles: the four cardinal directions, the four classical elements, the four seasons and four different mythologies. However, when Shadows of Darkness was designed, it was thought that it would be too difficult for the hero to go straight from Shapeir to Mordavia and defeat the Dark One. To solve the problem, a new game, Wages of War, was inserted into the canon, resulting in a renumbering of the series. Evidence for this can be found at the end of Trial by Fire: the player is told that the next game will be Shadows of Darkness, and a fanged vampiric moon is shown to hint at that game's theme. The developers discussed this in the Fall 1992 issue of Sierra's InterAction magazine, and in an online chat room: Somewhere between finishing Trial by Fire and cranking up the design process for Shadows of Darkness, the husband-and-wife team realized a fifth chapter would have to be added to bridge the games. That chapter became Wages of War. The concept of seasons in the games represents the maturation of the Hero as he moves from story to story. It's a critical component in a series that – from the very beginning – was designed to be a defined quartet of stories, representing an overall saga with a distinct beginning, middle, and end. In the first episode, the player is a new graduate of the Famous Adventurer's Correspondence School, ready to venture out into the springtime of his career and build a rep. It's a light-hearted, exhilarating
United States Use of the term quango is less common in the United States, although many US bodies, including Government Sponsored Enterprises, operate in the same fashion. However, Paul Krugman has stated that the US Federal Reserve is, effectively, "what the British call a quango... Its complex structure divides power between the federal government and the private banks that are its members, and in effect gives substantial autonomy to a governing board of long-term appointees." Other U.S.-based organizations that fit the original definition of quangos include the National Center for Missing and Exploited Children (NCMEC), the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac). On the broader definition now used in the United Kingdom, there are hundreds of federal agencies that might be classed as quangos. History The term "quasi non-governmental organisation" was created in 1967 by Alan Pifer of the US-based Carnegie Foundation, in an essay on the independence and accountability of publicly funded bodies that are incorporated in the private sector. This essay got the attention of David Howell, a Conservative M.P. in Britain, who then organized an Anglo-American project with Pifer to examine the pros and cons of such enterprises. The lengthy term was shortened to the acronym QUANGO (later lowercased to quango) by a British participant in the joint project, Anthony Barker, during one of the conferences on the subject. It describes an ostensibly non-governmental organisation performing governmental functions, often in receipt of funding or other support from government. By contrast, traditional NGOs mostly get their donations or funds from the public and other organisations that support their cause. An essential feature of a quango in the original definition was that it should not be a formal part of the state structure. The term was then extended to apply to a range of organisations, such as executive agencies providing (from 1988) health, education and other services. Particularly in the UK, this occurred in a polemical atmosphere in which it was alleged that the proliferation of such bodies was undesirable and should be reversed. In this context, the original acronym was often replaced by a backronym spelt out as "quasi-autonomous national government organisation", often rendered as "qango". This spawned the related acronym "qualgo", a "quasi-autonomous local government organisation". The less contentious term non-departmental public body (NDPB) is often employed to identify numerous organisations with devolved governmental responsibilities. Examples in the United Kingdom include those engaged in the regulation of various commercial and service sectors, such as the Water Services Regulation Authority. The UK government produced a formal definition of a non-departmental public body or quango in 1997. Criticisms The Times has accused quangos of bureaucratic waste
Accounting Standards Review Board, Takeovers Panel) to quasi-judicial (e.g. Police Complaints Authority, Race Relations Conciliator), to the arts (e.g. New Zealand Symphony Orchestra, NZ Film Commission), to social welfare (e.g. Housing Corporation of NZ) and to substantial enterprises (e.g. Auckland International Airport Ltd)." By 2003, the number of quangos had increased to an estimated 400 (excluding Boards of Trustees), with more than 3,000 people sitting on governance boards that were appointed by successive governments. This appointment of people to governance boards has been widely criticised by political parties and political commentators as a form of cronyism. In 2010, there were 2,607 crown entities (including Boards of Trustees) with annual expenditure of $32 billion in 2009/2010. United Kingdom Despite a 1979 'commitment' from the Conservative party to curb the growth of non-departmental bodies, their numbers grew rapidly throughout that party's time in power during the 1980s. One UK example is the Forestry Commission, which is a non-ministerial government department responsible for forestry in England. The Cabinet Office 2009 report on non-departmental public bodies found that there were 766 NDPBs sponsored by the UK government. The number had been falling: there were 827 in 2007 and 790 in 2008, and the number of NDPBs had fallen by over 10% since 1997. Staffing and expenditure of NDPBs had increased, however: they employed 111,000 people in 2009 and spent £46.5 billion, of which £38.4 billion was directly funded by the Government.
wood, furs, and other natural materials, but are now often made of metal or plastic. Etymology The English word quiver has its origins in Old French, written as quivre, cuevre or coivre. Types Belt quiver The most common style of quiver is a flat or cylindrical container suspended from the belt. They are found across many cultures from North America to China. Many variations of this type exist, such as being canted forwards or backwards, and being carried on the dominant hand side, off-hand side, or the small of the back. Some variants enclose almost the entire arrow, while minimalist "pocket quivers" consist of little more than a small stiff pouch that only covers the first few inches. The Bayeux Tapestry shows that most bowmen in medieval Europe used belt quivers. Back quiver Back quivers are secured to the archer's back by leather straps, with the nock ends protruding above the dominant hand's shoulder. Arrows can be drawn over the shoulder rapidly by the nock. This style of quiver was used by native peoples of North America and Africa, and was also commonly depicted in bas-reliefs from ancient Assyria. They were also used in Ancient Greece and often feature on sculptural representations of Artemis, goddess of the hunt. While popular in cinema and 20th century art for depictions of medieval European characters (such as Robin Hood), this style of quiver was rarely used in medieval Europe. Ground quiver
A ground quiver is used for target shooting or warfare when the archer is shooting from a fixed location. They can be as simple as stakes in the ground with a ring at the top to hold the arrows, or more elaborate designs that hold the arrows within reach without the archer having to lean down to draw. Bow quiver A modern invention,
the pound sterling.
The Quid, a Canadian garage rock band from Winnipeg, Manitoba. Quid (encyclopedia), a French encyclopedia, established in 1963 by Dominique Frémy. Quid Inc., a private software and services company, specializing in text-based data analysis. Tertium quids (sometimes quids), various factions of the Democratic-Republican
Chemistry The UV absorption of quinine peaks around 350 nm (in UVA). Fluorescent emission peaks at around 460 nm (bright blue/cyan hue). Quinine is highly fluorescent (quantum yield ~0.58) in 0.1 M sulfuric acid solution. The 3D structure of quinine can be viewed using QRChem.net. Synthesis Cinchona trees remain the only economically practical source of quinine. However, under wartime pressure during World War II, research towards its synthetic production was undertaken. A formal chemical synthesis was accomplished in 1944 by American chemists R.B. Woodward and W.E. Doering. Since then, several more efficient quinine total syntheses have been achieved, but none of them can compete in economic terms with isolation of the alkaloid from natural sources. The first synthetic organic dye, mauveine, was discovered by William Henry Perkin in 1856 while he was attempting to synthesize quinine. Biosynthesis In the first step of quinine biosynthesis, the enzyme strictosidine synthase catalyzes a stereoselective Pictet–Spengler reaction between tryptamine and secologanin to yield strictosidine. Suitable modification of strictosidine leads to an aldehyde. Hydrolysis and decarboxylation would initially remove one carbon from the iridoid portion and produce corynantheal. The tryptamine side-chain would then be cleaved adjacent to the nitrogen, and this nitrogen bonded to the acetaldehyde function to yield cinchonaminal. Ring opening in the indole heterocyclic ring could generate new amine and keto functions. The new quinoline heterocycle would then be formed by combining this amine with the aldehyde produced in the tryptamine side-chain cleavage, giving cinchonidinone. For the last step, hydroxylation and methylation give quinine. History Quinine was used as a muscle relaxant by the Quechua people, who are indigenous to Peru, Bolivia and Ecuador, to halt shivering. The Quechua would mix the ground bark of cinchona trees with sweetened water to offset the bark's bitter taste, thus producing something similar to tonic water. Spanish Jesuit missionaries were the first to bring cinchona to Europe. The Spanish had observed the Quechua's use of cinchona and were aware of the medicinal properties of cinchona bark by the 1570s or earlier: Nicolás Monardes (1571) and Juan Fragoso (1572) both described a tree, which was subsequently identified as the cinchona tree, whose bark was used to produce a drink to treat diarrhea. Quinine has been used in unextracted form by Europeans since at least the early 17th century. A popular story of how it was brought to Europe by the Countess of Chinchon was debunked by medical historian Alec Haggis around 1941. During the 17th century, malaria was endemic to the swamps and marshes surrounding the city of Rome. It had caused the deaths of several popes, many cardinals and countless common Roman citizens. Most of the Catholic priests trained in Rome had seen malaria victims and were familiar with the shivering brought on by the febrile phase of the disease. The Jesuit Agostino Salumbrino (1564–1642), an apothecary by training who lived in Lima (in present-day Peru), observed the Quechua using the bark of the cinchona tree to treat such shivering. While its effect in treating malaria (and malaria-induced shivering) was unrelated to its effect in controlling shivering from rigors, it was a successful medicine against malaria.
At the first opportunity, Salumbrino sent a small quantity to Rome for testing as a malaria treatment. In the years that followed, cinchona bark, known as Jesuit's bark or Peruvian bark, became one of the most valuable commodities shipped from Peru to Europe. When King Charles II was cured of malaria at the end of the 17th century with quinine, it became popular in London. It remained the antimalarial drug of choice until the 1940s, when other drugs took over. The form of quinine most effective in treating malaria was found by Charles Marie de La Condamine in 1737. In 1820, French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou first isolated quinine from the bark of a tree in the genus Cinchona – probably Cinchona officinalis – and subsequently named the substance. The name was derived from the original Quechua (Inca) word for the cinchona tree bark, quina or quina-quina, which means "bark of bark" or "holy bark". Prior to 1820, the bark was dried, ground to a fine powder, and mixed into a liquid (commonly wine) in order to be drunk. Large-scale use of quinine as a malaria prophylaxis started around 1850. In 1853 Paul Briquet published a brief history and discussion of the literature on "quinquina". Quinine played a significant role in the colonization of Africa by Europeans. The availability of quinine for treatment had been said to be the prime reason Africa ceased to be known as the "white man's grave". A historian said, "it was quinine's efficacy that gave colonists fresh opportunities to swarm into the Gold Coast, Nigeria and other parts of west Africa".
but this risk is small and the physician should not hesitate to use quinine in people with G6PD deficiency when there is no alternative. Adverse effects Quinine can cause unpredictable serious and life-threatening blood and cardiovascular reactions including low platelet count and hemolytic-uremic syndrome/thrombotic thrombocytopenic purpura (HUS/TTP), long QT syndrome and other serious cardiac arrhythmias including torsades de pointes, blackwater fever, disseminated intravascular coagulation, leukopenia, and neutropenia. Some people who have developed TTP due to quinine have gone on to develop kidney failure. It can also cause serious hypersensitivity reactions including anaphylactic shock, urticaria, serious skin rashes, including Stevens–Johnson syndrome and toxic epidermal necrolysis, angioedema, facial edema, bronchospasm, granulomatous hepatitis, and itchiness. The most common adverse effects involve a group of symptoms called cinchonism, which can include headache, vasodilation and sweating, nausea, tinnitus, hearing impairment, vertigo or dizziness, blurred vision, and disturbance in color perception. More severe cinchonism includes vomiting, diarrhea, abdominal pain, deafness, blindness, and disturbances in heart rhythms. Cinchonism is much less common when quinine is given by mouth, but oral quinine is not well tolerated (quinine is exceedingly bitter and many people will vomit after ingesting quinine tablets). Other drugs, such as Fansidar (sulfadoxine with pyrimethamine) or Malarone (proguanil with atovaquone), are often used when oral therapy is required. Quinine ethyl carbonate is tasteless and odourless, but is available commercially only in Japan. Blood glucose, electrolyte and cardiac monitoring are not necessary when quinine is given by mouth. Quinine has diverse unwanted interactions with numerous prescription drugs, such as potentiating the anticoagulant effects of warfarin. Mechanism of action Quinine is used for its toxicity to the malarial pathogen, Plasmodium falciparum, by interfering with the parasite's ability to dissolve and metabolize hemoglobin. As with other quinoline antimalarial drugs, the precise mechanism of action of quinine has not been fully resolved, although in vitro studies indicate it inhibits nucleic acid and protein synthesis, and inhibits glycolysis in P. falciparum. The most widely accepted hypothesis of its action is based on the well-studied and closely related quinoline drug, chloroquine. This model involves the inhibition of hemozoin biocrystallization in the heme detoxification pathway, which facilitates the aggregation of cytotoxic heme. Free cytotoxic heme accumulates in the parasites, causing their deaths. Quinine may target the malaria purine nucleoside phosphorylase enzyme.
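The absorption and emission peaks quoted in the Chemistry passage above imply a large Stokes shift, which is why quinine's blue fluorescence under a UVA lamp is so easy to see. A quick back-of-the-envelope check, using only the figures stated in the text:

```python
# Stokes shift of quinine from the peak values quoted above.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

absorption_nm = 350.0  # UVA absorption peak
emission_nm = 460.0    # blue/cyan fluorescence peak

shift_nm = emission_nm - absorption_nm
shift_ev = HC_EV_NM / absorption_nm - HC_EV_NM / emission_nm

print(f"Stokes shift: {shift_nm:.0f} nm ({shift_ev:.2f} eV)")
# Stokes shift: 110 nm (0.85 eV)
```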
To maintain their monopoly on cinchona bark, Peru and surrounding countries began outlawing the export of cinchona seeds and saplings in the early 19th century. The Dutch government persisted in its attempts to smuggle the seeds, and by the late 19th century the Dutch grew the plants in Indonesian plantations. Soon they became the main suppliers of the tree. In 1913 they set up the Kina Bureau, a cartel of cinchona producers charged with controlling price and production. By the 1930s Dutch plantations in Java were producing 22 million pounds of cinchona bark, or 97% of the world's quinine production. U.S. attempts to prosecute the Kina Bureau proved unsuccessful. During World War II, Allied powers were cut off from their supply of quinine when Germany conquered the Netherlands, and Japan controlled the Philippines and Indonesia. The US had obtained four million cinchona seeds from the Philippines and began operating cinchona plantations in Costa Rica. Additionally, they began harvesting wild cinchona bark during the Cinchona Missions. Such supplies came too late. Tens of thousands of US troops in Africa and the South Pacific died of malaria due to the lack of quinine. Despite controlling the supply, the Japanese did not make effective use of quinine, and thousands of Japanese troops in the southwest Pacific died as a result.
People Quincy (name), including a list of people with the name Quincy Quincy political family, including members of the family Places and jurisdictions France Quincy, Cher, a commune in the Cher département A hamlet of Chilly in the Haute-Savoie département A former commune in the Seine-et-Marne département, now part of Quincy-Voisins United States Quincy, California Quincy, Florida Quincy, Illinois Quincy University, located in Quincy, Illinois the former Roman Catholic Diocese of Quincy, now a Latin titular see Quincy, Indiana Quincy, Iowa Quincy, Kansas Quincy, an unincorporated community in Lewis County, Kentucky Quincy, Massachusetts, the first Quincy in the United States
the Chicago Transit Authority's 'L' system Quincy House (disambiguation), several places Josiah Quincy House, a historical landmark in Quincy, Massachusetts built and owned by a Josiah Quincy Josiah Quincy Mansion, a former mansion in Wollaston Park, Quincy, Massachusetts, built and owned by a Josiah Quincy Quincy Homestead, the Dorothy Quincy House and remaining homestead of the Quincy family Quincy Market, a historic building in the Faneuil Hall Marketplace shopping center in Boston, Massachusetts Singapore The Quincy Hotel, a luxury hotel in Singapore managed by Far East Hospitality Ships USS Quincy, the name of several ships In pop culture Quincy (film), a 2018 American documentary film Quincy, M.E., an American television series starring Jack Klugman as Dr. Quincy Quincy (Bleach), the race of Hollow-slayers who utilize spiritual power in the anime and manga Bleach "Quincy / Kono Yo no Shirushi", a single by Korean singer BoA Quincy (band), a new wave power pop band from New Jersey Quincy (comic strip), a newspaper comic strip Quincy, the first name of cartoon character Mr. Magoo Quincy, the
a rare five-child multiple birth In music, a tuplet of five successive notes of equal duration The Quintuplet cluster, a star cluster near the
set of five similar items. It may refer to:
woman to gain a pilot's license in the United States Quimby (surname), a list of people and fictional characters Quimby, Iowa, a small city in the United States Quimby (band),
in the wild, where they may be released to supplement the wild population, or extend into areas outside their natural range. In 2007, 40 million quail were produced in the U.S. The collective noun for a group of quail is a flock, covey or bevy. New World Genus Callipepla Scaled quail (commonly called blue quail), Callipepla squamata Elegant quail, Callipepla douglasii California quail, Callipepla californica Gambel's quail, Callipepla gambelii Genus Cyrtonyx Montezuma quail, Cyrtonyx montezumae Ocellated quail, Cyrtonyx ocellatus Genus Dactylortyx Singing quail, Dactylortyx thoracicus Genus Philortyx Banded quail, Philortyx fasciatus Genus Colinus Northern bobwhite, Colinus virginianus Black-throated bobwhite, Colinus nigrogularis Spot-bellied bobwhite, Colinus leucopogon Crested bobwhite, Colinus cristatus Genus Odontophorus Marbled wood quail, Odontophorus gujanensis Spot-winged wood quail, Odontophorus capueira Black-eared wood quail, Odontophorus melanotis Rufous-fronted wood quail, Odontophorus erythrops Black-fronted wood quail, Odontophorus atrifrons Chestnut wood quail, Odontophorus hyperythrus Dark-backed wood quail, Odontophorus melanonotus Rufous-breasted wood quail, Odontophorus speciosus Tacarcuna wood quail, Odontophorus dialeucos Gorgeted wood quail, Odontophorus strophium Venezuelan wood quail, Odontophorus columbianus Black-breasted wood quail,
Odontophorus leucolaemus Stripe-faced wood quail, Odontophorus balliviani Starred wood quail, Odontophorus stellatus Spotted wood quail, Odontophorus guttatus Genus Oreortyx Mountain quail, Oreortyx pictus Genus Rhynchortyx Tawny-faced quail, Rhynchortyx cinctus Old World Genus Coturnix Common quail (also called Pharaoh, Bible, European or Nile quail), Coturnix coturnix Japanese quail, Coturnix japonica Stubble quail, Coturnix pectoralis †New Zealand quail, Coturnix novaezelandiae (extinct) Rain quail, Coturnix coromandelica Harlequin quail, Coturnix delegorguei †Canary Islands quail, Coturnix gomerae (fossil) Genus Synoicus Brown quail, Synoicus ypsilophorus Blue quail, Synoicus adansonii King quail, Synoicus chinensis Snow Mountain quail, Synoicus monorthonyx Genus Perdicula Jungle bush quail, Perdicula asiatica Rock bush quail, Perdicula argoondah Painted bush quail, Perdicula erythrorhyncha Manipur bush
Bunny Arts, entertainment and media Fictional entities Quagmire (comics), a Marvel Comics character Quagmire family, characters from the animated television sitcom Family Guy Glenn Quagmire, a friend of the head of the family Quagmire family, a principal family in the children's novel series A Series of Unfortunate Events Quagmire Moat, half of the Moat Twins from Eureeka's Castle
Quagmire McDuck, a Disney character from Clan McDuck Games Quagmire, a level in the game Banjo-Tooie Quagmire, a black enchantment from the card game Magic: The Gathering Quagmire, a
fiber or aluminium alloy (or sometimes both aluminium and carbon), and is very lightweight for its strength. Shafts come with varying degrees of stiffness — referred to as the "spine" of the bolt. The more resistant to bending a bolt is, the more "spine" it is said to have, and a crossbow with a higher draw weight ideally needs to be paired with a heavier bolt point and higher spine specifications. The weight of a shaft is usually given in grains; product descriptions may provide the total weight in grains, or in grains per inch (GPI), from which the total weight of the shaft can be calculated by multiplying the GPI value by the length of the shaft in inches. Fletching Fletchings, also referred to as vanes, are fins located at the rear end of the shaft just before the nock. The fletching is typically made from soft light materials such as feathers, plastic or silicone rubber. They stabilize the trajectory of the bolt via three different means: resisting pitching and yawing of the shaft by acting like a stabilizer fin (fin-stabilization); reducing deviation from the longitudinal axis by creating a back-pulling center of pressure behind the bolt's center of mass (drag-stabilization); and in some particular cases, creating a rotation around the longitudinal axis (spin-stabilization) by having the fletchings mounted at a slight angle of attack. There is no rule or formula for determining
the length of fletching needed — generally the longer the shaft is, the longer the fletching needs to be, and vice versa. Nock A nock is a small notched piece that is attached to the rear end of the shaft, for engaging and receiving the propulsive push from the string during shooting. Nocks are made of either plastic or aluminium. Size and weight There is no hard and fast rule for bolt sizing. Generally, bolts are 15 to 22 inches long, the standard length being 20 inches. Experts recommend longer bolts, but they have certain disadvantages as well. The weight of the bolt can have a serious effect on its range. The bolt's total weight includes the shaft, nock, insert, vanes, and broadhead or field point. Almost all bolt manufacturers will list how many grains each shaft weighs or how many grains are in each inch of the shaft. A more massive bolt, e.g. at least 400 grains, will have better downrange energy and offer better penetration, but will travel more slowly and thus drop more due to gravity during its flight. A lighter bolt will fly faster and give the shooter a longer range, but might not have the desired penetration.
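The grain bookkeeping described above is simple arithmetic; the sketch below makes it concrete. The GPI value and component weights are illustrative assumptions, not any manufacturer's specification:

```python
def bolt_total_weight(gpi: float, shaft_length_in: float, nock: float,
                      insert: float, vanes: float, point: float) -> float:
    """Total bolt weight in grains: shaft weight (GPI x length in inches)
    plus every attached component."""
    shaft = gpi * shaft_length_in
    return shaft + nock + insert + vanes + point

# A 20-inch shaft at 13 GPI with a 100-grain field point:
total = bolt_total_weight(gpi=13.0, shaft_length_in=20.0,
                          nock=15.0, insert=30.0, vanes=18.0, point=100.0)
print(f"{total:.0f} grains")  # 423 grains, in the heavier-bolt range
```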
sequences. In the quasispecies model, mutations occur through errors made in the process of copying already existing sequences. Further, selection arises because different types of sequences tend to replicate at different rates, which leads to the suppression of sequences that replicate more slowly in favor of sequences that replicate faster. However, the quasispecies model does not predict the ultimate extinction of all but the fastest replicating sequence. Although the sequences that replicate more slowly cannot sustain their abundance level by themselves, they are constantly replenished as sequences that replicate faster mutate into them. At equilibrium, removal of slowly replicating sequences due to decay or outflow is balanced by replenishing, so that even relatively slowly replicating sequences can remain present in finite abundance. Due to the ongoing production of mutant sequences, selection does not act on single sequences, but on mutational "clouds" of closely related sequences, referred to as quasispecies. In other words, the evolutionary success of a particular sequence depends not only on its own replication rate, but also on the replication rates of the mutant sequences it produces, and on the replication rates of the sequences of which it is a mutant. As a consequence, the sequence that replicates fastest may even disappear completely in selection-mutation equilibrium, in favor of more slowly replicating sequences that are part of a quasispecies with a higher average growth rate. Mutational clouds as predicted by the quasispecies model have been observed in RNA viruses and in in vitro RNA replication. The mutation rate and the general fitness of the molecular sequences and their neighbors are crucial to the formation of a quasispecies. If the mutation rate is zero, there is no exchange by mutation, and each sequence is its own species. If the mutation rate is too high, exceeding what is known as the error threshold, the quasispecies will break down and be dispersed over the entire range of available sequences. Mathematical description A simple mathematical model for a quasispecies is as follows: let there be $S$ possible sequences and let there be $n_i$ organisms with sequence $i$. Let's say that each of these organisms asexually gives rise to $A_i$ offspring. Some are duplicates of their parent, having sequence $i$, but some are mutant and have some other sequence. Let the mutation rate $q_{ij}$ correspond to the probability that a $j$ type parent will produce an $i$ type organism. Then the expected fraction of offspring generated by $j$ type organisms that would be $i$ type organisms is $w_{ij} = A_j q_{ij}$, where $\sum_i q_{ij} = 1$. Then the total number of $i$-type organisms after the first round of reproduction, given as $n_i'$, is $n_i' = \sum_j w_{ij} n_j$. Sometimes a death rate term $D_i$ is included so that $w_{ij} = A_j q_{ij} - D_i \delta_{ij}$, where $\delta_{ij}$ is equal to 1 when $i = j$ and is zero otherwise. Note that the $n$-th generation can be found by just taking the $n$-th power of $W$ and substituting it in place of $W$ in the above formula. This is just a system of linear equations. The usual way to solve such a system is to first diagonalize the $W$ matrix. Its diagonal entries will be eigenvalues corresponding to certain linear combinations of the sequence types, the eigenvectors of $W$; in the long run, the population becomes dominated by the eigenvector with the largest eigenvalue, which describes the equilibrium quasispecies distribution.
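The selection-mutation balance these equations describe is easy to simulate numerically. Below is a minimal sketch with three sequence types; the replication rates and mutation matrix are made-up illustrative values, not data from any real system:

```python
import numpy as np

# Replication rates A_j and mutation probabilities q_ij (columns sum to 1).
A = np.array([2.0, 1.5, 1.0])           # type 0 replicates fastest
Q = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])      # q_ij: j-type parent -> i-type offspring

W = Q * A                               # w_ij = A_j * q_ij
n = np.array([1.0, 1.0, 1.0])           # initial abundances

for _ in range(100):                    # iterate n' = W n, renormalizing
    n = W @ n
    n /= n.sum()

print("quasispecies distribution:", n.round(3))
# The result is the leading eigenvector of W: a mutational cloud in which
# even the slower replicators persist at finite frequency.
```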
copies with the same properties. Instead, what matters is the connectedness of the cloud. For example, the sequence AGGT has 12 (3+3+3+3) possible single point mutants AGGA, AGGG, and so on. If 10 of those mutants are viable genotypes that may reproduce (and some of whose offspring or grandchildren may mutate back into AGGT again), we would consider that sequence a well-connected node in the cloud. If instead only two of those mutants are viable, the rest being lethal mutations, then that sequence is poorly connected and most of its descendants will not reproduce. The analog of fitness for a quasispecies is the tendency of nearby relatives within the cloud to be well-connected, meaning that more of the mutant descendants will be viable and give rise to further descendants within the cloud. When the fitness of a single genotype becomes meaningless because of the high rate of mutations, the cloud as a whole or quasispecies becomes the natural unit of selection. Application to biological research The quasispecies model represents the evolution of high-mutation-rate viruses such as HIV, and sometimes of single genes or molecules within the genomes of other organisms. Quasispecies models have also been proposed by Jose Fontanari and Emmanuel David Tannenbaum to model the evolution of sexual reproduction. Quasispecies behavior was also shown in compositional replicators (based on the Gard model for abiogenesis) and has also been suggested to apply to cell replication, which amongst other things requires the maintenance and evolution of the internal composition of the parent and bud. Formal background The model rests on four assumptions: The self-replicating entities can be represented as sequences composed of a small number of building blocks—for example, sequences of RNA consisting of the four bases adenine, guanine, cytosine, and uracil. New sequences enter the system solely as the result of a copy process, either correct or erroneous, of other sequences that are already present. The substrates, or raw materials, necessary for ongoing replication are always present in sufficient quantity. Excess sequences are washed away in an outgoing flux. Sequences may decay into their building blocks. The probability of decay does not depend on the sequences' age; old sequences are just as likely to decay as young sequences.
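The "12 (3+3+3+3)" count of single-point mutants quoted above for AGGT can be checked by direct enumeration, as in this short sketch:

```python
# Enumerate all single-point mutants of a sequence over the DNA alphabet.
BASES = "ACGT"

def single_point_mutants(seq: str) -> list[str]:
    """Each of the len(seq) positions can change to 3 other bases."""
    return [seq[:i] + b + seq[i + 1:]
            for i in range(len(seq))
            for b in BASES if b != seq[i]]

mutants = single_point_mutants("AGGT")
print(len(mutants), mutants[:2])  # 12 ['CGGT', 'GGGT']
```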
China were lost to the Russian Empire in the mid-19th century. Manchuria was originally separated from China proper by the Inner Willow Palisade, a ditch and embankment planted with willows intended to restrict the movement of the Han Chinese, as the area was off-limits to civilian Han Chinese until the government started colonizing the area, especially from the 1860s onward. With respect to these outer regions, the Qing maintained imperial control, with the emperor acting as Mongol khan, patron of Tibetan Buddhism and protector of Muslims. However, Qing policy changed with the establishment of Xinjiang province in 1884. During the Great Game era, taking advantage of the Dungan revolt in northwest China, Yaqub Beg invaded Xinjiang from Central Asia with support from the British Empire, and made himself the ruler of the kingdom of Kashgaria. The Qing court sent forces to defeat Yaqub Beg, Xinjiang was reconquered, and the political system of China proper was then formally applied to Xinjiang. The Kumul Khanate, which had been incorporated into the Qing empire as a vassal after helping the Qing defeat the Zunghars in 1757, maintained its status after Xinjiang became a province, and kept it through the end of the dynasty in the Xinhai Revolution and up until 1930. In the early 20th century, Britain sent an expeditionary force to Tibet and forced the Tibetans to sign a treaty. The Qing court responded by asserting Chinese sovereignty over Tibet, resulting in the 1906 Anglo-Chinese Convention signed between Britain and China. The British agreed not to annex Tibetan territory or to interfere in the administration of Tibet, while China undertook not to permit any other foreign state to interfere with the territory or internal administration of Tibet. Furthermore, similar to Xinjiang, which had been converted into a province earlier, the Qing government turned Manchuria into three provinces in the early 20th century, officially known as the "Three Northeast Provinces", and established the post of Viceroy of the Three Northeast Provinces to oversee them, bringing the total number of regional viceroys to nine. Society Population growth and mobility The most significant facts of early and mid-Qing social history were growth in population, population density, and mobility. The population in 1700, according to widely accepted estimates, was roughly 150 million, about what it had been under the late Ming a century before; it then doubled over the next century and reached a height of 450 million on the eve of the Taiping Rebellion in 1850. One reason for this growth was the spread of New World crops like peanuts, sweet potatoes, and potatoes, which helped to sustain the people during shortages of harvest for crops such as rice or wheat. These crops could be grown under harsher conditions and thus were cheaper as well, which led to their becoming staples for poorer farmers, decreasing the number of deaths from malnutrition. Diseases such as smallpox, widespread in the seventeenth century, were brought under control by an increase in inoculations. In addition, infant deaths were also greatly decreased due to improvements in birthing techniques and childcare performed by doctors and midwives and through an increase in medical books available to the public. Government campaigns decreased the incidence of infanticide. Unlike Europe, where population growth in this period was greatest in the cities, in China the growth in cities and the lower Yangzi was low.
The greatest growth was in the borderlands and the highlands, where farmers could clear large tracts of marshlands and forests. The population was also remarkably mobile, perhaps more so than at any time in Chinese history. Indeed, the Qing government did far more to encourage mobility than to discourage it. Millions of Han Chinese migrated to Yunnan and Guizhou in the 18th century, and also to Taiwan. After the conquests of the 1750s and 1760s, the court organized agricultural colonies in Xinjiang. Migration might be permanent, for resettlement, or the migrants (in theory at least) might regard the move as a temporary sojourn. The latter included an increasingly large and mobile workforce. Local-origin-based merchant groups also moved freely. This mobility also included the organized movement of Qing subjects overseas, largely to Southeast Asia, in search of trade and other economic opportunities. Despite this relative mobility, Manchuria was formally closed to Han settlement by the Willow Palisade, with the exception of some bannermen. Nonetheless, by 1780, Han Chinese had become 80% of the population of Manchuria even with the Manchu restrictions. The region was considered one of the last frontiers. The sparsely populated territory was vulnerable, and the Russian Empire annexed Outer Manchuria in the Amur Annexation. In response, Qing officials such as Tepuqin, the Military Governor of Heilongjiang in 1859–1867, made proposals in 1860 to open parts of Guandong to Chinese civilian farmer settlers in order to oppose further possible annexations. In the later 19th century, Manchuria was opened up to Han settlers, leading to a more extensive migration known as Chuang Guandong, literally "Crashing into Guandong", Guandong being an older name for Manchuria. Statuses in society According to statute, Qing society was divided into relatively closed estates, of which in most general terms there were five. Apart from the estates of the officials, the comparatively minuscule aristocracy, and the degree-holding literati, there also existed a major division among ordinary Chinese between commoners and people with inferior status. These were divided into two categories: the good "commoner" people, and the "mean" people, who were seen as debased and servile. The majority of the population belonged to the first category and were described as liangmin, a legal term meaning good people, as opposed to jianmin, meaning the mean (or ignoble) people. Qing law explicitly stated that the traditional four occupational groups of scholars, farmers, artisans and merchants were "good", that is, having the status of commoners. On the other hand, slaves or bondservants, entertainers (including prostitutes and actors), tattooed criminals, and the low-level employees of government officials were the "mean people". Mean people were legally inferior to commoners and suffered unequal treatment, such as being forbidden to take the imperial examination. Furthermore, such people were usually not allowed to marry free commoners and were even often required to acknowledge their abasement in society through actions such as bowing. However, throughout the Qing dynasty, the emperor and his court, as well as the bureaucracy, worked towards reducing the distinctions between the debased and the free, but even at the end of the era they had not completely succeeded in merging the two classifications.
Qing gentry Although there had been no powerful hereditary aristocracy since the Song dynasty, the gentry (shenshi), like their British counterparts, enjoyed imperial privileges and managed local affairs. The status of the scholar-official was defined by passing at least the first level of civil service examinations and holding a degree, which qualified him to hold imperial office, although he might not actually do so. The gentry member could legally wear gentry robes and could talk to officials as equals. Officials who had served for one or two terms could retire to enjoy the glory of their status. Informally, the gentry then presided over local society and could use their connections to influence the magistrate, acquire land, and maintain large households. The gentry thus included not only males holding degrees but also their wives, descendants, and some of their relatives. The gentry class was divided into groups. Not all who held office were literati, as merchant families could purchase degrees, and not all who passed the exams found employment as officials, since the number of degree-holders was greater than the number of openings. The gentry class also differed in the source and amount of their income. Literati families drew income from landholding, as well as from lending money. Officials, of course, drew a salary, which, as the years went by, was less and less adequate, leading to widespread reliance on "squeeze", irregular payments. Those who prepared for but failed the exams, like those who passed but were not appointed to office, could become tutors or teachers, private secretaries to sitting officials, administrators of guilds or temples, or take other positions that required literacy. Others turned to fields such as engineering, medicine, or law, which by the nineteenth century demanded specialized learning. By the nineteenth century, it was no longer shameful to become an author or publisher of fiction. The Qing gentry were marked as much by their aspiration to a cultured lifestyle as by their legal status. They lived more refined and comfortable lives than the commoners and used sedan-chairs to travel any significant distance. They often showed off their learning by collecting objects such as scholars' stones, porcelain or pieces of art for their beauty, which set them off from less cultivated commoners. Qing nobility Family and kinship By the Qing period, the building block of society was patrilineal kinship, that is, the local family lineage with descent through the male line, often translated as "clan". A shift in marital practices, identity and loyalty had begun during the Song dynasty, when the civil service examination began to replace nobility and inheritance as a means of gaining status. Instead of intermarrying within aristocratic elites of the same social status, families tended to form marital alliances with nearby families of the same or higher wealth and put local interests first and foremost, which helped to form intermarried townships. The Neo-Confucian ideology, especially the Cheng-Zhu thinking favored by Qing social thought, emphasised patrilineal families and genealogy in society. The emperors and local officials exhorted families to compile genealogies in order to stabilize local society.
The genealogy was placed in the ancestral hall, which served as the lineage's headquarters and a place for annual ancestral sacrifice. The genealogy recorded the lineage's history, biographies of respected ancestors, a chart of all the family members of each generation, rules for the members to follow, and often copies of title contracts for collective property as well. A specific Chinese character appeared in the given name of each male of each generation, often fixed well into the future. These lineages claimed to be based on biological descent, but when a member of a lineage gained office or became wealthy, he might use considerable creativity in selecting a prestigious figure to be "founding ancestor". Such worship was intended to ensure that the ancestors remain content and benevolent spirits (shen) who would keep watch over and protect the family. Later observers felt that the ancestral cult focused on the family and lineage, rather than on more public matters such as community and nation. Inner Mongols and Khalkha Mongols in the Qing rarely knew their ancestors beyond four generations, and Mongol tribal society was not organized among patrilineal clans, contrary to what was commonly thought, but included unrelated people at the base unit of organization. The Qing tried but failed to promote the Chinese Neo-Confucian ideology of organizing society along patrimonial clans among the Mongols. Religion Manchu rulers presided over a multi-ethnic empire, and the emperor, who was held responsible for "All Under Heaven" or Tian Xia, patronized and took responsibility for all religions and belief systems. The empire's "spiritual center of gravity" was the "religio-political state." Since the empire was part of the order of the cosmos, which conferred the Mandate of Heaven, the Emperor as "Son of Heaven" was both the head of the political system and the head priest of the State Cult. He united political and spiritual roles that in medieval Europe were separated into the roles of emperor and pope, and performed the imperial rites that ensured political order, prosperity, and social morality. The emperor and his officials, who were his personal representatives, took responsibility for all aspects of the empire, especially spiritual life and religious institutions and practices. The county magistrate, as the emperor's political and spiritual representative, made offerings at officially recognized temples, for instance those dedicated to the God of Walls and Moats (the so-called "City God") and local deified heroes. The magistrate lectured on the Emperor's Sacred Edict to promote civic morality; he kept close watch over religious organizations whose actions might threaten the sovereignty and religious prerogative of the state. The belief system most widely practiced among Han Chinese is often called local, popular, or folk religion, and was centered around the patriarchal family, the maintenance of the male family line, and shen, or spirits. Common practices included ancestor veneration, filial piety, and the worship of local gods and spirits. Rites included mourning, funeral, and burial practices. Since they did not require exclusive allegiance, forms and branches of Confucianism, Buddhism, and Daoism were intertwined, for instance in the syncretic Three teachings.
Chinese folk religion combined elements of the three, with local variations. County magistrates, who were graded and promoted on their ability to maintain local order, tolerated local sects and even patronized local temples as long as they were orderly, but were suspicious of heterodox sects that defied state authority and rejected imperial doctrines. Some of these sects indeed had long histories of rebellion, such as the Way of Former Heaven, which drew on Daoism, and the White Lotus society, which drew on millennial Buddhism. The White Lotus Rebellion (1796–1804) confirmed official suspicions, as did the Taiping Rebellion, which drew on millennial Christianity. Manchu and imperial religion The Manchu imperial family was especially attracted to the Yellow Sect or Gelug Buddhism that had spread from Tibet into Mongolia. The Fifth Dalai Lama, who had gained power in 1642, just before the Manchus took Beijing, looked to the Qing court for support. The Kangxi and Qianlong emperors practiced this form of Tibetan Buddhism as one of their household religions, built temples that made Beijing one of its centers, and constructed a replica of Lhasa's Potala Palace at their summer retreat in Rehe. Shamanism, the most common religion among Manchus, was a spiritual inheritance from their Tungusic ancestors that set them off from Han Chinese. State shamanism was important to the imperial family both to maintain their Manchu cultural identity and to promote their imperial legitimacy among tribes in the northeast. Imperial obligations included rituals on the first day of Chinese New Year at a shamanic shrine (tangse). Practices in Manchu families included sacrifices to the ancestors and the use of shamans, often women, who went into a trance to seek healing or exorcism. Christianity, Judaism, and Islam The Abrahamic religions had arrived from Western Asia as early as the Tang dynasty, but their insistence that they be practiced to the exclusion of other religions made them less adaptable than Buddhism, which had quickly been accepted as native. Islam predominated in the Central Asian areas of the empire, while Judaism and Christianity were practiced in well-established but self-contained communities. Several hundred Catholic missionaries arrived from the late Ming period until the proscription of Christianity in 1724. The Jesuits adapted to Chinese expectations, evangelized from the top down, adopted the robes and lifestyles of the literati, became proficient in the Confucian classics, and did not challenge Chinese moral values such as ancestor veneration. They proved their value to the early Manchu emperors with their work in gunnery, cartography, and astronomy, though they fell out of favor for a time until the Kangxi Emperor's 1692 edict of toleration. In the countryside, the newly arrived Dominican and Franciscan clerics established rural communities that adapted to local folk religious practices by emphasizing healing, festivals, and holy days rather than sacraments and doctrine. By the beginning of the eighteenth century, a spectrum of Christian believers had established communities. In 1724, the Yongzheng Emperor (1678–1735) declared Christianity a "heterodox teaching" and proscribed it. Because the European Catholic missionaries had kept control in their own hands and had not allowed the creation of a native clergy, however, after 1724 local communities were left to set their own rules and standards, and the number of Catholics actually grew more rapidly. 
In 1811, Christian religious activities were further criminalized by the Jiaqing Emperor (1760–1820). The imperial ban was lifted by treaty in 1846. The first Protestant missionary to China was Robert Morrison (1782–1834) of the London Missionary Society (LMS), who arrived at Canton on September 6, 1807. He completed a translation of the entire Bible in 1819. Liang Afa (1789–1855), a Morrison-trained Chinese convert, carried the evangelization mission into the Chinese interior. The two Opium Wars (1839–1860) marked the watershed of Protestant Christian missions. The 1842 Treaty of Nanjing, the American and French treaties signed in 1844, and the 1858 Treaty of Tianjin distinguished Christianity from the local religions and granted it protected status. Chinese popular cults, such as the White Lotus and the Eight Trigrams, presented themselves as Christian to share this protection. In the late 1840s Hong Xiuquan read Morrison's Chinese Bible as well as Liang Afa's evangelistic pamphlet, and announced to his followers that Christianity had in fact been the religion of ancient China before Confucius and his followers drove it out. He formed the Taiping Movement, which emerged in South China as a "collusion of the Chinese tradition of millenarian rebellion and Christian messianism", an "apocalyptic revolution, Christianity, and 'communist utopianism'". After 1860, enforcement of the treaties allowed missionaries to spread their evangelization efforts outside the treaty ports. Their presence created cultural and political opposition. Historian John K. Fairbank observed that "[t]o the scholar-gentry, Christian missionaries were foreign subversives, whose immoral conduct and teaching were backed by gunboats". In the following decades, there were some 800 conflicts between village Christians and non-Christians (jiao'an), mostly over non-religious issues such as land rights or local taxes, but religious conflict often lay behind such cases. In the summer of 1900, as foreign powers contemplated the division of China, village youths known as Boxers, who practiced Chinese martial arts and spiritual disciplines, reacted against Western power by attacking churches and murdering Chinese Christians and foreign missionaries in the Boxer Uprising. The imperialist powers once again invaded and imposed a substantial indemnity. The Beijing government reacted by implementing substantial fiscal and administrative reforms, but the defeat convinced many among the educated elites that popular religion was an obstacle to China's development as a modern nation, and some turned to Christianity as a spiritual tool with which to build one. By 1900, there were about 1,400 Catholic priests and nuns in China serving nearly 1 million Catholics. Over 3,000 Protestant missionaries were active among the 250,000 Protestant Christians in China. Western medical missionaries established clinics and hospitals and led medical training in China. Missionaries began establishing nurse training schools in the late 1880s, but the nursing of sick men by women was rejected by local tradition, so the number of students was small until the 1930s. Economy By the end of the 17th century, the Chinese economy had recovered from the devastation caused by the wars in which the Ming dynasty was overthrown and the resulting breakdown of order. In the following century, markets continued to expand as in the late Ming period, but with more trade between regions, a greater dependence on overseas markets, and a greatly increased population. 
By the end of the 18th century the population had risen to 300 million from approximately 150 million during the late Ming dynasty. The dramatic rise in population had several causes, including the long period of peace and stability in the 18th century and the introduction of new crops from the Americas, including peanuts, sweet potatoes, and maize. New varieties of rice from Southeast Asia led to a huge increase in production. Merchant guilds proliferated in all of the growing Chinese cities and often acquired great social and even political influence. Rich merchants with official connections built up huge fortunes and patronized literature, theater, and the arts. Textile and handicraft production boomed. The government broadened land ownership by returning land that had been sold to large landowners in the late Ming period by families unable to pay the land tax. To give people more incentive to participate in the market, the Qing reduced the tax burden in comparison with the late Ming and replaced the corvée system with a head tax used to hire laborers. The administration of the Grand Canal was made more efficient, and transport was opened to private merchants. A system of monitoring grain prices eliminated severe shortages and enabled the price of rice to rise slowly and smoothly through the 18th century. Wary of the power of wealthy merchants, Qing rulers limited their trading licenses and usually refused them permission to open new mines, except in poor areas. These restrictions on domestic resource exploitation, as well as on foreign trade, are held by some scholars to be a cause of the Great Divergence, by which the Western world overtook China economically. During the Ming–Qing period (1368–1911) the biggest development in the Chinese economy was its transition from a command to a market economy, the latter becoming increasingly pervasive throughout the Qing's rule. From roughly 1550 to 1800 China proper experienced a second commercial revolution, developing naturally from the first commercial revolution of the Song period, which had seen the emergence of long-distance inter-regional trade in luxury goods. During the second commercial revolution, for the first time, a large percentage of farming households began producing crops for sale in local and national markets rather than for their own consumption or for barter in the traditional economy. Surplus crops were placed on the national market for sale, integrating farmers into the commercial economy from the ground up. This naturally led to regions specializing in certain cash crops for export, as China's economy became increasingly reliant on inter-regional trade in bulk staple goods such as cotton, grain, beans, vegetable oils, forest products, animal products, and fertilizer. Silver Silver entered China in large quantities from mines in the New World after the Spanish conquered the Philippines in the 1570s. The re-opening of the southeast coast, which had been closed in the late 17th century, quickly revived trade, which expanded at 4% per annum throughout the latter part of the 18th century. China continued to export tea, silk, and manufactures, creating a large, favorable trade balance with the West. The resulting expansion of the money supply supported competitive and stable markets. During the mid-Ming, China had gradually shifted to silver as the standard currency for large-scale transactions, and by the late Kangxi reign the assessment and collection of the land tax was done in silver. 
Landlords began accepting rent payments only in silver rather than in crops, which in turn incentivized farmers to produce crops for sale in local and national markets rather than for their own consumption or for barter. Unlike the copper coins, qian or cash, used mainly for smaller transactions, silver was not reliably minted into a coin but was traded in units of weight: the liang or tael, which equaled roughly 1.3 ounces of silver. A third party had to be brought in to assess the weight and purity of the silver, resulting in an extra "meltage fee" added to the price of a transaction. Furthermore, since the "meltage fee" was unregulated until the reign of the Yongzheng emperor, it was the source of much corruption at each level of the bureaucracy. The Yongzheng emperor cracked down on the corrupt "meltage fees", legalizing and regulating them so that they could be collected as a tax, "returning meltage fees to the public coffer". From this newly increased public coffer, the Yongzheng emperor raised the salaries of the officials who collected the fees, further legitimizing silver as the standard currency of the Qing economy. Urbanization and the proliferation of market-towns The second commercial revolution also had a profound effect on the dispersion of the Qing populace. Up until the late Ming there had been a stark contrast between the rural countryside and the city metropoles, with very few mid-sized cities in between, because the extraction of surplus crops from the countryside was traditionally done by the state rather than by commercial organizations. However, as commercialization expanded exponentially in the late Ming and early Qing, mid-sized cities began to appear to direct the flow of domestic commercial trade. Some towns of this nature had such a large volume of trade and merchants flowing through them that they developed into full-fledged market-towns. Some of the more active market-towns even developed into small cities and became home to the new rising merchant class. The proliferation of these mid-sized cities was made possible only by advancements in long-distance transportation and methods of communication. As more and more Chinese were travelling the country to conduct trade, they increasingly found themselves far from home and in need of lodging; in response, guild halls expanded to house these merchants. Full-fledged trade guilds emerged, which, among other things, issued regulatory codes and price schedules and provided a place for travelling merchants to stay and conduct their business. Along with the huiguan trade guilds, guild halls dedicated to more specific professions, gongsuo, began to appear and to control commercial craft or artisanal industries such as carpentry, weaving, banking, and medicine. By the nineteenth century guild halls worked to transform urban areas into cosmopolitan, multicultural hubs, staged theatre performances open to the general public, developed real estate by pooling funds in the style of a trust, and in some cases facilitated the development of social services such as the maintenance of streets, water supply, and sewage facilities. Trade with the West In 1685 the Kangxi emperor legalized private maritime trade along the coast, establishing a series of customs stations
in major port cities. The customs station at Canton became by far the most active in foreign trade, and by the late Kangxi reign more than forty mercantile houses specializing in trade with the West had appeared. In 1725 the Yongzheng emperor consolidated these houses under a parent corporation known as the Cohong system. Firmly established by 1757, the Canton Cohong was an association of thirteen business firms that had been awarded exclusive rights to conduct trade with Western merchants in Canton. Until its abolition after the Opium War in 1842, the Canton Cohong system was the only permitted avenue of Western trade into China, and Canton thus became a booming hub of international trade. By the eighteenth century China's most significant export was tea. British demand for tea increased exponentially until the British learned to grow it themselves in the hills of northern India in the 1880s. By the end of the eighteenth century, tea exports passing through the Canton Cohong system amounted to one-tenth of Britain's tax revenue and nearly the entire revenue of the British East India Company, and until the early nineteenth century tea comprised ninety percent of exports leaving Canton. Science and technology Chinese scholars, court academies, and local officials carried on late Ming strengths in astronomy, mathematics, and geography, as well as in technologies of ceramics, metallurgy, water transport, and printing. Contrary to stereotypes in some Western writing, officials and literati of the late Ming and early Qing eagerly explored the technology and science introduced by Jesuit missionaries. Manchu leaders employed Jesuits to use cannon and gunpowder to great effect in the conquest of China, and the court sponsored their research in astronomy. The aim of these efforts, however, was to reform and improve inherited science and technology, not to replace it. 
Scientific knowledge advanced during the Qing, but there was no change in the way this knowledge was organized, in the way scientific evidence was defined, or in the way its truth was tested. Those who studied the physical universe shared their findings with each other and identified themselves as men of science, but they did not have a separate and independent professional role with its own training and advancement. They were still literati. The Opium Wars, however, demonstrated the power of the steam engine and of military technology that had only recently been put into practice in the West. During the Self-Strengthening Movement of the 1860s and 1870s, Confucian officials in several coastal provinces established an industrial base in military technology. The introduction of railroads into China raised questions that were more political than technological. A British company built the Shanghai–Woosung line in 1876, obtaining the land under false pretenses, and it was soon torn up. Court officials feared local public opinion and worried that railways would help invaders, harm farmland, and obstruct feng shui. To keep development in Chinese hands, the Qing government borrowed 34 billion taels of silver from foreign lenders for railway construction between 1894 and 1911. As late as 1900 only a modest amount of track was in operation, with more in the planning stage, though a considerable mileage of railway was eventually completed. After 1905 the British and French were finally able to open lines to Burma and Vietnam. By the 1830s Protestant missionaries had translated and printed Western science and medical textbooks. The textbooks found homes in the rapidly enlarging network of missionary schools and universities, and they opened new possibilities for the small number of Chinese students interested in science and the very small number interested in technology. After 1900, Japan had a greater role in bringing modern science and technology to Chinese audiences, but even then these reached chiefly the children of the rich landowning gentry, who seldom engaged in industrial careers. Arts and culture Under the Qing, inherited forms of art flourished and innovations occurred at many levels and in many types. High levels of literacy, a successful publishing industry, prosperous cities, and the Confucian emphasis on cultivation all fed a lively and creative set of cultural fields. By the end of the nineteenth century, national artistic and cultural worlds had begun to come to terms with the cosmopolitan culture of the West and Japan. The decision to stay within old forms or to welcome Western models was now a conscious choice rather than an unchallenged acceptance of tradition. Classically trained Confucian scholars such as Liang Qichao and Wang Guowei read widely and broke aesthetic and critical ground later cultivated in the New Culture Movement. Fine arts The Qing emperors were generally adept at poetry and often skilled in painting, and offered their patronage to Confucian culture. The Kangxi and Qianlong Emperors, for instance, embraced Chinese traditions both to control them and to proclaim their own legitimacy. The Kangxi Emperor sponsored the Peiwen Yunfu, a rhyme dictionary published in 1711, and the Kangxi Dictionary, published in 1716, which remains an authoritative reference to this day. The Qianlong Emperor sponsored the largest collection of writings in Chinese history, the Siku Quanshu, completed in 1782. 
Court painters made new versions of the Song masterpiece, Zhang Zeduan's Along the River During the Qingming Festival, whose depiction of a prosperous and happy realm demonstrated the beneficence of the emperor. The emperors undertook tours of the south and commissioned monumental scrolls to depict the grandeur of the occasion. Imperial patronage also encouraged the industrial production of ceramics and Chinese export porcelain. Peking glassware became popular after European glass-making processes were introduced to Beijing by Jesuits. Yet the most impressive aesthetic works were done among the scholars and urban elite. Calligraphy and painting remained a central interest of both court painters and the scholar-gentry, who considered the four arts part of their cultural identity and social standing. The painting of the early years of the dynasty included such painters as the orthodox Four Wangs and the individualists Bada Shanren (1626–1705) and Shitao (1641–1707). The nineteenth century saw such innovations as the Shanghai School and the Lingnan School, which used the technical skills of tradition to set the stage for modern painting. Traditional learning and literature Traditional learning flourished, especially among Ming loyalists such as Gu Yanwu, while scholars in the school of evidential learning, notably Dai Zhen, made innovations in skeptical textual scholarship. Scholar-bureaucrats, including Lin Zexu and Wei Yuan, developed a school of practical statecraft which rooted bureaucratic reform and restructuring in classical philosophy. Philosophy and literature grew to new heights in the Qing period. Poetry continued as a mark of the cultivated gentleman, but women wrote in larger and larger numbers and poets came from all walks of life. The poetry of the Qing dynasty is a lively field of research, being studied (along with the poetry of the Ming dynasty) for its association with Chinese opera, developmental trends in Classical Chinese poetry, the transition to a greater role for vernacular language, and poetry by women. The Qing dynasty was a period of literary editing and criticism, and many of the modern popular versions of Classical Chinese poems were transmitted through Qing dynasty anthologies such as the Quan Tangshi and the Three Hundred Tang Poems. Although fiction did not have the prestige of poetry, novels flourished. Pu Songling brought the short story to a new level in his Strange Tales from a Chinese Studio, published in the mid-18th century, and Shen Fu demonstrated the charm of the informal memoir in Six Chapters of a Floating Life, written in the early 19th century but published only in 1877. The art of the novel reached a pinnacle in Cao Xueqin's Dream of the Red Chamber, whose combination of social commentary and psychological insight was echoed in highly skilled novels such as Wu Jingzi's Rulin waishi (1750) and Li Ruzhen's Flowers in the Mirror (1827). Cuisine Cuisine aroused cultural pride in the richness of a long and varied past. Gentleman gourmets such as Yuan Mei applied aesthetic standards to the art of cooking, eating, and the appreciation of tea at a time when New World crops and products were entering everyday life. Yuan's Suiyuan Shidan expounded culinary aesthetics and theory, along with a range of recipes. The Manchu–Han Imperial Feast originated at the court. Although this banquet was probably never common, it reflected an appreciation of Manchu culinary customs. 
Nevertheless, culinary traditionalists such as Yuan Mei lambasted the opulence of the Manchu–Han Feast. Yuan wrote that the feast reflected in part the "vulgar habits of bad chefs" and that "displays this trite are useful only for welcoming new relations through one's gates or when the boss comes to visit". (皆惡廚陋習。只可用之於新親上門,上司入境) Historiography and memory Nationalism After 1912, writers, historians, and scholars in China and abroad generally deprecated the failures of the late imperial system. However, in the 21st century a favorable view has emerged in popular culture. Building pride in Chinese history, nationalists have portrayed Imperial China as benevolent, strong, and more advanced than the West. They blame ugly wars and diplomatic controversies on imperialist exploitation by Western nations and Japan. Although officially still communist and Maoist, in practice China's rulers have used this grassroots sentiment to proclaim that their current policies are restoring China's historical glory. Chinese Communist Party General Secretary Xi Jinping has sought parity between Beijing and Washington and has promised to restore China to its historical glory. New Qing History The New Qing History is a revisionist historiographical trend that began in the mid-1990s and emphasizes the Manchu nature of the dynasty. Earlier historians had emphasized the power of Han Chinese to "sinicize" their conquerors, that is, to assimilate them and make them Chinese in their thought and institutions. In the 1980s and early 1990s, American scholars began to learn Manchu and took advantage of newly opened Chinese- and Manchu-language documents in the archives. This research found that the Manchu rulers manipulated their subjects and that from the 1630s through at least the 18th century, the emperors developed a sense of Manchu identity and used Central Asian models of rule as much as they did Confucian ones. According to the new school, the Manchu ruling class regarded "China" as only a part, although a very important part, of a much wider empire that extended into the Inner Asian territories of Mongolia, Tibet, Manchuria, and Xinjiang. Ping-ti Ho criticized the new approach for exaggerating the Manchu character of the dynasty and argued for the sinification of its rule. Some scholars in China accused the American group of imposing American concerns with race and identity, or even of imperialist misunderstanding intended to weaken China. Still others in China agree that this scholarship has opened new vistas for the study of Qing history. The "New Qing History" school is not related to the similarly titled New Qing History, a multi-volume history of the Qing dynasty authorized by the Chinese State Council in 2003. 
It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes and the origin of the universe. Quantum mechanics and general relativity Graviton The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. Like the photon of the electromagnetic interaction, it would act as a force-carrying particle. Under mild assumptions, the structure of general relativity requires gravitons to follow the quantum mechanical description of interacting spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly. 
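The spin-2 description can be made concrete with the standard textbook linearization of general relativity; the sketch below uses conventional notation (metric signature and factors of c vary by source) and is not tied to any particular quantum gravity proposal. Writing the metric as a small perturbation of flat spacetime,

g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,

and defining the trace-reversed perturbation \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu}\,h^{\alpha}{}_{\alpha}, the Einstein field equations in the harmonic (de Donder) gauge reduce, in units with c = 1, to a wave equation sourced by the stress-energy tensor,

\Box \bar{h}_{\mu\nu} = -16\pi G\, T_{\mu\nu}.

The quantum of this massless field carries helicity ±2, which is the sense in which the graviton is called a spin-2 particle.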
Nonrenormalizability of gravity General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale. On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinitely many experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all. It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult; it is pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries. Quantum gravity as an effective field theory In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally. By treating general relativity as an effective field theory, one can make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. 
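To make that example concrete, one widely cited effective-field-theory calculation (Bjerrum-Bohr, Donoghue, and Holstein, 2003) gives the long-distance potential between two masses m_1 and m_2 separated by a distance r as

V(r) = -\frac{G m_{1} m_{2}}{r}\left[1 + 3\,\frac{G (m_{1}+m_{2})}{r c^{2}} + \frac{41}{10\pi}\,\frac{G\hbar}{r^{2} c^{3}}\right],

where the first correction is a classical post-Newtonian effect and the second is the genuine quantum correction, suppressed by (ℓ_P/r)^2, the squared ratio of the Planck length to the separation. The numerical coefficients were revised more than once in the literature, so they should be read as illustrating the method rather than as definitive values.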
Spacetime background dependence A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to state, this idea is difficult to absorb, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory String theory can be seen as a generalization of quantum field theory in which, instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to spacetime in a dynamical way. Although string theory had its origins in the study of quark confinement rather than of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory that may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence. Background independent theories Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of a background-independent quantum theory, but one with no local degrees of freedom and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks. 
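To illustrate the kind of discrete geometric prediction a background-independent approach can make, the standard loop-quantum-gravity result for the spectrum of the area operator of a surface S is

A_{S} = 8\pi\gamma\,\ell_{\mathrm{P}}^{2} \sum_{i} \sqrt{j_{i}(j_{i}+1)}, \qquad \ell_{\mathrm{P}}^{2} = \frac{G\hbar}{c^{3}},

where the sum runs over the spin-network edges puncturing S, each j_i is a half-integer spin label carried by an edge, and γ is the Barbero–Immirzi parameter, a free constant of the theory that must be fixed by other considerations.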
Semi-classical quantum gravity Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles). 
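The best-known such prediction can be stated as a worked formula: a Schwarzschild black hole of mass M radiates at the Hawking temperature

T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}} \approx 6\times 10^{-8}\,\mathrm{K}\times\frac{M_{\odot}}{M},

where k_B is Boltzmann's constant and M_⊙ the solar mass. A solar-mass black hole is thus far colder than the 2.7 K cosmic microwave background, which is one reason this radiation has never been observed directly.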
Quantum gravity as an effective field theory In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally. By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses (one commonly quoted form of the result is reproduced at the end of this passage). Spacetime background dependence A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is among the hardest ideas to internalize about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory String theory can be seen as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to space-time in a dynamical way. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory: it may show a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence. Background independent theories Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks. Semi-classical quantum gravity Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results.
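Returning to the effective-field-theory correction to the Newtonian potential mentioned above: one commonly quoted form of the result (the numerical coefficients were refined over several years of literature; the values below follow the later calculations of Donoghue and collaborators, and should be read as representative rather than definitive) is

    V(r) = −(G m₁ m₂ / r) [ 1 + 3 G (m₁ + m₂) / (r c²) + (41 / (10 π)) G ħ / (r² c³) ]

where the second term is a classical post-Newtonian correction and the third term, proportional to ħ, is the genuinely quantum correction; at any experimentally accessible distance it is far too small to measure.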
In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (indeed, the Unruh effect occurs even in flat Minkowskian backgrounds). The vacuum state is the state with the least energy (and may or may not contain particles). Problem of time A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and, moreover, requires the Hamiltonian constraint to vanish. Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, akin to the conception of time in quantum theory, at the macroscopic level. Candidate theories There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. String theory The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is a set of unusual features, such as six extra dimensions of space in addition to the usual three of space and one of time. In what is called the second superstring revolution, it was conjectured that both string theory
service in terms of call quality even without QoS mechanisms in use on the user's connection to their ISP and the VoIP provider's connection to a different ISP. Under high load conditions, however, VoIP may degrade to cell-phone quality or worse. The mathematics of packet traffic indicate that a network requires just 60% more raw capacity under conservative assumptions. IP and Ethernet efforts Unlike single-owner networks, the Internet is a series of exchange points interconnecting private networks. Hence the Internet's core is owned and managed by a number of different network service providers, not a single entity, and its behavior is much more unpredictable. There are two principal approaches to QoS in modern packet-switched IP networks: a parameterized system based on an exchange of application requirements with the network, and a prioritized system where each packet identifies a desired service level to the network. Integrated services ("IntServ") implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network. Differentiated services ("DiffServ") implements the prioritized model. DiffServ marks packets according to the type of service they desire. In response to these markings, routers and switches use various scheduling strategies to tailor performance to expectations. Differentiated services code point (DSCP) markings use the first 6 bits of the ToS field (now renamed the DS field) of the IPv4 packet header. Early work used the IntServ philosophy of reserving network resources, in which applications used RSVP to request and reserve resources through a network. While IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service provider, core routers would be required to accept, maintain, and tear down thousands or possibly tens of thousands of reservations. It was believed that this approach would not scale with the growth of the Internet, and in any event was antithetical to the end-to-end principle, the notion of designing networks so that core routers do little more than simply switch packets at the highest possible rates. Under DiffServ, packets are marked either by the traffic sources themselves or by the edge devices where the traffic enters the network. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. At the IP layer, DSCP markings use the 6-bit DS field in the IP packet header. At the MAC layer, VLAN IEEE 802.1Q can be used to carry essentially the same information in 3 bits. Routers and switches supporting DiffServ configure their network scheduler to use multiple queues for packets awaiting transmission from bandwidth-constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this behavior, including the number of queues supported, the relative priorities of queues, and the bandwidth reserved for each queue. In practice, when a packet must be forwarded from an interface with queuing, packets requiring low jitter (e.g., VoIP or videoconferencing) are given priority over packets in other queues. Typically, some bandwidth is allocated by default to network control packets (such as Internet Control Message Protocol and routing protocols), while best-effort traffic might simply be given whatever bandwidth is left over.
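Applications themselves can request a DiffServ marking. The following minimal Python sketch (the address, port, and choice of code point are illustrative; IP_TOS is honored on Linux-like systems but not everywhere) writes DSCP 46, conventionally the Expedited Forwarding per-hop behavior used for voice, into the upper six bits of the former ToS byte:

    import socket

    DSCP_EF = 46                 # Expedited Forwarding, commonly used for voice traffic
    tos = DSCP_EF << 2           # DSCP occupies the upper 6 bits of the old ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # mark all datagrams sent on this socket
    sock.sendto(b"voice frame", ("198.51.100.7", 5004))     # illustrative receiver address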
At the Media Access Control (MAC) layer, VLAN IEEE 802.1Q and IEEE 802.1p can be used to distinguish between Ethernet frames and classify them. Queueing theory models have been developed for the performance analysis of QoS mechanisms in MAC layer protocols. Cisco IOS NetFlow and the Cisco Class Based QoS (CBQoS) Management Information Base (MIB) are marketed by Cisco Systems. One compelling example of the need for QoS on the Internet relates to congestive collapse. The Internet relies on congestion avoidance protocols, primarily as built into the Transmission Control Protocol (TCP), to reduce traffic under conditions that would otherwise lead to congestive collapse. QoS applications, such as VoIP and IPTV, require largely constant bitrates and low latency; they therefore cannot use TCP and cannot otherwise reduce their traffic rate to help prevent congestion. Service-level agreements limit the traffic that can be offered to the Internet and thereby enforce traffic shaping that can prevent the network from becoming overloaded; they are hence an indispensable part of the Internet's ability to handle a mix of real-time and non-real-time traffic without collapse. Protocols Several QoS mechanisms and schemes exist for IP networking. The type of service (ToS) field in the IPv4 header (now superseded by DiffServ) Differentiated services (DiffServ) Integrated services (IntServ) Resource Reservation Protocol (RSVP) RSVP-TE QoS capabilities are available in the following network technologies. Multiprotocol Label Switching (MPLS) provides eight QoS classes Frame Relay X.25 Some DSL modems Asynchronous transfer mode (ATM) Ethernet supporting IEEE 802.1Q with Audio Video Bridging and Time-Sensitive Networking Wi-Fi supporting IEEE 802.11e HomePNA home networking over coax and phone wires The G.hn home networking standard provides QoS by means of contention-free transmission opportunities (CFTXOPs), which are allocated to flows that require QoS and that have negotiated a contract with the network controller. G.hn also supports non-QoS operation by means of contention-based time slots. End-to-end quality of service End-to-end quality of service can require a method of coordinating resource allocation between one autonomous system and another. The Internet Engineering Task Force (IETF) defined the Resource Reservation Protocol (RSVP) for bandwidth reservation as a proposed standard in 1997. RSVP is an end-to-end bandwidth reservation and admission control protocol, but it was not widely adopted due to scalability limitations. The more scalable traffic engineering version, RSVP-TE, is used in many networks to establish traffic-engineered Multiprotocol Label Switching (MPLS) label-switched paths. The IETF also defined Next Steps in Signaling (NSIS), with QoS signalling as a target; NSIS is a development and simplification of RSVP. Research consortia such as "end-to-end quality of service support over heterogeneous networks" (EuQoS, from 2004 through 2007) and fora such as the IPsphere Forum developed more mechanisms for handshaking QoS invocation from one domain to the next. IPsphere defined the Service Structuring Stratum (SSS) signaling bus in order to establish, invoke and (attempt to) assure network services. EuQoS conducted experiments to integrate Session Initiation Protocol, Next Steps in Signaling and IPsphere's SSS with an estimated cost of about 15.6 million Euro and published a book.
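The traffic shaping that such service-level agreements imply is commonly implemented with a token-bucket algorithm. A minimal Python sketch (the rate, burst size, and packet size below are illustrative, not drawn from any particular standard):

    import time

    class TokenBucket:
        # Conforming packets drain tokens; tokens refill at a fixed rate up to a burst cap.
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True    # conforms to the contract: forward
            return False       # exceeds the contract: drop, delay, or re-mark

    shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # 1 Mbit/s, 10 kB burst
    print(shaper.allow(1500))  # a full-size Ethernet frame: True while tokens remain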
A research project, Multi Service Access Everywhere (MUSE), defined another QoS concept in a first phase from January 2004 through February 2006, and a second phase from January 2006 through 2007. Another research project named PlaNetS was proposed for European funding circa 2005. A broader European project called "Architecture and design for the future Internet", known as 4WARD, had a budget estimated at 23.4 million Euro and was funded from January 2008 through June 2010. It included a "Quality of Service Theme" and published a book. Another European project, called WIDENS (Wireless Deployable Network System), proposed a bandwidth reservation approach for mobile wireless multirate ad hoc networks. Limitations Strong cryptography network protocols such as Secure Sockets Layer, I2P, and virtual private networks obscure the data transferred using them. As all electronic commerce on the Internet requires the use of such strong cryptography protocols, unilaterally downgrading the performance of encrypted traffic creates an unacceptable hazard for customers. Yet, encrypted traffic is otherwise unable to undergo deep packet inspection for QoS. Protocols like ICA and RDP may encapsulate other traffic (e.g., printing, video streaming) with varying requirements, which can make optimization difficult. The Internet2 project found, in 2001, that the QoS protocols were probably not deployable inside its Abilene Network with equipment available at that time. The group predicted that "logistical, financial, and organizational barriers will block the way toward any bandwidth guarantees" by protocol modifications aimed at QoS. They believed that the economics would encourage network providers to deliberately erode the quality of best-effort traffic as a way to push customers to higher-priced QoS services. Instead they proposed over-provisioning of capacity as more cost-effective at the time. The Abilene network study was the basis for the testimony of Gary Bachula to the US Senate Commerce Committee's hearing on Network Neutrality in early 2006. He expressed the opinion that adding more bandwidth was more effective than any of the various schemes for accomplishing QoS they examined. Bachula's testimony has been cited by proponents of a law banning quality of service as proof that no legitimate purpose is served by such an offering. This argument depends on the assumptions that over-provisioning is not a form of QoS and that it is always possible; cost and other factors affect the ability of carriers to build and maintain permanently over-provisioned networks. Mobile (cellular) QoS Mobile cellular service providers may offer mobile QoS to customers just as wired public switched telephone network service providers and Internet service providers may offer QoS. QoS mechanisms are always provided for circuit-switched services, and are essential for inelastic services, for example streaming multimedia. Mobility adds complications to QoS mechanisms. A phone call or other session may be interrupted after a handover if the new base station is overloaded. Unpredictable handovers make it impossible to give an absolute QoS guarantee during the session initiation phase. Standards Quality of service in the field of telephony was first defined in 1994 in ITU-T Recommendation E.800. This definition is very broad, listing six primary components: Support, Operability, Accessibility, Retainability, Integrity and Security. In 1998 the ITU published a document discussing QoS in the field of data networking.
X.641 offers a means of developing or enhancing standards related to QoS and provides concepts and terminology that should assist in maintaining the consistency of related standards. Some QoS-related IETF Requests for Comments (RFCs) are , and ; both of these are discussed above. The IETF has also published two RFCs giving background on QoS: , and . The IETF has also published as an informative or best-practices document about the practical aspects of designing a QoS solution for a DiffServ network. The document tries to identify applications commonly run over an IP network, groups them into traffic classes, studies the treatment required by these classes
condition known as orthogonality or quadrature. The transmitted signal is created by adding the two carrier waves together. At the receiver, the two waves can be coherently separated (demodulated) because of their orthogonality. Another key property is that the modulations are low-frequency/low-bandwidth waveforms compared to the carrier frequency, which is known as the narrowband assumption. Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as special cases of QAM, where the amplitude of the transmitted signal is a constant but its phase varies. This can also be extended to frequency modulation (FM) and frequency-shift keying (FSK), as these can be regarded as special cases of phase modulation. QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel. QAM is being used in optical fiber systems as bit rates increase; QAM16 and QAM64 can be optically emulated with a 3-path interferometer. Demodulation of QAM In a QAM signal, one carrier lags the other by 90°, and its amplitude modulation is customarily referred to as the in-phase component, denoted by I(t). The other modulating function is the quadrature component, Q(t). So the composite waveform is mathematically modeled as s(t) = I(t) cos(2πf₀t) − Q(t) sin(2πf₀t), or equivalently s(t) = Re{[I(t) + i Q(t)] e^(i2πf₀t)}, where f₀ is the carrier frequency. At the receiver, a coherent demodulator multiplies the received signal separately with both a cosine and sine signal to produce the received estimates of I(t) and Q(t). For example, multiplying by the cosine carrier gives r(t) = s(t) cos(2πf₀t). Using standard trigonometric identities, we can write this as r(t) = ½I(t) + ½[I(t) cos(4πf₀t) − Q(t) sin(4πf₀t)]. Low-pass filtering removes the high-frequency terms (those containing 4πf₀t), leaving only the ½I(t) term. This filtered signal is unaffected by Q(t), showing that the in-phase component can be received independently of the quadrature component. Similarly, we can multiply s(t) by a sine wave and then low-pass filter to extract Q(t). The addition of two sinusoids is a linear operation that creates no new frequency components, so the bandwidth of the composite signal is comparable to the bandwidth of the DSB (double-sideband) components. Effectively, the spectral redundancy of DSB enables a doubling of the information capacity using this technique. This comes at the expense of demodulation complexity. In particular, a DSB signal has zero-crossings at a regular frequency, which makes it easy to recover the phase of the carrier sinusoid; it is said to be self-clocking. But the sender and receiver of a quadrature-modulated signal must share a clock or otherwise send a clock signal. If the clock phases drift apart, the demodulated I and Q signals bleed into each other, yielding crosstalk. In this context, the clock signal is called a "phase reference". Clock synchronization is typically achieved by transmitting a burst subcarrier or a pilot signal. The phase reference for NTSC, for example, is included within its colorburst signal. Analog QAM is used in: NTSC and PAL analog color television systems, where the I- and Q-signals carry the components of chroma (colour) information (the QAM carrier phase is recovered from a special colorburst transmitted at the beginning of each scan line); and C-QUAM ("Compatible QAM"), used in AM stereo radio to carry the stereo difference information. Fourier analysis of QAM In the frequency domain, QAM has a similar spectral pattern to DSB-SC modulation.
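The demodulation procedure described above translates directly into a few lines of NumPy. In this sketch the carrier, sample rate, and baseband messages are arbitrary illustrative choices; mixing the composite signal with the two carrier phases and low-pass filtering recovers I(t) and Q(t) independently:

    import numpy as np

    fc, fs = 1000.0, 48000.0                 # carrier and sample rate, Hz (illustrative)
    t = np.arange(0, 0.05, 1/fs)             # 50 ms of signal
    I = np.cos(2*np.pi*30*t)                 # narrowband messages, far below
    Q = np.sin(2*np.pi*45*t)                 # the carrier frequency

    s = I*np.cos(2*np.pi*fc*t) - Q*np.sin(2*np.pi*fc*t)   # composite QAM waveform

    def lowpass(x, cutoff=200.0, taps=1001):
        # Windowed-sinc FIR low-pass filter.
        n = np.arange(taps) - (taps - 1)/2
        h = np.sinc(2*cutoff/fs*n) * np.hamming(taps)
        return np.convolve(x, h/h.sum(), mode="same")

    I_hat = 2*lowpass(s*np.cos(2*np.pi*fc*t))    # mix down and filter: recovers I(t)
    Q_hat = -2*lowpass(s*np.sin(2*np.pi*fc*t))   # recovers Q(t)

    mid = slice(2000, -2000)                     # ignore filter edge effects
    print(np.max(np.abs(I[mid] - I_hat[mid])), np.max(np.abs(Q[mid] - Q_hat[mid])))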
Applying Euler's formula to the sinusoids in the expression for s(t) above, the positive-frequency portion of s(t) (its analytic representation) is s₊(t) = ½[I(t) + i Q(t)] e^(i2πf₀t), whose Fourier transform is S₊(f) = ½[Î(f − f₀) + i Q̂(f − f₀)], where Î and Q̂ denote the Fourier transforms of I(t) and Q(t). This result represents the sum of two DSB-SC signals with the same center frequency. The factor of i (= e^(iπ/2)) represents the 90° phase shift that enables their individual demodulations. Digital QAM As in many digital modulation schemes, the constellation diagram is useful for QAM. In QAM, the
Meridiem), indicates medication should be taken every morning Quantum analog of AM complexity class - see QMA Queer Azaadi Mumbai, LGBT pride march held in the Indian city of Mumbai See also Qaem, refers to two completely separate Iranian weapons:
azhdarchid species. Q. lawsoni was found to be a valid taxon in 2021, and confirmed to belong to the same genus as Q. northropi. An azhdarchid neck vertebra, discovered in 2002 from the Maastrichtian-age Hell Creek Formation, may also belong to Quetzalcoatlus. The specimen (BMR P2002.2) was recovered accidentally when it was included in a field jacket prepared to transport part of a Tyrannosaurus specimen. Despite this association with the remains of a large carnivorous dinosaur, the vertebra shows no evidence that it was chewed on by the dinosaur. The bone came from an individual azhdarchid pterosaur estimated to have had a wingspan of . Description Size When it was first named as a new species in 1975, scientists estimated that the largest Quetzalcoatlus fossils came from an individual with a wingspan as large as . Three extrapolations from the proportions of other pterosaurs gave estimates of 11 m, 15.5 m, and 21 m (36 ft, 50.85 ft, and 68.9 ft), and the middle value was chosen. In 1981, further advanced studies lowered these estimates to . More recent estimates based on greater knowledge of azhdarchid proportions place its wingspan at . Remains found in Texas in 1971 indicate that this pterosaur had a minimum wingspan of about . Generalized height in a bipedal stance, based on its wingspan, would have been at least high at the shoulder. Weight estimates for giant azhdarchids are extremely problematic because no living species shares a similar size or body plan, and in consequence published results vary widely. Generalized weight, based on some studies that have historically found extremely low weight estimates for Quetzalcoatlus, was as low as for a individual. A majority of estimates published since the 2000s have been substantially higher, around . Skull Skull material from Q. lawsoni shows that Quetzalcoatlus had a very sharp and pointed beak. That is contrary to some earlier reconstructions that showed a blunter snout, which were based on the inadvertent inclusion of jaw material from another pterosaur species; this material was named as the holotype of a genus of short-snouted azhdarchid, Wellnhopterus, in 2021. A skull crest was also present, but its exact form and size are still unknown. Classification Below is a cladogram showing the phylogenetic placement of Quetzalcoatlus within Neoazhdarchia from Andres and Myers (2013). Paleobiology Quetzalcoatlus was abundant in Texas during the Lancian in a fauna dominated by Alamosaurus. The Alamosaurus-Quetzalcoatlus association probably represents semi-arid inland plains. Quetzalcoatlus had precursors in North America, and its apparent rise to widespread occurrence may represent the expansion of its preferred habitat rather than an immigration event, as some experts have suggested. It co-existed with the thalassodromine Javelinadactylus, as well as an additional pterosaur taxon, suggesting a relatively high diversity of Late Cretaceous pterosaur genera. Feeding There have been a number of different ideas proposed about the lifestyle of Quetzalcoatlus. Because the area of the fossil site was removed from the coastline and there were no indications of large rivers or deep lakes nearby at the end of the Cretaceous, Lawson in 1975 rejected a fish-eating lifestyle, instead suggesting that Quetzalcoatlus scavenged like the marabou stork (which will scavenge, but is more of a terrestrial predator of small animals), but on the carcasses of titanosaur sauropods such as Alamosaurus.
Lawson had found the remains of the giant pterosaur while searching for the bones of this dinosaur, which formed an important part of its ecosystem. In 1996, Lehman and Langston rejected the scavenging hypothesis, pointing out that the lower jaw bent so strongly downwards that even when it closed completely a gap of over remained between it and the upper jaw, very
different from the hooked beaks of specialized scavenging birds. They suggested that with its long neck vertebrae and long toothless jaws Quetzalcoatlus fed like modern-day skimmers, catching fish during flight while cleaving the waves with its beak. While this skim-feeding view became widely accepted, it was not subjected to scientific research until 2007, when a study showed that for such large pterosaurs it was not a viable method because the energy costs would be too high due to excessive drag. In 2008 pterosaur workers Mark Witton and Darren Naish published an examination of the possible feeding habits and ecology of azhdarchids. Witton and Naish noted that most azhdarchid remains are found in inland deposits far from the seas or other large bodies of water required for skimming. Additionally, the beak, jaw, and neck anatomy are unlike those of any known skimming animal. Rather, they concluded that azhdarchids were more likely terrestrial stalkers, similar to modern storks, and probably hunted small vertebrates on land or in small streams. Though Quetzalcoatlus, like other pterosaurs, was a quadruped when on the ground, Quetzalcoatlus and other azhdarchids have fore and hind limb proportions more similar to those of modern running ungulate mammals than to their smaller cousins, implying that they were uniquely suited to a terrestrial lifestyle. Flight The nature of flight in Quetzalcoatlus and other giant azhdarchids was poorly understood until serious biomechanical studies were conducted in the 21st century. One early (1984) experiment by Paul MacCready used practical aerodynamics to test the flight of Quetzalcoatlus. MacCready constructed a model flying machine or ornithopter with a simple computer functioning as an autopilot. The model successfully flew with a combination of soaring and wing flapping; however, it was based on a then-current weight estimate of around , far lower than more modern estimates of over . The method of flight in these pterosaurs depends largely on weight, which has been controversial, and widely differing masses have been favored by different scientists. Some researchers have suggested that these animals employed slow, soaring flight, while others have concluded that their flight was fast and dynamic. In 2010, Donald Henderson argued that the mass of Q.
northropi had been underestimated, even by the highest estimates, and that it was too massive to have achieved powered flight. He estimated it in his 2010 paper as . Henderson argued that it may have been flightless. Other flight capability estimates have disagreed with Henderson's research, suggesting instead an animal superbly adapted to long-range, extended flight. In 2010, Mike Habib, a professor of biomechanics at Chatham University, and Mark Witton, a British paleontologist, undertook further investigation into the claims of flightlessness in large pterosaurs. After factoring in wingspan, body weight, and aerodynamics, computer modeling led the two researchers to conclude that Q. northropi was capable of flight up to for 7 to 10 days at altitudes of . Habib further suggested a maximum flight range of for Q. northropi. Henderson's work was also criticized by Witton and Habib in another study, which pointed out that although Henderson used excellent mass estimations, they were based on outdated pterosaur models; this caused Henderson's mass estimates to be more than double the values Habib used in his own calculations. Anatomical study of the forelimbs of Q. northropi and other big pterosaurs also showed a higher degree of robustness than would be expected if they were purely quadrupedal. This study proposed that large pterosaurs most likely utilized a short burst of powered flight and then transitioned to thermal soaring. Studies of Q. northropi and Q. lawsoni published in 2021 by Kevin Padian and colleagues instead suggested that Quetzalcoatlus was a powerful flier with a large breastbone to support the muscles needed to create the flight stroke, and that it would probably have used its powerful hind legs to launch as high as when taking off, allowing it to gain enough height and momentum to begin the downstrokes needed for takeoff. The same study also suggests that Quetzalcoatlus had limited walking motion in its wings, with the limbs on each side of the body being moved together and the forelimbs being lifted out of the way of the hindlimbs. It further suggests that the wings of pterosaurs were attached only to the body, with the legs and feet being tucked underneath in flight, much as modern birds tuck their legs beneath their own bodies. Cultural significance In 1975, artist Giovanni Casselli depicted Quetzalcoatlus as a small-headed scavenger with an extremely long neck in the book The Evolution and Ecology of the Dinosaurs by British paleontologist Beverly Halstead. Over the next twenty-five years, prior to later discoveries, it spawned similar depictions, colloquially known as a "paleomeme", in various books, as noted by Darren Naish. In 1985, the US Defense Advanced Research Projects Agency (DARPA) and AeroVironment used Quetzalcoatlus northropi as the basis for an experimental ornithopter unmanned aerial vehicle (UAV). They produced a half-scale model weighing , with a wingspan of . Coincidentally, Douglas A. Lawson, who discovered Q. northropi in Texas in 1971, had named it after John "Jack" Northrop, a developer of tailless flying wing aircraft in the 1940s. The replica of Q. northropi incorporates a "flight control system/autopilot which processes pilot commands and sensor inputs, implements several feedback loops, and delivers command signals to its various servo-actuators". It is on exhibit at the National Air and Space Museum. In 2010, several life-sized models of Q.
northropi were put on display on London's South Bank as the centerpiece exhibit for the Royal Society's 350th-anniversary exhibition. The models, which included both flying and standing individuals with wingspans of over , were intended to help build public interest in science. The models were
death in 936, his widow Saint Matilda founded a religious community for women (Frauenstift) on the castle hill, where daughters of the higher nobility were educated. The main task of this collegiate foundation, Quedlinburg Abbey, was to pray for the memory of King Henry and the rulers who came after him. The Annals of Quedlinburg were also compiled there. The first abbess was Matilda, a granddaughter of King Henry and St. Matilda. The Quedlinburg castle complex, founded by King Henry I and built up by Emperor Otto I in 936, was an imperial Pfalz of the Saxon emperors. The Pfalz, including the male convent, was in the valley, where today the Roman Catholic Church of St. Wiperti is situated, while the women's convent was located on the castle hill. In 973, shortly before the death of Emperor Otto I, a Reichstag (Imperial Convention) was held at the imperial court, at which Mieszko, duke of the Polans, and Boleslav, duke of Bohemia, as well as numerous other nobles from as far away as Byzantium and Bulgaria, gathered to pay homage to the emperor. On the occasion, Otto the Great introduced his new daughter-in-law Theophanu, a Byzantine princess whose marriage to Otto II brought hope for recognition and continued peace between the rulers of the Eastern and Western empires. In 994, Otto III granted the rights of market, tax, and coinage, and established the first market place to the north of the castle hill. The town became a member of the Hanseatic League in 1426. Quedlinburg Abbey frequently disputed the independence of the town, which sought the aid of the Bishopric of Halberstadt. In 1477, Abbess Hedwig, aided by her brothers Ernest and Albert, broke the resistance of the town and expelled the bishop's forces. Quedlinburg was forced to leave the Hanseatic League and was subsequently protected by the Electorate of Saxony. Both town and abbey converted to Lutheranism in 1539 during the Protestant Reformation. In 1697, Elector Frederick Augustus I of Saxony sold his rights to Quedlinburg to Elector Frederick III of Brandenburg for 240,000 thalers. Quedlinburg Abbey contested Brandenburg-Prussia's claims throughout the 18th century, however. The abbey was secularized in 1802 during the German Mediatisation, and Quedlinburg passed to the Kingdom of Prussia as part of the Principality of Quedlinburg. Part of the Napoleonic Kingdom of Westphalia from 1807 to 1813, it was included within the new Prussian Province of Saxony in 1815. In all this time, ladies ruled Quedlinburg as abbesses without "taking the veil"; they were free to marry. The last of these ladies was a Swedish princess and early fighter for women's rights, Sofia Albertina. During the Nazi regime, the memory of Henry I became a sort of cult, as Heinrich Himmler saw himself as the reincarnation of the "most German of all German" rulers. The collegiate church and castle were to be turned into a shrine for Nazi Germany, as the Nazi Party tried to create a new religion. The church was closed from 1938 and throughout the war. The local crematory was kept busy burning the victims of the Langenstein-Zwieberge concentration camp. Georg Ay was the local party chief from 1931 until the end of the war. American occupation during the last months of World War II brought back the Protestant bishop and the church bells, and the Nazi-style eagle was removed from the tower. Decades later, in the 1980s, upon the death of one of the US military men, the theft of medieval art from Quedlinburg came to light.
Quedlinburg was administered within Bezirk Halle while part of Communist East Germany from 1949 to 1990; it became part of the state of Saxony-Anhalt upon German reunification in 1990. During Quedlinburg's Communist era, restoration specialists from Poland were called in during the 1980s to carry out repairs on the old architecture. Today, Quedlinburg is a center of restoration of Fachwerk houses. Quedlinburg is
the setting for the acclaimed 2016 film Frantz, serving as a quintessential small German town in the wake of World War I, home to a family reeling from the death of a son in the war. Geography Location The town is located north of the Harz mountains, about 123 m above NHN. The nearest mountains reach 181 m above NHN. The largest part of the town is located in the western part of the Bode river valley. This river comes from the Harz mountains and flows into the river Saale, a tributary of the river Elbe. The municipal area of Quedlinburg is . Before the incorporation of the two (previously independent) municipalities of Gernrode and Bad Suderode in January 2014, it was only . Divisions The town Quedlinburg consists of Quedlinburg proper and the following Ortsteile or municipal divisions: Bad Suderode Gernrode Gersdorfer Burg Morgenrot Münchenhof Quarmbeck Neighbouring communities Climate Quedlinburg has an oceanic climate (Cfb) resulting from prevailing westerlies blowing from the high-pressure area in the central Atlantic towards Scandinavia. Snowfall occurs almost every winter. January and February are the coldest months of the year, with average temperatures of 0.5 °C and 1.5 °C, respectively. July and August are the hottest months, with average temperatures of 17 °C (63 °F) and 18 °C (64 °F), respectively. The average annual precipitation is close to 438 mm, with rain usually occurring from May to September. This is among the lowest precipitation in Germany, whose national annual average is roughly 700 mm. In August 2010, Quedlinburg was the driest place in Germany, with only 72.4 L/m². Demographics Governance The mayor is Frank Ruch (CDU), elected in 2015. Town twinning Quedlinburg is twinned with: Aulnoye-Aymeries, France, since 1961 Herford, Germany, since 1991 Celle, Germany, since 1991 Hameln, Germany, since 1991 Hann. Münden, Germany, since 1991 Attractions In the centre of the town is a wide selection of half-timbered buildings from at least five different centuries (including a 14th-century structure, one of Germany's oldest), while around the outer fringes of the old town are examples of Jugendstil buildings dating from the late 19th and early 20th centuries. The old town of Quedlinburg, with a size of around 90 hectares, is among the largest in Germany; some 2,000 half-timbered houses can be found here. The oldest, the "Ständerbau", dates from 1347. Another famous building is the "Klopstockhaus", the birthplace of the poet Friedrich Gottlieb Klopstock. Since December 1994, the old town of Quedlinburg and the castle mount with the Stiftskirche (collegiate church) have been listed as one of UNESCO's World Heritage Sites. Quedlinburg is one of the best-preserved medieval and Renaissance towns in Europe, having escaped major damage in World War II. In 2006, the Selke valley branch of the Harz Narrow Gauge Railways was extended to Quedlinburg from Gernrode, giving access to the historic steam narrow-gauge railway, Alexisbad and the high Harz plateau. The castle and Stiftskirche St. Servatius still dominate the town as they did in the early Middle Ages. The church is a prime example of the German Romanesque style. The treasure of the church, containing ancient Christian religious artifacts and books, was stolen by an American soldier but brought back to Quedlinburg in 1993 and is again on display here. The former Stiftskirche St. Wiperti was established in 936 when the Kanonikerstift St.
Wigpertus (of male canons) was moved from the castle hill to make way for what became Quedlinburg Abbey. The church was built at the location of the first Ottonian royal palace at Quedlinburg. Around 1020, a three-aisled crypt was added to the basilica. The crypt, which survived all later alterations to the church, is also a designated stop on the Romanesque Road today. Infrastructure Transport Air The nearest airports to Quedlinburg are Hannover, northwest, and Leipzig/Halle Airport, southeast. Much closer, but only served by a few airlines, is Magdeburg-Cochstedt. An airfield for general aviation is located at Ballenstedt-Assmussstedt. Railway Regional trains operated by Deutsche Bahn and the private Transdev company run on the standard-gauge Magdeburg–Thale line connecting Quedlinburg station with Magdeburg, Thale, and Halberstadt. In 2006, the Selke Valley branch of the Harz Narrow Gauge Railways was extended into Quedlinburg from Gernrode, giving access via the historic steam-operated narrow-gauge railway to Alexisbad and the High Harz plateau. Bus Quedlinburg is connected by regional buses to the surrounding villages and small towns. Additionally, there are long-distance buses to Berlin. Notable people Andreas Werckmeister (1645–1706), German theorist, organist, organ examiner and composer Dorothea Erxleben (1715–1762), the first female medical doctor in Germany Friedrich Gottlieb Klopstock (1724–1803), German poet and contemporary of Johann Wolfgang von Goethe Johann Gerhard (1582–1637), theologian, a main representative of Lutheran orthodoxy Wilhelm Homberg (1652–1715), naturalist, apparently born during a journey in Batavia (Jakarta) while his parents lived in Quedlinburg Johann Christoph Friedrich GutsMuths (1759–1839), father of German gymnastics Carl Ritter (1779–1859), founder of scientific geography Julius Wolff (1834–1910), poet and writer Gustav Albert Schwalbe (1844–1916), anatomist and anthropologist Carl Schroeder (1848–1935), cellist, composer, conductor and Hofkapellmeister Georg Ay (1900–1997), politician (NSDAP), member of the Reichstag 1933–1945 Fritz Grasshoff (1913–1997), poet, painter, pop lyricist Bernhard Schrader (1931–2012), chemist, pioneer of experimental Raman and infrared spectroscopy Peter Kramer (born 1933),
numbers) to a discrete set (such as the integers). The term quantization may refer to: Signal processing Quantization (signal processing) Quantization (image processing) Color quantization Quantization (music) Physics Quantization (physics) Canonical quantization Geometric quantization Discrete spectrum, or otherwise discrete quantity Spatial quantization
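In the signal-processing sense, for instance, a uniform quantizer maps each real input to the nearest point of an evenly spaced grid; a minimal Python illustration (the step size is arbitrary):

    def quantize(x, step=0.25):
        # Uniform mid-tread quantizer: round to the nearest multiple of step.
        return step * round(x / step)

    print(quantize(0.93))   # -> 1.0
    print(quantize(-0.40))  # -> -0.5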
field of physics Old quantum theory, predating modern quantum mechanics Quantum field theory, an area of quantum mechanics that includes: Quantum electrodynamics Quantum chromodynamics Electroweak interaction Quantum gravity, a field of theoretical physics Quantum optics Quantum chemistry Quantum information Quantum Theory: Concepts and Methods,
limit their transmitted RF output power to 5 Watts or less, regardless of mode, be it CW or SSB operation. Reliable two-way communication at such low power levels can be challenging due to changing radio propagation and the difficulty of receiving the relatively weak transmitted signals. QRP enthusiasts may employ optimized antenna systems, enhanced operating skills, and a variety of special modes in order to maximize their ability to make and maintain radio contact. Since the late 1960s, commercial transceivers specially designed for QRP operation have evolved from vacuum tube to solid state technology. A number of organizations dedicated to QRP operation exist, and aficionados participate in various contests designed to test their skill in making long-distance contacts at low power levels. Etymology The term "QRP" derives from the standard Q code used in radio communication, where QRP is used to request "Reduce power" and QRP? is used to ask "Should I reduce power?". Philosophy Most amateur transceivers are capable of transmitting approximately 100 Watts, but in some parts of the world, such as the U.S., amateurs can transmit up to 1,500 Watts. QRP enthusiasts contend that this practice is rarely necessary, and that doing so wastes power, increases the likelihood of causing interference to nearby televisions, radios, and telephones and, for United States amateurs, is contrary to the FCC Part 97 rule, which states that one must use "the minimum power necessary to carry out the desired communications". QRP can also be used for emergency communications during disaster recovery, when frugal use of available battery power and generator fuel is crucial. Practice The practice of operating with low power was popularized as early as 1924, with a variety of reports, editorials, and articles published in U.S. amateur radio magazines and journals that encouraged amateurs to lower power output, both for purposes of experimentation and for improving operating conditions by reducing interference. There is not complete agreement on what constitutes QRP power. Most amateur organizations agree that for CW, AM, FM, and data modes, the transmitter output power should be 5 Watts or less. The maximum output power for SSB (single sideband) is not always agreed upon: some believe that the power should be no more than 10 Watts peak envelope power (PEP), while others strongly hold that the limit should be 5 Watts. QRPers are known to regularly use less than 5 Watts, sometimes operating with as little as 100 milliwatts or even less. Extremely low power (1 Watt and below) is often referred to by hobbyists as "QRPP". Communicating using QRP can be difficult, since the QRPer must face the same challenges of radio propagation faced by amateurs using higher power levels, but with the inherent disadvantages associated with having a weaker signal on the receiving end, all other things being equal. QRP aficionados try to make up for this through more efficient antenna systems and enhanced operating skills. Weak signal modes QRP enthusiasts may use special modes that employ technology and software designed to enhance reception of the relatively weak transmitted signals resulting from low power levels. QRSS: Very slow speed Morse code QRSS uses very slow speed CW (Morse code) to compensate for the decreased signal-to-noise ratio involved in QRP operation.
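The gap between QRP and conventional power levels is smaller than the raw wattage suggests, because received signal strength is logarithmic in power. A short Python check (using the common, though not universal, convention of about 6 dB per S-unit, so 13 dB is roughly two S-units):

    from math import log10

    def db_ratio(p1_watts, p2_watts):
        # Ratio of two power levels expressed in decibels.
        return 10 * log10(p1_watts / p2_watts)

    print(db_ratio(100, 5))    # typical 100 W transceiver vs 5 W QRP: ~13.0 dB
    print(db_ratio(1500, 5))   # 1,500 W legal limit vs 5 W QRP: ~24.8 dB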
Quantum chromodynamics (QCD) is the theory of the strong interaction between quarks and gluons.
Daltrey "Quicksilver" (instrumental), 1969 instrumental from Pink Floyd album Soundtrack from the Film More Shortened name form of Quicksilver Messenger Service, used by the band on their later record covers "Quicksilver Girl" (song), Steve Miller Band, 1968 album Sailor Film and television Quicksilver (U.S. game show), a short-lived US game show Quicksilver (Irish game show), an Irish quiz show Quicksilver (film), a 1986 film Fictional entities Quicksilver (Marvel Comics), a superhero in the Marvel Comics universe Quicksilver (DC Comics), or Max Mercury, a superhero in the DC Comics universe Quicksilver, leader of the SilverHawks on the 1986 animated television series SilverHawks Quicksilver, a fictional synthetic hormone in The Invisible Man Other arts Quicksilver (novel) by Neal Stephenson, first volume of The Baroque Cycle Aircraft Eipper Quicksilver Quicksilver GT500 USS Quicksilver (SP-281), US Navy patrol vessel Computing QuickSilver (project), a software research project at Cornell University Quicksilver (software), an open source application launcher for Mac OS
Comics universe Quicksilver, leader of the SilverHawks on the 1986 animated television series SilverHawks Quicksilver, a fictional synthetic hormone in The Invisible Man Other arts Quicksilver (novel) by Neal Stephenson, first volume of The Baroque Cycle Aircraft Eipper Quicksilver Quicksilver GT500 USS Quicksilver (SP-281), US Navy patrol vessel Computing QuickSilver (project), a software research project at Cornell University Quicksilver (software), an open source application launcher for Mac OS X QuickSilver, Broadvision desktop publishing software, formerly by Interleaf Organizations Quicksilver Manufacturing, an aircraft company Quicksilver (ISP), a New Zealand internet service provider Entertainment Quicksilver (company), UK's largest amusement arcade company Quiksilver, surf and sports related apparel brand Quicksilver Software, computer-games company Las
and baritone saxophone or (SATB). Often a second alto may be substituted for the soprano part (AATB) or a bass saxophone may be substituted for the baritone. Vocal quartet Compositions for four singers have been written for quartets a cappella; accompanied by instruments, such as a piano; and accompanied by larger vocal forces, such as a choir. Brahms and Schubert wrote numerous pieces for four voices that were once popular in private salons, although they are seldom performed today. Vocal quartets also feature within larger classical compositions, such as opera, choral works, and symphonic compositions. The final movement of Beethoven's Ninth Symphony and the Verdi Requiem are two examples of renowned concert works that include vocal quartets. Typically, a vocal quartet is composed of: Soprano, alto (or mezzo-soprano), tenor, and bass (or baritone), for mixed ensembles; or 1st tenor, 2nd tenor, baritone, and bass, for male groups; or 1st soprano, 2nd soprano, mezzo-soprano, and contralto, for female groups; or Tenor, lead, baritone, and bass, for barbershop style (both male and female). Baroque quartet The baroque quartet is a form of music composition similar to the trio sonata, but with four music parts performed by three solo melodic instruments and basso continuo. The solo instruments could be strings or wind instruments. Examples of baroque quartets are Telemann's Paris quartets. Jazz Quartets are popular in jazz and jazz fusion music. Jazz quartet ensembles are often composed of a horn, classically clarinet (or saxophone, trumpet, etc.), a chordal instrument (e.g., electric guitar, piano, Hammond organ, vibraphone etc.), a bass instrument (e.g., double bass, tuba or bass guitar) and a drum kit. This configuration is sometimes modified by using a second horn replacing the chordal instrument, such as a trumpet and saxophone with string bass and drum kit, or by using two chordal instruments (e.g., piano and electric guitar). Popular music Rock and pop The quartet lineup also is very common in pop and rock music. A standard quartet formation in pop and rock music is an ensemble consisting of two electric guitars, a bass guitar, and a drum kit. This configuration is sometimes modified by using a keyboard instrument (e.g., organ, piano, synthesizer) or a soloing instrument (e.g., saxophone) in place of the second electric guitar. Vocal quartet In 20th century Western popular music, the term "vocal quartet" usually refers to ensembles of four singers of the same gender. This is particularly common for barbershop quartets and Gospel quartets. Some well-known female US vocal quartets include The Carter Sisters; The Forester Sisters; The Chiffons; The Chordettes; The Lennon Sisters; and En Vogue. Some well-known male US vocal quartets include The Oak Ridge Boys; The Statler Brothers; The Ames Brothers; The Chi-Lites; Crosby Stills Nash & Young; The Dixie Hummingbirds; The Four Aces; Four Freshmen; The Four Seasons; The Four Tops; The Statesmen Quartet; The Blackwood Brothers; Cathedral Quartet; Ernie Haase and Signature Sound; The Golden Gate Quartet; The Hilltoppers; The Jordanaires; and Mills Brothers. The only known U.S. drag quartet is The Kinsey Sicks. Some mixed-gender vocal quartets include The Pied Pipers; The Mamas & the Papas; The Merry Macs; and The Weavers. Folk music Russian A Russian folk-instrument quartet commonly consists of a bayan, a prima balalaika, a prima or alto domra, and a contrabass balalaika (e.g., Quartet Moskovskaya Balalaika). Configurations without a bayan include a prima domra, a prima balalaika, an alto domra, and a bass balalaika (Quartet Skaz); or two prima domras, a prima balalaika, and a bass balalaika.
Entanglement measures assign a numerical value to each quantum state, but it is often interesting to settle for a coarser way to compare quantum states. This gives rise to different classification schemes. Most entanglement classes are defined based on whether states can be converted to other states using LOCC or a subclass of these operations. The smaller the set of allowed operations, the finer the classification. Important examples are: If two states can be transformed into each other by a local unitary operation, they are said to be in the same LU class. This is the finest of the usually considered classes. Two states in the same LU class have the same value for entanglement measures and the same value as a resource in the distant-labs setting. There is an infinite number of different LU classes (even in the simplest case of two qubits in a pure state). If two states can be transformed into each other by local operations including measurements with probability larger than 0, they are said to be in the same 'SLOCC class' ("stochastic LOCC"). Qualitatively, two states $\rho_1$ and $\rho_2$ in the same SLOCC class are equally powerful, since each can be transformed into the other and whatever one allows can then be done with the other; but since the transformations $\rho_1 \to \rho_2$ and $\rho_2 \to \rho_1$ may succeed with different probability, they are no longer equally valuable. E.g., for two pure qubits there are only two SLOCC classes: the entangled states (a class which contains both the maximally entangled Bell states and weakly entangled states such as $\cos\theta\,|00\rangle + \sin\theta\,|11\rangle$ for small $\theta$) and the separable ones (i.e., product states such as $|00\rangle$). Instead of considering transformations of single copies of a state (like $\rho_1 \to \rho_2$) one can define classes based on the possibility of multi-copy transformations. E.g., there are examples where $\rho_1 \to \rho_2$ is impossible by LOCC, but $\rho_1 \otimes \rho_1 \to \rho_2$ is possible. A very important (and very coarse) classification is based on whether it is possible to transform an arbitrarily large number of copies of a state into at least one pure entangled state. States that have this property are called distillable. These states are the most useful quantum states since, given enough of them, they can be transformed (with local operations) into any entangled state and hence allow for all possible uses. It came initially as a surprise that not all entangled states are distillable; those that are not are called 'bound entangled'. A different entanglement classification is based on what the quantum correlations present in a state allow A and B to do: one distinguishes three subsets of entangled states: (1) the non-local states, which produce correlations that cannot be explained by a local hidden variable model and thus violate a Bell inequality, (2) the steerable states, which contain sufficient correlations for A to modify ("steer") by local measurements the conditional reduced state of B in such a way that A can prove to B that the state they possess is indeed entangled, and finally (3) those entangled states that are neither non-local nor steerable. All three sets are non-empty. Entropy In this section, the entropy of a mixed state is discussed, as well as how it can be viewed as a measure of quantum entanglement. Definition In classical information theory, the Shannon entropy $H$ is associated to a probability distribution $p_1, \ldots, p_n$ in the following way: $H(p_1, \ldots, p_n) = -\sum_i p_i \log_2 p_i$. Since a mixed state $\rho$ is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy: $S(\rho) = -\operatorname{Tr}(\rho \log_2 \rho)$. In general, one uses the Borel functional calculus to calculate a non-polynomial function such as $\log_2 \rho$.
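The two-qubit SLOCC dichotomy described above can be made concrete numerically. A minimal sketch (Python with NumPy; the coefficient-matrix convention and the function name are assumptions of the example) uses the fact that for a pure two-qubit state with coefficient matrix $c$, the concurrence $2|\det c|$ is nonzero exactly for the entangled class:

```python
import numpy as np

def pure_two_qubit_entangled(c: np.ndarray, tol: float = 1e-12) -> bool:
    """SLOCC class of a pure two-qubit state |psi> = sum_ij c[i,j] |i>|j>.

    For two qubits there are exactly two SLOCC classes of pure states:
    product states (det c == 0) and entangled states (det c != 0).
    """
    c = c / np.linalg.norm(c)              # normalise the state
    return abs(np.linalg.det(c)) > tol     # 2|det c| is the concurrence

bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)         # (|00> + |11>)/sqrt(2)
weak = np.array([[np.sqrt(0.99), 0], [0, 0.1]])        # weakly entangled
prod = np.outer([1, 0], [1, 1]) / np.sqrt(2)           # |0>(|0> + |1>)/sqrt(2)

print(pure_two_qubit_entangled(bell))  # True  (maximally entangled)
print(pure_two_qubit_entangled(weak))  # True  (same SLOCC class as Bell)
print(pure_two_qubit_entangled(prod))  # False (separable class)
```

Note that the weakly entangled state and the Bell state land in the same SLOCC class even though they differ greatly in entanglement content, which is exactly the coarseness described above.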
If the nonnegative operator $\rho$ acts on a finite-dimensional Hilbert space and has eigenvalues $\lambda_1, \ldots, \lambda_n$, then $\log_2 \rho$ turns out to be nothing more than the operator with the same eigenvectors but the eigenvalues $\log_2 \lambda_1, \ldots, \log_2 \lambda_n$. The entropy is then $S(\rho) = -\sum_i \lambda_i \log_2 \lambda_i$. Since an event of probability 0 should not contribute to the entropy, and given that $\lim_{p \to 0} p \log p = 0$, the convention $0 \log 0 = 0$ is adopted. This extends to the infinite-dimensional case as well: if $\rho$ has a spectral resolution, we assume the same convention when calculating the entropy. As in statistical mechanics, the more uncertainty (number of microstates) the system should possess, the larger the entropy. For example, the entropy of any pure state is zero, which is unsurprising since there is no uncertainty about a system in a pure state. The entropy of either of the two subsystems of the entangled state discussed above is $\log_2 2 = 1$ (which can be shown to be the maximum entropy for $2 \times 2$ mixed states). As a measure of entanglement Entropy provides one tool that can be used to quantify entanglement, although other entanglement measures exist. If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of the reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure. It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution $\{1/n, \ldots, 1/n\}$. Therefore, a bipartite pure state is said to be a maximally entangled state if the reduced state of each of its subsystems is the diagonal matrix $\operatorname{diag}(1/n, \ldots, 1/n)$. For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure. As an aside, the information-theoretic definition is closely related to entropy in the sense of statistical mechanics (comparing the two definitions in the present context, it is customary to set the Boltzmann constant $k = 1$). For example, by properties of the Borel functional calculus, we see that for any unitary operator $U$, $S(\rho) = S(U \rho U^*)$. Indeed, without this property, the von Neumann entropy would not be well-defined. In particular, $U$ could be the time evolution operator of the system, i.e., $U(t) = \exp(-iHt/\hbar)$, where $H$ is the Hamiltonian of the system. Here the entropy is unchanged. The reversibility of a process is associated with the resulting entropy change, i.e., a process is reversible if, and only if, it leaves the entropy of the system invariant. On this view, the march of the arrow of time towards thermodynamic equilibrium is simply the growing spread of quantum entanglement. This provides a connection between quantum information theory and thermodynamics. Rényi entropy also can be used as a measure of entanglement. Entanglement measures Entanglement measures quantify the amount of entanglement in a (often viewed as bipartite) quantum state. As aforementioned, entanglement entropy is the standard measure of entanglement for pure states (but no longer a measure of entanglement for mixed states). For mixed states, there are a number of entanglement measures in the literature and no single one is standard: entanglement cost; distillable entanglement; entanglement of formation; relative entropy of entanglement; squashed entanglement; and logarithmic negativity. Most (but not all) of these entanglement measures reduce for pure states to the entanglement entropy, and are difficult (NP-hard) to compute.
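Unlike the mixed-state measures just listed, the von Neumann entropy itself is easy to evaluate numerically from the eigenvalues. A small sketch (Python with NumPy, using base-2 logarithms and the $0 \log 0 = 0$ convention stated above):

```python
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues,
    with the convention 0 * log(0) = 0."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]             # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# A pure state has zero entropy ...
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
print(von_neumann_entropy(pure))             # 0.0

# ... while the reduced state I/2 of a Bell pair is maximally mixed:
maximally_mixed = np.eye(2) / 2
print(von_neumann_entropy(maximally_mixed))  # 1.0, i.e. log2(2)
```

The maximally mixed qubit state $I/2$, which is the reduced state of a Bell pair, indeed attains the maximum value of one bit.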
Quantum field theory The Reeh–Schlieder theorem of quantum field theory is sometimes seen as an analogue of quantum entanglement. Applications Entanglement has many applications in quantum information theory. With the aid of entanglement, otherwise impossible tasks may be achieved. Among the best-known applications of entanglement are superdense coding and quantum teleportation. Most researchers believe that entanglement is necessary to realize quantum computing (although this is disputed by some). Entanglement is used in some protocols of quantum cryptography, but proving the security of QKD under standard assumptions does not require entanglement. However, the device-independent security of QKD is shown by exploiting entanglement between the communication partners. Entangled states There are several canonical entangled states that appear often in theory and experiments. For two qubits, the Bell states are $|\Phi^\pm\rangle = \frac{1}{\sqrt{2}}(|00\rangle \pm |11\rangle)$ and $|\Psi^\pm\rangle = \frac{1}{\sqrt{2}}(|01\rangle \pm |10\rangle)$. These four pure states are all maximally entangled (according to the entropy of entanglement) and form an orthonormal basis of the Hilbert space of the two qubits. They play a fundamental role in Bell's theorem. For $M > 2$ qubits, the GHZ state is $|\mathrm{GHZ}\rangle = \frac{|0\rangle^{\otimes M} + |1\rangle^{\otimes M}}{\sqrt{2}}$, which reduces to the Bell state $|\Phi^+\rangle$ for $M = 2$. The traditional GHZ state was defined for $M = 3$. GHZ states are occasionally extended to qudits, i.e., systems of $d$ rather than 2 dimensions. Also for $M > 2$ qubits, there are spin squeezed states, a class of squeezed coherent states satisfying certain restrictions on the uncertainty of spin measurements, which are necessarily entangled. Spin squeezed states are good candidates for enhancing precision measurements using quantum entanglement. For two bosonic modes, a NOON state is $\frac{|N\rangle_a |0\rangle_b + |0\rangle_a |N\rangle_b}{\sqrt{2}}$. This is like a Bell state except the basis kets 0 and 1 have been replaced with "the $N$ photons are in one mode" and "the $N$ photons are in the other mode". Finally, there also exist twin Fock states for bosonic modes, which can be created by feeding a Fock state into two arms leading to a beam splitter. They are sums of multiple NOON states and can be used to achieve the Heisenberg limit. For appropriately chosen measures of entanglement, Bell, GHZ, and NOON states are maximally entangled while spin squeezed and twin Fock states are only partially entangled. The partially entangled states are generally easier to prepare experimentally. Methods of creating entanglement Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarisation. Other methods include the use of a fiber coupler to confine and mix photons, photons emitted from the decay cascade of the bi-exciton in a quantum dot, and the use of the Hong–Ou–Mandel effect. In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades. It is also possible to create entanglement between quantum systems that have never directly interacted, through the use of entanglement swapping. Two independently prepared, identical particles may also be entangled if their wave functions merely overlap spatially, at least partially. Testing a system for entanglement A density matrix $\rho$ is called separable if it can be written as a convex sum of product states, namely $\rho = \sum_j p_j \rho_j^{(A)} \otimes \rho_j^{(B)}$ with probabilities $p_j \geq 0$, $\sum_j p_j = 1$. By definition, a state is entangled if it is not separable.
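The canonical qubit states listed above are easy to write down explicitly as state vectors. A short sketch (NumPy, with the usual computational-basis ordering, which is an assumption of the example) constructs the four Bell states and the $M$-qubit GHZ state, and checks the orthonormal-basis claim:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The four Bell states of two qubits:
phi_plus  = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
phi_minus = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)
psi_plus  = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# They form an orthonormal basis of the two-qubit Hilbert space:
basis = np.column_stack([phi_plus, phi_minus, psi_plus, psi_minus])
assert np.allclose(basis.T @ basis, np.eye(4))

def ghz(m: int) -> np.ndarray:
    """GHZ state (|0...0> + |1...1>)/sqrt(2) on m qubits; m = 2 is a Bell state."""
    state = np.zeros(2 ** m)
    state[0] = state[-1] = 1 / np.sqrt(2)
    return state

# The GHZ state reduces to |Phi+> for M = 2, as stated above:
assert np.allclose(ghz(2), phi_plus)
```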
For two-qubit and qubit–qutrit systems ($2 \times 2$ and $2 \times 3$ respectively) the simple Peres–Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus also for detecting entanglement. However, for the general case, the criterion is merely a necessary one for separability, as the problem becomes NP-hard when generalized. Other separability criteria include (but are not limited to) the range criterion, the reduction criterion, and those based on uncertainty relations. See Ref. for a review of separability criteria in discrete-variable systems and Ref. for a review of techniques and challenges in experimental entanglement certification in discrete-variable systems. A numerical approach to the problem is suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper "Geometrical aspects of entanglement". Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking whether the target state can indeed be reached. An implementation of the algorithm (including built-in Peres–Horodecki criterion testing) is the "StateSeparator" web-app. In continuous-variable systems, the Peres–Horodecki criterion also applies. Specifically, Simon formulated a particular version of the Peres–Horodecki criterion in terms of the second-order moments of canonical operators and showed that it is necessary and sufficient for $1 \oplus 1$-mode Gaussian states (see Ref. for a seemingly different but essentially equivalent approach). It was later found that Simon's condition is also necessary and sufficient for $1 \oplus n$-mode Gaussian states, but no longer sufficient for $2 \oplus 2$-mode Gaussian states. Simon's condition can be generalized by taking into account the higher order moments of canonical operators or by using entropic measures. In 2016 China launched the world's first quantum communications satellite: the $100m Quantum Experiments at Space Scale (QUESS) mission lifted off on August 16, 2016, from the Jiuquan Satellite Launch Center in northern China at 01:40 local time. For the following two years the craft, nicknamed "Micius" after the ancient Chinese philosopher, was to demonstrate the feasibility of quantum communication between Earth and space, and to test quantum entanglement over unprecedented distances. In the June 16, 2017, issue of Science, Yin et al. report setting a new quantum entanglement distance record of 1,203 km, demonstrating the survival of a two-photon pair and a violation of a Bell inequality, reaching a CHSH value of 2.37 ± 0.09, under strict Einstein locality conditions, from the Micius satellite to bases in Lijiang, Yunnan and Delingha, Qinghai, increasing the efficiency of transmission over prior fiber-optic experiments by an order of magnitude. Naturally entangled systems The electron shells of multi-electron atoms always consist of entangled electrons. The correct ionization energy can be calculated only by taking electron entanglement into account. Photosynthesis It has been suggested that in the process of photosynthesis, entanglement is involved in the transfer of energy between light-harvesting complexes and photosynthetic reaction centers, where the energy of each absorbed photon is harvested in the form of chemical energy.
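For the simplest $2 \times 2$ case, the Peres–Horodecki test discussed above is a few lines of linear algebra. A minimal sketch (NumPy; the row-major reshape/transpose index convention is an assumption of the example):

```python
import numpy as np

def partial_transpose_b(rho: np.ndarray) -> np.ndarray:
    """Partial transpose over subsystem B of a two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                   # indices (i, a, j, b) = |i a><j b|
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap a <-> b

def ppt_entangled(rho: np.ndarray, tol: float = 1e-12) -> bool:
    """Peres-Horodecki test: in 2x2 (and 2x3) systems, a negative
    eigenvalue of the partial transpose is equivalent to entanglement."""
    return np.linalg.eigvalsh(partial_transpose_b(rho)).min() < -tol

# Singlet state (|01> - |10>)/sqrt(2): maximally entangled.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(ppt_entangled(np.outer(psi, psi)))   # True (minimum eigenvalue is -1/2)

# Maximally mixed state I/4: separable.
print(ppt_entangled(np.eye(4) / 4))        # False
```

For larger systems the same test remains a necessary condition for separability, but, as noted above, it is no longer sufficient.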
Even if the two measurements are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated. The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time – they are incompatible, in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations, and thus entanglement is a fundamentally non-classical phenomenon. Notable experimental results proving quantum entanglement The first experiment to verify Einstein's "spooky action at a distance", i.e. entanglement, was carried out in a laboratory by Chien-Shiung Wu and her colleague I. Shaknov in 1949, and published on New Year's Day 1950. The result specifically proved the quantum correlations of a pair of photons. In experiments in 2012 and 2013, polarization correlation was created between photons that never coexisted in time. The authors claimed that this result was achieved by entanglement swapping between two pairs of entangled photons after measuring the polarization of one photon of the early pair, and that it proves that quantum non-locality applies not only to space but also to time. In three independent experiments in 2013 it was shown that classically communicated separable quantum states can be used to carry entangled states. The first loophole-free Bell test was performed at TU Delft in 2015, confirming the violation of a Bell inequality. In August 2014, the Brazilian researcher Gabriela Barreto Lemos and her team were able to "take pictures" of objects using photons that had not interacted with the subjects, but were entangled with photons that did interact with such objects. Lemos, from the University of Vienna, is confident that this new quantum imaging technique could find application where low-light imaging is imperative, in fields such as biological or medical imaging. Since 2016, various companies, such as IBM and Microsoft, have created quantum computers and allowed developers and tech enthusiasts to openly experiment with concepts of quantum mechanics, including quantum entanglement. Mystery of time There have been suggestions to look at the concept of time as an emergent phenomenon that is a side effect of quantum entanglement. In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks, or of any objects usable as clocks) into the same history. This was first fully theorized by Don Page and William Wootters in 1983. The Wheeler–DeWitt equation, which combines general relativity and quantum mechanics by leaving out time altogether, was introduced in the 1960s and taken up again in 1983, when Page and Wootters proposed a solution based on quantum entanglement. Page and Wootters argued that entanglement can be used to measure time. Emergent gravity Based on the AdS/CFT correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live in the boundary of the spacetime. Induced gravity can emerge from the entanglement first law.
Non-locality and entanglement In the media and popular science, quantum non-locality is often portrayed as being equivalent to entanglement. While this is true for pure bipartite quantum states, in general entanglement is only necessary for non-local correlations: there exist mixed entangled states that do not produce such correlations. A well-known example is the Werner states, which are entangled for certain values of the mixing parameter but can always be described using local hidden variables. Moreover, it was shown that, for arbitrary numbers of parties, there exist states that are genuinely entangled but admit a local model. The mentioned proofs about the existence of local models assume that there is only one copy of the quantum state available at a time. If the parties are allowed to perform local measurements on many copies of such states, then many apparently local states (e.g., the qubit Werner states) can no longer be described by a local model. This is, in particular, true for all distillable states. However, it remains an open question whether all entangled states become non-local given sufficiently many copies. In short, entanglement of a state shared by two parties is necessary but not sufficient for that state to be non-local. It is important to recognize that entanglement is more commonly viewed as an algebraic concept, noted for being a prerequisite to non-locality as well as to quantum teleportation and to superdense coding, whereas non-locality is defined according to experimental statistics and is much more involved with the foundations and interpretations of quantum mechanics. Quantum mechanical framework The following subsections are for those with a good working knowledge of the formal, mathematical description of quantum mechanics, including familiarity with the formalism and theoretical framework developed in the articles bra–ket notation and mathematical formulation of quantum mechanics. Pure states Consider two arbitrary quantum systems $A$ and $B$, with respective Hilbert spaces $H_A$ and $H_B$. The Hilbert space of the composite system is the tensor product $H_A \otimes H_B$. If the first system is in state $|\psi\rangle_A$ and the second in state $|\phi\rangle_B$, the state of the composite system is $|\psi\rangle_A \otimes |\phi\rangle_B$. States of the composite system that can be represented in this form are called separable states, or product states. Not all states are separable states (and thus product states). Fix a basis $\{|i\rangle_A\}$ for $H_A$ and a basis $\{|j\rangle_B\}$ for $H_B$. The most general state in $H_A \otimes H_B$ is of the form $|\psi\rangle_{AB} = \sum_{i,j} c_{ij}\, |i\rangle_A \otimes |j\rangle_B$. This state is separable if there exist vectors $[c^A_i]$, $[c^B_j]$ so that $c_{ij} = c^A_i c^B_j$, yielding $|\psi\rangle_A = \sum_i c^A_i |i\rangle_A$ and $|\phi\rangle_B = \sum_j c^B_j |j\rangle_B$. It is inseparable if, for any vectors $[c^A_i]$, $[c^B_j]$, at least for one pair of coordinates we have $c_{ij} \neq c^A_i c^B_j$. If a state is inseparable, it is called an 'entangled state'. For example, given two basis vectors $\{|0\rangle_A, |1\rangle_A\}$ of $H_A$ and two basis vectors $\{|0\rangle_B, |1\rangle_B\}$ of $H_B$, the following is an entangled state: $\frac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B\right)$. If the composite system is in this state, it is impossible to attribute to either system $A$ or system $B$ a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". This has specific empirical ramifications for interferometry. The above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the $H_A \otimes H_B$ space, but which cannot be separated into pure states of each $H_A$ and $H_B$).
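Separability of a pure bipartite state can be checked mechanically from the coefficient matrix $c_{ij}$ just defined: its singular values are the Schmidt coefficients, and the state is a product state exactly when only one of them is nonzero. A brief sketch under the basis convention used above (the function name is an assumption of the example):

```python
import numpy as np

def schmidt_coefficients(c: np.ndarray) -> np.ndarray:
    """Schmidt coefficients of a pure bipartite state |psi> = sum_ij c[i,j]|i>|j>.

    The state is separable (a product state) iff exactly one coefficient
    is nonzero; otherwise it is entangled.
    """
    return np.linalg.svd(c / np.linalg.norm(c), compute_uv=False)

# The entangled state (|0>|1> - |1>|0>)/sqrt(2) from the text:
c_bell = np.array([[0, 1], [-1, 0]]) / np.sqrt(2)
print(schmidt_coefficients(c_bell))   # [0.7071, 0.7071] -> entangled

# A product state |0> (x) (|0> + |1>)/sqrt(2):
c_prod = np.array([[1, 1], [0, 0]]) / np.sqrt(2)
print(schmidt_coefficients(c_prod))   # [1.0, 0.0] -> separable
```

Two equal Schmidt coefficients, as for the Bell-type state here, correspond to maximal entanglement.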
Now suppose Alice is an observer for system $A$, and Bob is an observer for system $B$. If in the entangled state given above Alice makes a measurement in the $\{|0\rangle, |1\rangle\}$ eigenbasis of $A$, there are two possible outcomes, occurring with equal probability: Alice measures 0, and the state of the system collapses to $|0\rangle_A |1\rangle_B$; or Alice measures 1, and the state of the system collapses to $|1\rangle_A |0\rangle_B$. If the former occurs, then any subsequent measurement performed by Bob, in the same basis, will always return 1. If the latter occurs (Alice measures 1), then Bob's measurement will return 0 with certainty. Thus, system $B$ has been altered by Alice performing a local measurement on system $A$. This remains true even if the systems $A$ and $B$ are spatially separated. This is the foundation of the EPR paradox. The outcome of Alice's measurement is random. Alice cannot decide which state to collapse the composite system into, and therefore cannot transmit information to Bob by acting on her system. Causality is thus preserved, in this particular scheme. For the general argument, see the no-communication theorem. Ensembles As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has less information about the system, then one calls it an 'ensemble' and describes it by a density matrix, which is a positive-semidefinite matrix (or a trace class operator when the state space is infinite-dimensional) with trace 1. By the spectral theorem, such a matrix takes the general form $\rho = \sum_i w_i |\alpha_i\rangle \langle \alpha_i|$, where the $w_i$ are positive-valued probabilities (they sum up to 1), the vectors $|\alpha_i\rangle$ are unit vectors, and, in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret $\rho$ as representing an ensemble where $w_i$ is the proportion of the ensemble whose states are $|\alpha_i\rangle$. When a mixed state has rank 1, it therefore describes a 'pure ensemble'. When there is less than total information about the state of a quantum system, we need density matrices to represent the state. Experimentally, a mixed ensemble might be realized as follows. Consider a "black box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with spins aligned in the positive $z$ direction, and the other with spins aligned in the negative $y$ direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state. Following the definition above, for a bipartite composite system, mixed states are just density matrices on $H_A \otimes H_B$. That is, the general form is $\rho = \sum_i w_i |\psi_i\rangle \langle\psi_i|$, where the $w_i$ are positively valued probabilities summing to 1, and the vectors $|\psi_i\rangle$ are unit vectors. This is self-adjoint and positive and has trace 1. Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as $\rho = \sum_i w_i \rho_i^A \otimes \rho_i^B$, where the $w_i$ are positively valued probabilities and the $\rho_i^A$'s and $\rho_i^B$'s are themselves mixed states (density operators) on the subsystems $A$ and $B$ respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. By writing the density matrices as sums of pure ensembles and expanding, we may assume without loss of generality that the $\rho_i^A$'s and $\rho_i^B$'s are themselves pure ensembles. A state is then said to be entangled if it is not separable.
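The ensemble picture translates directly into code. A minimal sketch (NumPy) builds $\rho = \sum_i w_i |\psi_i\rangle\langle\psi_i|$ for a two-population example in the spirit of the "black box" above (for simplicity the two populations are represented here as orthogonal spin-z states, an assumption of the example) and verifies the stated properties:

```python
import numpy as np

def ensemble_to_density_matrix(weights, states) -> np.ndarray:
    """rho = sum_i w_i |psi_i><psi_i| for an ensemble of pure states."""
    return sum(w * np.outer(psi, np.conj(psi)) for w, psi in zip(weights, states))

spin_up   = np.array([1.0, 0.0])   # |z+>
spin_down = np.array([0.0, 1.0])   # |z->

# A 50/50 mixture of two populations:
rho = ensemble_to_density_matrix([0.5, 0.5], [spin_up, spin_down])

print(np.trace(rho))             # 1.0: unit trace
print(np.linalg.eigvalsh(rho))   # [0.5, 0.5]: positive semidefinite
print(np.trace(rho @ rho))       # purity 0.5 < 1, so this is a mixed ensemble
```

The purity $\operatorname{Tr}\rho^2$ equals 1 exactly for pure ensembles (rank-1 density matrices), giving a quick numerical check of the pure/mixed distinction made above.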
In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard. For the $2 \times 2$ and $2 \times 3$ cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition. Reduced density matrices The idea of a reduced density matrix was introduced by Paul Dirac in 1930. Consider as above systems $A$ and $B$, each with a Hilbert space $H_A$, $H_B$. Let the state of the composite system be $|\Psi\rangle \in H_A \otimes H_B$. As indicated above, in general there is no way to associate a pure state to the component system $A$. However, it still is possible to associate a density matrix. Let $\rho_T = |\Psi\rangle \langle\Psi|$, which is the projection operator onto this state. The state of $A$ is the partial trace of $\rho_T$ over the basis of system $B$: $\rho_A \equiv \sum_{j=1}^{N_B} \left(I_A \otimes \langle j|_B\right) \rho_T \left(I_A \otimes |j\rangle_B\right) = \operatorname{Tr}_B \rho_T$. The sum occurs over $N_B := \dim H_B$, and $I_A$ is the identity operator in $H_A$. $\rho_A$ is sometimes called the reduced density matrix of $\rho$ on subsystem $A$. Colloquially, we "trace out" system $B$ to obtain the reduced density matrix on $A$. For example, the reduced density matrix of $A$ for the entangled state discussed above is $\rho_A = \frac{1}{2}\left(|0\rangle_A \langle 0|_A + |1\rangle_A \langle 1|_A\right)$. This demonstrates that, as expected, the reduced density matrix for an entangled pure ensemble is a mixed ensemble. Also not surprisingly, the density matrix of $A$ for the pure product state $|\psi\rangle_A \otimes |\phi\rangle_B$ discussed above is $\rho_A = |\psi\rangle_A \langle\psi|_A$. In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure. Two applications that use them Reduced density matrices were explicitly calculated in different spin chains with unique ground state. An example is the one-dimensional AKLT spin chain: the ground state can be divided into a block and an environment. The reduced density matrix of the block is proportional to a projector onto a degenerate ground state of another Hamiltonian. The reduced density matrix was also evaluated for XY spin chains, where it has full rank. It was proved that in the thermodynamic limit, the spectrum of the reduced density matrix of a large block of spins is an exact geometric sequence in this case. Entanglement as a resource In quantum information theory, entangled states are considered a 'resource', i.e., something costly to produce and that allows implementing valuable transformations. The setting in which this perspective is most evident is that of "distant labs", i.e., two quantum systems labeled "A" and "B" on each of which arbitrary quantum operations can be performed, but which do not interact with each other quantum mechanically. The only interaction allowed is the exchange of classical information, which combined with the most general local quantum operations gives rise to the class of operations called LOCC (local operations and classical communication). These operations do not allow the production of entangled states between systems A and B. But if A and B are provided with a supply of entangled states, then these, together with LOCC operations, can enable a larger class of transformations. For example, an interaction between a qubit of A and a qubit of B can be realized by first teleporting A's qubit to B, then letting it interact with B's qubit (which is now a LOCC operation, since both qubits are in B's lab), and then teleporting the qubit back to A. Two maximally entangled states of two qubits are used up in this process. Thus entangled states are a resource that enables the realization of quantum interactions (or of quantum channels) in a setting where only LOCC are available, but they are consumed in the process.
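Tracing out a subsystem is likewise a small reshape-and-contract operation. This sketch (NumPy, same index convention as the earlier partial-transpose example) recovers the maximally mixed reduced state of the entangled state discussed above:

```python
import numpy as np

def reduced_density_matrix_a(rho: np.ndarray, d_a: int, d_b: int) -> np.ndarray:
    """Trace out subsystem B: (rho_A)_{ij} = sum_k rho_{(i k),(j k)}."""
    r = rho.reshape(d_a, d_b, d_a, d_b)     # indices (i, k, j, l)
    return np.trace(r, axis1=1, axis2=3)    # contract k = l

# The entangled state (|0>_A |1>_B - |1>_A |0>_B)/sqrt(2):
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)

rho_a = reduced_density_matrix_a(rho_ab, 2, 2)
print(rho_a)   # [[0.5, 0], [0, 0.5]] -- the maximally mixed state I/2
```

Combined with the entropy function sketched earlier, this reproduces the statement that a bipartite pure state is entangled if and only if its reduced states are mixed: here the reduced state has entropy 1 rather than 0.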
There are other applications where entanglement can be seen as a resource, e.g., private communication or distinguishing quantum states. Classification of entanglement Not all quantum states are equally valuable as a resource. To quantify this value, different entanglement measures, such as those discussed above, can be used to assign a numerical value to each quantum state.
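As one concrete entanglement measure that is easy to compute, the negativity (the absolute sum of the negative eigenvalues of the partial transpose) can be evaluated on the Werner family mentioned earlier. A sketch assuming the two-qubit conventions of the previous examples; the well-known threshold $p = 1/3$, beyond which a two-qubit Werner state fails the PPT test, drops out numerically:

```python
import numpy as np

def negativity(rho: np.ndarray) -> float:
    """Sum of |negative eigenvalues| of the partial transpose: an easily
    computable entanglement measure that vanishes on all PPT states."""
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    evals = np.linalg.eigvalsh(r)
    return float(-evals[evals < -1e-12].sum())

psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)   # singlet |Psi->

def werner(p: float) -> np.ndarray:
    """Two-qubit Werner state: p * singlet + (1 - p) * maximally mixed."""
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

for p in (0.2, 1/3, 0.5, 1.0):
    print(p, negativity(werner(p)))   # zero up to p = 1/3, then (3p - 1)/4
```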
France, then engaged in the 1792–1797 War of the First Coalition, which included Great Britain, viewed the 1794 Jay Treaty between the United States and Britain as incompatible with the 1778 treaties of Alliance and Commerce between France and the United States, and retaliated by seizing American ships trading with Britain. Diplomatic negotiations failed to resolve these differences, and in October 1796 French privateers began attacking all merchant ships in American waters. The dissolution of Federal military forces following independence left the US unable to mount an effective response, and by October 1797 over 316 American ships had been captured. In March 1798, Congress re-established the United States Navy, and in July authorized the use of military force against France. In addition to a number of individual ship actions, by 1799 American losses had been significantly reduced through informal cooperation with the Royal Navy, whereby merchant ships from both nations were allowed to join each other's convoys. Diplomatic negotiations between the US and France continued; the establishment of the French Consulate in November 1799 led to the Convention of 1800, which ended the war. Background Under the Treaty of Alliance (1778), the United States had agreed to protect French colonies in the Caribbean in return for French support in the American Revolutionary War. As the treaty had no termination date, France claimed this obligation included defending its colonies against Great Britain and the Dutch Republic during the 1792 to 1797 War of the First Coalition. Despite popular enthusiasm for the French Revolution, especially among anti-British Jeffersonians, there was little support for this in Congress. Neutrality allowed New England shipowners to earn huge profits evading the British blockade, while Southern plantation owners feared the example set by France's abolition of slavery in 1794. In 1793, Congress suspended repayment of French loans incurred during the Revolutionary War, arguing that the execution of Louis XVI and the establishment of the French First Republic rendered existing agreements void. It further argued that American military obligations under the Treaty of Alliance applied only to a "defensive conflict" and thus did not apply, since France had declared war on Britain and the Dutch Republic. To ensure the US did not become involved, Congress passed the Neutrality Act of 1794, while President George Washington issued an Executive Order forbidding American merchant ships from arming themselves. France accepted these acts, but on the basis of 'benevolent neutrality', which it interpreted as allowing French privateers access to US ports, and the right to sell captured British ships in American prize courts, but not vice versa. The US, however, viewed 'neutrality' as the right to provide the same privileges to both sides. These differences were further exacerbated in November 1794 when the US and Britain signed a new trade agreement, which contradicted the 1778 Commercial Treaty granting France most favoured nation status. The Jay Treaty resolved outstanding issues from the American Revolution and expanded trade between the two countries; between 1794 and 1801, American exports to Britain nearly tripled in value, from US$33 million to $94 million. As a result, in late 1796 French privateers began seizing American ships trading with the British.
Any effective response was hampered by the almost complete lack of a United States Navy; driven by Jeffersonian opposition to Federal institutions, its last warship had been sold in 1785, leaving only a small flotilla belonging to the United States Revenue Cutter Service and a few neglected coastal forts. This allowed French privateers to roam virtually unchecked; from October 1796 to June 1797, they captured 316 ships, 6% of the entire American merchant fleet, causing losses of $12 to $15 million. Efforts to resolve the conflict through diplomacy ended in the 1797 dispute known as the XYZ Affair. However, the hostilities created support for establishing a limited naval force, and on June 18, President John Adams appointed Benjamin Stoddert the first Secretary of the Navy. On July 7, 1798, Congress approved the use of force against French warships in American waters, but wanted to ensure conflict did not escalate beyond these strictly limited objectives. As a result, it was called a "limited" or "Quasi-War" and led to political debate over whether it was constitutional. A series of rulings by the Supreme Court of the United States established its legality and confirmed the ability of the US to conduct undeclared war or "police actions". Forces and strategy Since battleships were expensive to build and required highly specialised construction facilities, in 1794 Congress compromised by ordering six large frigates. By 1798, the first three were nearly complete and on July 16, 1798, additional funding was approved for the , , and , plus the frigates and . The provision of naval stores and equipment by the British allowed these to be built relatively quickly, and all saw action during the war. The US Navy was further reinforced by so-called 'subscription ships', privately funded vessels provided by individual cities. These included five frigates, among them the , commanded by Stephen Decatur, and four merchantmen converted into sloops. Primarily intended to attack foreign shipping, these were noted for their speed, and earned huge profits for their owners; the captured over 80 enemy vessels, including the French corvette . With most of the French fleet confined to home ports by the Royal Navy, Secretary Stoddert was able to concentrate his forces against the limited number of frigates and smaller vessels that evaded the blockade and reached the Caribbean. The US also needed convoy protection, and while there was no formal agreement with the British, considerable co-operation took place at a local level. The two navies shared a signal system and allowed their merchantmen to join each other's convoys, most of which were provided by the British, who had four to five times more escorts available. This allowed the US Navy to concentrate on attacking French privateers, most of very shallow draft and armed with between one and twenty guns. Operating from French and Spanish bases in the Caribbean, particularly Guadeloupe, they made opportunistic attacks on passing ships before escaping back into port. To counter these tactics, the US used similarly sized vessels from the United States Revenue Cutter Service, as well as commissioning its own privateers. The first American ship to see action was the , a converted East Indiaman with 26 guns; most were far smaller. The Revenue cutter , commanded by Edward Preble, made two cruises to the West Indies and captured ten prizes. Preble turned command of Pickering over to Benjamin Hillar, who captured the much larger and more heavily armed French privateer l'Egypte Conquise after a nine-hour battle. In September 1800, Hillar, Pickering, and her entire crew were lost at sea in a storm. Preble next commanded the frigate , which he sailed around Cape Horn into the Pacific to protect U.S. merchantmen in the East Indies. He recaptured several U.S. ships that had been seized by French privateers. For various reasons, the role of the Royal Navy was minimised both at the time and later; the first significant study of the war by US naval historian Gardner W. Allen in 1909 focused exclusively on ship-to-ship actions, and this is how the war is often remembered. However, historian Michael Palmer argues that American naval operations in the Caribbean cannot be understood in isolation from the wider Anglo-French naval war. Significant naval actions From the perspective of the US Navy, the Quasi-War consisted of a series of ship-to-ship actions in US coastal waters and the Caribbean; one of the first was the capture of La Croyable on 7 July 1798 by the outside Egg Harbor, New Jersey. On 20 November, a pair of French frigates, Insurgente and Volontaire, captured the schooner , commanded by Lieutenant William Bainbridge; Retaliation would be recaptured on 28 June 1799. On 9 February 1799, the frigate captured the French Navy's frigate L'Insurgente and severely damaged the frigate La Vengeance. By 1 July, under the command of Stephen Decatur, had been refitted and repaired and embarked on its mission to patrol the South Atlantic coast and West Indies in search of French ships which were preying on American merchant vessels.
On 1 January 1800, a convoy of American merchant ships and their escort, the United States naval schooner , engaged a squadron of armed barges manned by French-allied Haitians known as picaroons off the coast of present-day Haiti. On 1 February, the American frigate unsuccessfully tried to capture the French frigate La Vengeance off the coast of Saint Kitts. In early May, Captain Silas Talbot organized a naval expedition to Puerto Plata on the island of Hispaniola in order to harass French shipping, capturing the Spanish coastal fort at Puerto Plata and a French corvette. Following the French invasion of Curaçao in July, the American sloops and began a blockade of the island in September that led to a French withdrawal. On 12 October, the frigate captured the corvette . On 25 October, the defeated the French brig Flambeau near the island of Dominica in the Caribbean Sea. Enterprise also captured eight privateers and freed eleven U.S. merchant ships from captivity, while captured the French privateers Deux Amis and Diane and liberated numerous American merchant ships. Although overall USN losses were light, by the time the war ended in 1800 the French had seized over 2,000 American merchant ships. Conclusion of hostilities By late 1800, the United States Navy and the Royal Navy, combined with a more conciliatory diplomatic stance by the government of First Consul Napoleon Bonaparte, had reduced the activity of the French privateers and warships. The Convention of 1800, signed on 30 September, ended the Quasi-War. It affirmed the rights of Americans as neutrals upon the sea and abrogated the alliance with France of 1778.
The two have a great deal of similarity, and many manufacturers adopt a QMS that is compliant with both guidelines. ISO 13485 is harmonized with the European Union medical devices directive (93/42/EEC) as well as the IVD and AIMD directives. The ISO standard is also incorporated in regulations for other jurisdictions, such as Japan (JPAL) and Canada (CMDCAS). Quality system requirements for medical devices have been internationally recognized as a way to assure product safety and efficacy and customer satisfaction since at least 1983, and were instituted as requirements in a final rule published on October 7, 1996. The U.S. Food and Drug Administration (FDA) had documented design defects in medical devices that contributed to recalls from 1983 to 1989 that would have been prevented if quality systems had been in place. The rule is promulgated at 21 CFR 820. According to current Good Manufacturing Practice (GMP), medical device manufacturers have the responsibility to use good judgment when developing their quality system and to apply those sections of the FDA Quality System (QS) Regulation, Part 820, that are applicable to their specific products and operations. As with GMP, operating within this flexibility, it is the responsibility of each manufacturer to establish requirements for each type or family of devices that will result in devices that are safe and effective, and to establish methods and procedures to design, produce, and distribute devices that meet the quality system requirements. The FDA has identified in the QS regulation seven essential subsystems of a quality system: management controls; design controls; production and process controls; corrective and preventive actions; material controls; records, documents, and change controls; and facilities and equipment controls – all overseen by management and quality audits. Because the QS regulation covers a broad spectrum of devices and production processes, it allows some leeway in the details of quality system elements. It is left to manufacturers to determine the necessity for, or extent of, some quality elements and to develop and implement procedures tailored to their particular processes and devices. For example, if it is impossible to mix up labels at a manufacturer because there is only one label for each product, then there is no necessity for the manufacturer to comply with all of the GMP requirements under device labeling. Drug manufacturers are regulated under a different section of the Code of Federal Regulations. Organizations and awards The International Organization for Standardization's ISO 9001:2015 series describes standards for a QMS addressing the principles and processes surrounding the design, development, and delivery of a general product or service. Organizations can participate in a continuing certification process to ISO 9001:2015 to demonstrate their compliance with the standard, which includes a requirement for continual (i.e. planned) improvement of the QMS, as well as more foundational QMS components such as failure mode and effects analysis (FMEA). ISO 9000:2005 provides information on the fundamentals and vocabulary used in quality management systems. ISO 9004:2009 provides guidance on a quality management approach for the sustained success of an organization. Neither of these latter two standards can be used for certification purposes, as they provide guidance, not requirements.
The Baldrige Performance Excellence Program educates organizations in improving their performance and administers the Malcolm Baldrige National Quality Award. The Baldrige Award recognizes U.S. organizations for performance excellence based on the Baldrige Criteria for Performance Excellence. The Criteria address critical aspects of management that contribute to performance excellence: leadership; strategy; customers; measurement, analysis, and knowledge management; workforce; operations; and results. The European Foundation for Quality Management's EFQM Excellence Model supports an award scheme similar to the Baldrige Award for European companies. In Canada, the National Quality Institute presents the 'Canada Awards for Excellence' on an annual basis to organizations that
of the HMX-1 presidential helicopter squadron, the FBI Academy, the FBI Laboratory, the Marine Corps Combat Development Command, the Officer Candidates School, The Basic School, the United States Drug Enforcement Administration training academy, the Naval Criminal Investigative Service, the United States Army Criminal Investigation Command, and the Air Force Office of Special Investigations headquarters. A replica of the United States Marine Corps War Memorial stands at one of the entrances to the base. Geography According to the United States Census Bureau, the town has a total area of , of which is land and none of the area is covered with water. Climate Quantico has a humid subtropical climate (Köppen climate classification Cfa). Demographics As of the census of 2000, there were 561 people, 295 households, and 107 families living in the town. The population density was . The racial makeup was 61.32% White, 20.32% African American, 10.16% Asian, 0.36% Native American, 2.32% from other races, and 5.53% from two or more races. Hispanic or Latino of any race were 5.53% of the population. The median income for a household in the town was $26,250. About 22.4% of families and 21.4% of the population were below the poverty line, including 39.4% of those under the age of 18. Transportation There are no significant highways passing through Quantico. All road vehicles must pass through Marine Corps Base Quantico in order to reach the town. Therefore, all vehicle drivers must present a valid driver's license to the military security officer stationed at the gate, and may be required to state their destination and reason for visiting. More thorough searches and checks may also be undertaken, at the discretion and authority of base security. Amtrak and Virginia Railway Express trains stop at the Quantico station. Notable people Robert L. Crawford Jr., actor on Laramie; Geof Isherwood, artist; Shelby Lynne, musician, singer, songwriter, producer, owner of Everso Records, and actress; Roy Thomas, former pitcher for the Seattle Mariners. Popular culture The headquarters of the FBI Academy at the Quantico Marine Corps Base are featured in the 1991 film The Silence of the Lambs, the 2013–2015 TV series Hannibal, and the 2015–2018 series Quantico. The headquarters of the FBI Behavioral Analysis Unit are also featured in the 2005–2020 series Criminal Minds and the 2017–2019 series Mindhunter. See also Langley, Virginia Behavioral Analysis
Diversion Supplementary Services (QSIG-CF), International and European Versions: ISO/IEC 13873, ETSI ETS 300 257; ECMA-178 - Private Integrated Services Network (PISN) - Inter-Exchange Signalling Protocol - Call Transfer Supplementary Service (QSIG-CT), International and European Versions: ISO/IEC 13869, ETSI ETS 300 261. Source: ECMA - list of standards (search the list for PISN to find all QSIG-related standards at ECMA). QSIG basically uses ROSE to invoke specific supplementary services at the remote PINX. These ROSE operations are coded in a Q.931 FACILITY information element. Here is a list of QSIG opcodes: List of ISDN standards ETS 300 052 - Multiple Subscriber Number; ETS 300 055 - Call Waiting; ETS 300 092 - Calling Line Identification Presentation (CLIP); ETS 300 093 - Calling Line Identification Restriction (CLIR); ETS 300 097 - Connected Line Identification Presentation (COLP); ETS 300 098 - Connected Line Identification Restriction (COLR); ETS 300 130 - Malicious Call Identification; ETS 300 141 - Call Hold; ETS 300 172 - Circuit-Mode Basic Service; ETS 300 173 - Called/Calling Line ID Presentation; ETS 300 182 - Advice Of Charge; ETS
not owned by any company. This allows interoperability between communications platforms provided by disparate vendors. QSIG has two layers, called BC (basic call) and GF (generic function). QSIG BC describes how to set up calls between PBXs. QSIG GF provides supplementary services for large-scale corporate, educational, and government networks, such as line identification, call intrusion and call forwarding. Thus for a large or very distributed company that requires multiple PBXs, users can receive the same services across the network and be unaware of the switch that their telephone is connected to. This greatly eases the problems of management of large networks. QSIG will likely never rival each vendor's private network protocols, but it does provide an option for a higher level of integration than that of the traditional choices. List of QSIG standards Note: This list is not complete. See the "source" after the list for more information. ECMA-143 - Private Integrated Services Network (PISN) - Circuit Mode Bearer Services - Inter-Exchange Signalling Procedures and Protocol (QSIG-BC), Basic Call, International and European Versions: ISO/IEC 11572, ETSI EN 300 172 ECMA-165 - Private Integrated Services Network (PISN) - Generic Functional Protocol for the Support
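The ROSE-in-FACILITY encoding mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not a conformant QSIG encoder: the opcode, invoke identifier, and protocol-profile octet below are illustrative placeholder values rather than constants taken from a specific QSIG standard.

    # Minimal sketch: wrap a ROSE "invoke" APDU in a Q.931 FACILITY information
    # element, as QSIG does for supplementary services. Values marked ASSUMED
    # are illustrative placeholders, not normative constants.

    def ber_tlv(tag: int, value: bytes) -> bytes:
        """Encode a BER TLV with a single-octet tag and short-form length."""
        assert len(value) < 128, "short-form length only in this sketch"
        return bytes([tag, len(value)]) + value

    def rose_invoke(invoke_id: int, opcode: int, argument: bytes = b"") -> bytes:
        """ROSE invoke component: invokeId INTEGER, operation INTEGER, optional argument."""
        body = ber_tlv(0x02, bytes([invoke_id]))   # invokeId, BER INTEGER
        body += ber_tlv(0x02, bytes([opcode]))     # operation value, BER INTEGER
        body += argument                           # operation-specific argument, if any
        return ber_tlv(0xA1, body)                 # invoke: context-specific, constructed, tag 1

    def facility_ie(apdu: bytes, protocol_profile: int = 0x91) -> bytes:
        """Q.931 FACILITY IE (identifier 0x1C) carrying a supplementary-service APDU.
        The protocol-profile octet (0x91 here) is an ASSUMED illustrative value."""
        payload = bytes([protocol_profile]) + apdu
        return bytes([0x1C, len(payload)]) + payload

    # Hypothetical call: invoke id 1, opcode 0 (placeholder), no argument.
    ie = facility_ie(rose_invoke(invoke_id=1, opcode=0))
    print(ie.hex())

A real implementation would follow the ASN.1 modules in the ECMA standards listed above; this fragment only shows the general shape of a ROSE operation carried inside a FACILITY element.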
the dimension of the space filled, e.g., the three-dimensional tiling displayed in a quasicrystal may have translational symmetry in two directions. Symmetrical diffraction patterns result from the existence of an indefinitely large number of elements with a regular spacing, a property loosely described as long-range order. Experimentally, the aperiodicity is revealed in the unusual symmetry of the diffraction pattern, that is, symmetry of orders other than two, three, four, or six. In 1982 materials scientist Dan Shechtman observed that certain aluminium–manganese alloys produced the unusual diffractograms which today are seen as revelatory of quasicrystal structures. Due to fear of the scientific community's reaction, it took him two years to publish the results, for which he was awarded the Nobel Prize in Chemistry in 2011. On 25 October 2018, Luca Bindi and Paul Steinhardt were awarded the Aspen Institute 2018 Prize for collaboration and scientific research between Italy and the United States, after they discovered icosahedrite, the first quasicrystal known to occur naturally. History On July 16, 1945, in Alamogordo, NM, the Trinity nuclear bomb test produced icosahedral quasicrystals. They went unnoticed at the time of the test but were later identified in samples of red Trinitite, a glass-like substance formed from fused sand and copper transmission lines. Identified in 2021, they are the oldest known anthropogenic quasicrystals. In 1961, Hao Wang asked whether determining if a set of tiles admits a tiling of the plane is an algorithmically unsolvable problem or not. He conjectured that it is solvable, relying on the hypothesis that every set of tiles that can tile the plane can do it periodically (hence, it would suffice to try to tile bigger and bigger patterns until obtaining one that tiles periodically). Nevertheless, two years later, his student Robert Berger constructed a set of some 20,000 square tiles (now called "Wang tiles") that can tile the plane but not in a periodic fashion. As further aperiodic sets of tiles were discovered, sets with fewer and fewer shapes were found. In 1976 Roger Penrose discovered a set of just two tiles, now referred to as Penrose tiles, that produced only non-periodic tilings of the plane. These tilings displayed instances of fivefold symmetry. One year later Alan Mackay showed experimentally that the diffraction pattern from the Penrose tiling had a two-dimensional Fourier transform consisting of sharp 'delta' peaks arranged in a fivefold symmetric pattern. Around the same time, Robert Ammann created a set of aperiodic tiles that produced eightfold symmetry. In 1972 de Wolff and van Aalst reported that the diffraction pattern produced by a crystal of sodium carbonate cannot be labeled with three indices but needed one more, which implied that the underlying structure had four dimensions in reciprocal space. Other puzzling cases have been reported, but until the concept of quasicrystal came to be established, they were explained away or denied. Shechtman first observed ten-fold electron diffraction patterns in 1982, while conducting a routine study of an aluminium–manganese alloy, Al6Mn, at the US National Bureau of Standards (later NIST). Shechtman related his observation to Ilan Blech, who responded that such diffractions had been seen before. Around that time, Shechtman also related his finding to John W. Cahn of NIST, who did not offer any explanation and challenged him to solve the observation. Shechtman quoted Cahn as saying: "Danny, this material is telling us something, and I challenge you to find out what it is". The observation of the ten-fold diffraction pattern lay unexplained for two years until the spring of 1984, when Blech asked Shechtman to show him his results again. A quick study of Shechtman's results showed that the common explanation for a ten-fold symmetrical diffraction pattern, a type of crystal twinning, was ruled out by his experiments. Therefore, Blech looked for a new structure containing cells connected to each other by defined angles and distances but without translational periodicity. He decided to use a computer simulation to calculate the diffraction intensity from a cluster of such a material, which he termed "multiple polyhedral", and found a ten-fold structure similar to what was observed. The multiple polyhedral structure was later termed icosahedral glass by many researchers. Shechtman accepted Blech's discovery of a new type of material and chose to publish his observation in a paper entitled "The Microstructure of Rapidly Solidified Al6Mn", which was written around June 1984 and published in a 1985 edition of Metallurgical Transactions A. Meanwhile, on seeing the draft of the paper, John Cahn suggested that Shechtman's experimental results merited fast publication in a more appropriate scientific journal. Shechtman agreed and, in hindsight, called this fast publication "a winning move". This paper, published in Physical Review Letters, repeated Shechtman's observation and used the same illustrations as the original paper. Originally, the new form of matter was dubbed "Shechtmanite". The term "quasicrystal" was first used in print by Steinhardt and Levine shortly after Shechtman's paper was published. Also in 1985, Ishimasa et al. reported twelvefold symmetry in Ni-Cr particles. Soon, eightfold diffraction patterns were recorded in V-Ni-Si and Cr-Ni-Si alloys. Over the years, hundreds of quasicrystals with various compositions and different symmetries have been discovered. The first quasicrystalline materials were thermodynamically unstable—when heated, they formed regular crystals.
However, in 1987, the first of many stable quasicrystals was discovered, making it possible to produce large samples for study and applications. In 1992, the International Union of Crystallography altered its definition of a crystal, reducing it to the ability to produce a clear-cut diffraction pattern and acknowledging the possibility of the ordering to be either periodic or aperiodic. In 2001, Paul Steinhardt of Princeton University hypothesized that quasicrystals could exist in nature and developed a method of recognition, inviting all the mineralogical collections of the world to identify any badly cataloged crystals. In 2007 Steinhardt received a reply from Luca Bindi, who had found a quasicrystalline specimen from Khatyrka in the University of Florence Mineralogical Collection. The crystal samples were sent to Princeton University for further tests, and in late 2009, Steinhardt confirmed its quasicrystalline character. This quasicrystal, with a composition of Al63Cu24Fe13, was named icosahedrite, and it was approved by the International Mineralogical Association in 2010. Analysis indicates it may be meteoritic in origin, possibly delivered from a carbonaceous chondrite asteroid. In 2011, Bindi, Steinhardt, and a team of specialists found more icosahedrite samples from Khatyrka. A further study of Khatyrka meteorites revealed micron-sized grains of another natural quasicrystal, which has a ten-fold symmetry and a chemical formula of Al71Ni24Fe5. This quasicrystal is stable in a narrow temperature range, from 1120 to 1200 K at ambient pressure, which suggests that natural quasicrystals are formed by rapid quenching of a meteorite heated during an impact-induced shock. Shechtman was awarded the Nobel Prize in Chemistry in 2011 for his work on quasicrystals. "His discovery of quasicrystals revealed a new principle for packing of atoms and molecules," stated the Nobel Committee, which noted that "this led to a paradigm shift within chemistry." In 2014, Israel's postal service issued a stamp dedicated to quasicrystals and the 2011 Nobel Prize. Earlier, in 2009, it was found that thin-film quasicrystals can be formed by self-assembly of uniformly shaped, nano-sized molecular units at an air-liquid interface. It was later demonstrated that those units can be not only inorganic, but also organic. In 2018, chemists from Brown University announced the successful creation of a self-constructing lattice structure based on a strangely shaped quantum dot. While single-component quasicrystal lattices had previously been predicted mathematically and in computer simulations, they had not been demonstrated prior to this. Mathematics There are several ways to mathematically define quasicrystalline patterns. One definition, the "cut and project" construction, is based on the work of Harald Bohr (mathematician brother of Niels Bohr). The concept of an almost periodic function (also called a quasiperiodic function) was studied by Bohr, including work of Bohl and Esclangon. He introduced the notion of a superspace. Bohr showed that quasiperiodic functions arise as restrictions of high-dimensional periodic functions to an irrational slice (an intersection with one or more hyperplanes), and discussed their Fourier point spectrum. These functions are not exactly periodic, but they are arbitrarily close in some sense, as well as being a projection of an exactly periodic function. In order that the quasicrystal itself be aperiodic, this slice must avoid any lattice plane of the higher-dimensional lattice.
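A minimal sketch of the cut-and-project idea, in the simplest nontrivial case: selecting the points of the two-dimensional integer lattice that fall inside a strip of irrational (golden-ratio) slope and projecting them onto the slice direction yields the aperiodic Fibonacci chain of long and short intervals. This toy Python construction is purely illustrative and is not drawn from any specific reference implementation.

    # Cut-and-project sketch: a 1D quasicrystal (Fibonacci chain) obtained by
    # slicing the 2D integer lattice with a strip of golden-ratio slope and
    # projecting the selected lattice points onto the slice direction.
    import math

    phi = (1 + math.sqrt(5)) / 2                                  # golden ratio
    norm = math.hypot(1, phi)
    e_par = (1 / norm, phi / norm)                                # unit vector along the slice
    e_perp = (-e_par[1], e_par[0])                                # unit vector perpendicular to it

    # Acceptance window: the projection of the unit square onto the perpendicular direction.
    window = abs(e_perp[0]) + abs(e_perp[1])

    points = []
    for n in range(-30, 31):
        for m in range(-30, 31):
            # The perpendicular coordinate decides whether (n, m) lies inside the strip.
            perp = n * e_perp[0] + m * e_perp[1]
            if 0 <= perp < window:
                points.append(n * e_par[0] + m * e_par[1])        # parallel coordinate

    points.sort()
    gaps = [round(b - a, 6) for a, b in zip(points, points[1:])]
    # Exactly two distinct gap lengths appear, arranged in an aperiodic long/short sequence.
    print(sorted(set(gaps)))

Because the slope is irrational, the selected points never repeat periodically, yet only two interval lengths occur, the one-dimensional analogue of the higher-dimensional slices described next.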
De Bruijn showed that Penrose tilings can be viewed as two-dimensional slices of five-dimensional hypercubic structures; similarly, icosahedral quasicrystals in three dimensions are projected from a six-dimensional hypercubic lattice, as first described by Peter Kramer and Roberto Neri in 1984. Equivalently, the Fourier transform of such a quasicrystal is nonzero only at a dense set of points spanned by integer multiples of a finite set of basis vectors, which are the projections of the primitive reciprocal lattice vectors of the higher-dimensional lattice. Classical theory of crystals reduces crystals to point lattices where each point is the center of mass of one of the identical units of the crystal. The structure of crystals can be analyzed by defining an associated group. Quasicrystals, on the other hand, are composed of more than one type of unit, so, instead of lattices, quasilattices must be used. Instead of groups, groupoids, the mathematical generalization of groups in category theory, are the appropriate tool for studying quasicrystals. Using mathematics for construction and analysis of quasicrystal structures is a difficult task for most experimentalists. Computer modeling, based on the existing theories of quasicrystals, has, however, greatly facilitated this task. Advanced programs have been developed allowing one to construct, visualize and analyze quasicrystal structures and their diffraction patterns. The aperiodic nature of quasicrystals can also make theoretical studies of physical properties, such as electronic structure, difficult due to the inapplicability of Bloch's theorem. However, spectra of quasicrystals can still be computed with error control. Study of quasicrystals may shed light on the most basic notions related to the quantum critical point observed in heavy fermion metals. Experimental measurements on an Au-Al-Yb quasicrystal have revealed a quantum critical point defining the divergence of the magnetic susceptibility as temperature tends to zero. It is suggested that the electronic system of some quasicrystals is located at a quantum critical point without tuning, while quasicrystals exhibit the typical scaling behaviour of their thermodynamic properties and belong to the well-known family of
people investing in hobbies has increased with time. Bricolage Bricolage and DIY are some of the terms describing the building, modifying, or repairing of things without the direct aid of experts or professionals. Academic research has described DIY as behaviors where "individuals engage raw and semi-raw materials and parts to produce, transform, or reconstruct material possessions, including those drawn from the natural environment (e.g., landscaping)". DIY behavior can be triggered by various motivations previously categorized as marketplace motivations (economic benefits, lack of product availability, lack of product quality, need for customization) and identity enhancement (craftsmanship, empowerment, community seeking, uniqueness). It can involve crafts that require particular skills and knowledge of skilled work. Typical interests enjoyed by the maker culture include engineering-oriented pursuits such as home improvement, electronics, robotics, 3-D printing, and the use of Computer Numeric Control tools, as well as more traditional activities such as metalworking, woodworking, and, mainly, their predecessor, traditional arts and crafts. The subculture stresses a cut-and-paste approach to standardized hobbyist technologies, and encourages cookbook re-use of designs published on websites and maker-oriented publications. There is a strong focus on using and learning practical skills and applying them to reference designs. There is also growing work on equity and the maker culture. Games Any structured form of play can become a game. Games are sometimes played purely for recreation, and sometimes for achievement or monetary rewards as well. They are played for recreation alone, in teams, or online by amateurs, while professionals can play as part of their work for the entertainment of an audience. Games may be board games, puzzles, or computer or video games. Outdoor recreation Recreation engaged in out of doors, most commonly in natural settings. The activities themselves, such as fishing, hunting, backpacking, and horseback riding, are characteristically dependent on the environment in which they are practiced. While many of these activities can be classified as sports, they do not all demand that a participant be an athlete. Competition generally is less stressed than in individual or team sports organized into opposing squads in pursuit of a trophy or championship. When the activity involves exceptional excitement, physical challenge, or risk, it is sometimes referred to as "adventure recreation" or "adventure training", rather than an extreme sport. Other traditional examples of outdoor recreational activities include hiking, camping, mountaineering, cycling, canoeing, caving, kayaking, rafting, rock climbing, running, sailing, skiing, sky diving and surfing. As new pursuits, often hybrids of prior ones, emerge, they gain their own identities, such as coasteering, canyoning, fastpacking, and plogging. Performing arts Dance Participatory dance, whether it be a folk dance, a social dance, a group dance such as a line, circle, chain or square dance, or a partner dance such as is common in Western ballroom dancing, is undertaken primarily for a common purpose, such as entertainment, social interaction or exercise, of participants rather than onlookers. The many forms of dance provide recreation for all age groups and cultures. Music Creation Music is composed and performed for many purposes, ranging from recreation to religious or ceremonial purposes to entertainment.
When music was only available through sheet music scores, such as during the Classical and Romantic eras in Europe, music lovers would buy the sheet music of their favourite pieces and songs so that they could perform them at home on their instruments. Visual arts Woodworking, photography, moviemaking, jewelry making, software projects such as Photoshopping and home music or video production, making bracelets, artistic projects such as drawing and painting, cosplay (designing, creating, and wearing a costume based on an existing creative property), and creating models out of card stock or paper, called papercraft, fall under the category of visual arts, and many of these are practised for recreation. Drawing Drawing goes back at least 16,000 years to Paleolithic cave representations of animals such as those at Lascaux in France and Altamira in Spain. In ancient Egypt, ink drawings on papyrus, often depicting people, were used as models for painting or sculpture. Drawings on Greek vases, initially geometric, later developed to the human form with black-figure pottery during the 7th century BC. With paper becoming common in Europe by the 15th century, drawing was adopted by masters such as Sandro Botticelli, Raphael, Michelangelo, and Leonardo da Vinci, who sometimes treated drawing as an art in its own right rather than a preparatory stage for painting or sculpture. Literature Writing may involve letters, journals and weblogs. In the US, about half of all adults read one or more books for pleasure each year. About 5% read more than 50 books per year. Painting Like drawing, painting has its documented origins in caves and on rock faces. The finest examples, believed by some to be 32,000 years old, are in the Chauvet and Lascaux caves in southern France. In shades of red, brown, yellow and black, the paintings on the walls and ceilings are of bison, cattle, horses and deer. Paintings of human figures can be found in the tombs of ancient Egypt. In the great temple of Ramses II, Nefertari, his queen, is depicted being led by Isis. Greek and Roman art, like the Hellenistic Fayum mummy portraits and the Battle of Issus mosaic at Pompeii, contributed to Byzantine art from the 4th century AD, which initiated a tradition in icon painting. Models of aeroplanes, boats, cars, tanks, artillery, and even figures of soldiers and superheroes are popular subjects to build, paint and display. Photography An amateur photographer practices photography as a hobby or passion and not for monetary profit. The quality of some amateur work may be highly specialized or eclectic in choice of subjects. Amateur photography is often pre-eminent in photographic subjects which have little prospect of commercial use or reward. Amateur photography grew during the late 19th century due to the popularization of the hand-held camera. Nowadays it has spread widely through social media and is carried out across different platforms and equipment, including cell phones. Clear pictures can now be taken with a cell phone, which is a key tool for making photography more accessible to everyone. Organized recreation Many recreational activities are organized, typically by public institutions, voluntary group-work agencies, private groups supported by membership fees, and commercial enterprises. Examples of each of these are the National Park Service, the YMCA, the Kiwanis, and Walt Disney World. Public spaces such as parks and beaches are essential venues for many recreational
leisure is the purpose of work, and a reward in itself, and "leisure life" reflects the values and character of a nation. Leisure is considered a human right under the Universal Declaration of Human Rights. Play, recreation and work Recreation is difficult to separate from the general concept of play, which is usually the term for children's recreational activity. Children may playfully imitate activities that reflect the realities of adult life. It has been proposed that play or recreational activities are outlets for, or expressions of, excess energy, channeling it into socially acceptable activities that fulfill individual as well as societal needs, without need for compulsion, and providing satisfaction and pleasure for the participant. A traditional view holds that work is supported by recreation, recreation being useful to "recharge the battery" so that work performance is improved. Work, an activity generally performed out of economic necessity, useful for society, and organized within the economic framework, can however also be pleasurable and may be self-imposed, thus blurring the distinction from recreation. Many activities in entertainment are work for one person and recreation for another. Over time, a recreational activity may become work, and vice versa. Thus, for a musician, playing an instrument may be at one time a profession, and at another a recreation. Similarly, it may be difficult to separate education from recreation, as in the case of recreational mathematics. Health and recreation Recreation has many health benefits, and, accordingly, therapeutic recreation has been developed to take advantage of this effect. The National Council for Therapeutic Recreation Certification (NCTRC) is the nationally recognized credentialing organization for the profession of therapeutic recreation. Professionals in the field who are certified by the NCTRC are called "Certified Therapeutic Recreation Specialists". The job title "Recreation Therapist" is identified in the U.S. Department of Labor's Occupational Outlook. Such therapy is applied in rehabilitation, in psychiatric facilities for youth and adults, and in the care of the elderly, the disabled, and people with chronic diseases. Recreational physical activity is important for reducing obesity and the risk of osteoporosis and of cancer, most significantly in men that of colon and prostate cancer, and in women that of breast cancer; however, not all malignancies are reduced, as outdoor recreation has been linked to a higher risk of melanoma. Extreme adventure recreation naturally carries its own hazards. Forms and activities Recreation is an essential part of human life and takes many different forms, which are shaped naturally by individual interests but also by the surrounding social construction. Recreational activities can be communal or solitary, active or passive, outdoors or indoors, healthy or harmful, and useful for society or detrimental. Some recreational activities – such as gambling, recreational drug use, or delinquent activities – may violate societal norms and laws. A list of typical activities could be almost endless. Hobby A significant section of recreational activities are designated as hobbies, which are activities done for pleasure on a regular basis. A hobby is considered to be a regular activity that is done for enjoyment, typically during one's leisure time, not professionally and not for pay.
Hobbies include collecting themed items and objects, engaging in creative and artistic pursuits, playing sports, or pursuing other amusements. Participation in hobbies encourages acquiring substantial skills and knowledge in that area. A list of hobbies changes with renewed interests and developing fashions, making it diverse and lengthy. Hobbies tend to follow trends in society: for example, stamp collecting was popular during the nineteenth and twentieth centuries, when postal systems were the main means of communication, while video games are more popular nowadays following technological advances. The advancing production and technology of the nineteenth century provided workers with more leisure time to engage in hobbies.
the debt incurred to purchase them, then the equity must be negative, meaning the consumer or corporation is insolvent. Economist Paul Krugman wrote in 2014 that "the best working hypothesis seems to be that the financial crisis was only one manifestation of a broader problem of excessive debt—that it was a so-called 'balance sheet recession'". In Krugman's view, such crises require debt reduction strategies combined with higher government spending to offset declines from the private sector as it pays down its debt. For example, economist Richard Koo wrote that Japan's "Great Recession", which began in 1990, was a "balance sheet recession". It was triggered by a collapse in land and stock prices, which caused Japanese firms to have negative equity, meaning their assets were worth less than their liabilities. Despite zero interest rates and expansion of the money supply to encourage borrowing, Japanese corporations in aggregate opted to pay down their debts from their own business earnings rather than borrow to invest as firms typically do. Corporate investment, a key demand component of GDP, fell enormously (22% of GDP) between 1990 and its peak decline in 2003. Japanese firms overall became net savers after 1998, as opposed to borrowers. Koo argues that it was massive fiscal stimulus (borrowing and spending by the government) that offset this decline and enabled Japan to maintain its level of GDP. In his view, this avoided a U.S.-type Great Depression, in which U.S. GDP fell by 46%. He argued that monetary policy was ineffective because there was limited demand for funds while firms paid down their liabilities. In a balance sheet recession, GDP declines by the amount of debt repayment and un-borrowed individual savings, leaving government stimulus spending as the primary remedy. Krugman discussed the balance sheet recession concept during 2010, agreeing with Koo's assessment of the situation and his view that sustained deficit spending when faced with a balance sheet recession would be appropriate. However, Krugman argued that monetary policy could also affect savings behavior, as inflation or credible promises of future inflation (generating negative real interest rates) would encourage less saving. In other words, people would tend to spend more rather than save if they believe inflation is on the horizon. In more technical terms, Krugman argues that the private-sector savings curve is elastic even during a balance sheet recession (responsive to changes in real interest rates), disagreeing with Koo's view that it is inelastic (non-responsive to changes in real interest rates). A July 2012 survey of balance sheet recession research reported that consumer demand and employment are affected by household leverage levels. Both durable and non-durable goods consumption declined as households moved from low to high leverage with the decline in property values experienced during the subprime mortgage crisis. Further, reduced consumption due to higher household leverage can account for a significant decline in employment levels. Policies that help reduce mortgage debt or household leverage could therefore have stimulative effects. Liquidity trap A liquidity trap is a Keynesian theory that a situation can develop in which interest rates reach near zero (zero interest-rate policy) yet do not effectively stimulate the economy. In theory, near-zero interest rates should encourage firms and consumers to borrow and spend.
However, if too many individuals or corporations focus on saving or paying down debt rather than spending, lower interest rates have less effect on investment and consumption behavior; the lower interest rates are like "pushing on a string". Economist Paul Krugman described the U.S. 2009 recession and Japan's lost decade as liquidity traps. One remedy to a liquidity trap is expanding the money supply via quantitative easing or other techniques in which money is effectively printed to purchase assets, thereby creating inflationary expectations that cause savers to begin spending again. Government stimulus spending and mercantilist policies to stimulate exports and reduce imports are other techniques to stimulate demand. Krugman estimated in March 2010 that developed countries representing 70% of the world's GDP were caught in a liquidity trap. Paradoxes of thrift and deleveraging Behavior that may be optimal for an individual (e.g., saving more during adverse economic conditions) can be detrimental if too many individuals pursue the same behavior, as ultimately one person's consumption is another person's income. Too many consumers attempting to save (or pay down debt) simultaneously is called the paradox of thrift and can cause or deepen a recession. Economist Hyman Minsky also described a "paradox of deleveraging", as financial institutions that have too much leverage (debt relative to equity) cannot all de-leverage simultaneously without significant declines in the value of their assets. During April 2009, U.S. Federal Reserve Vice Chair Janet Yellen discussed these paradoxes: "Once this massive credit crunch hit, it didn't take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings. Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals and firms—and indeed essential to return the economy to a normal state—nevertheless magnify the distress of the economy as a whole." Predictors The U.S. Conference Board's Present Situation Index year-over-year change turns negative by more than 15 points before a recession. The U.S. Conference Board Leading Economic Indicator year-over-year change turns negative before a recession. When the CFNAI Diffusion Index drops below the value of -0.35, there is an increased probability of the beginning of a recession; usually, the signal happens within the first three months of the recession. The CFNAI Diffusion Index signal tends to happen about one month before a related signal in which the CFNAI-MA3 (3-month moving average) drops below the -0.7 level. The CFNAI-MA3 correctly identified the 7 recessions between March 1967 and August 2019, while triggering only 2 false alarms (a sketch of these threshold rules appears below). The Federal Reserve Bank of Chicago posts updates of the Brave-Butters-Kelley Indexes (BBKI).
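To make the CFNAI threshold rules just described concrete, here is a small illustrative Python sketch that flags the two signals on a monthly series. The series values are hypothetical; only the thresholds (-0.35 for the diffusion index, -0.7 for the 3-month moving average) come from the text above.

    # Illustrative sketch of the CFNAI threshold rules described above.
    # cfnai: hypothetical monthly CFNAI values; diffusion: the (hypothetical)
    # CFNAI Diffusion Index for the same months.

    def ma3(series):
        """3-month trailing moving average; None until 3 observations exist."""
        return [None if i < 2 else sum(series[i-2:i+1]) / 3 for i in range(len(series))]

    def recession_signals(cfnai, diffusion):
        """Return (month index, reason) pairs where either threshold rule fires."""
        signals = []
        for i, avg in enumerate(ma3(cfnai)):
            if diffusion[i] < -0.35:
                signals.append((i, "diffusion index below -0.35"))
            if avg is not None and avg < -0.7:
                signals.append((i, "CFNAI-MA3 below -0.7"))
        return signals

    # Hypothetical monthly data for illustration only.
    cfnai = [0.1, -0.2, -0.6, -0.9, -1.4, -1.1]
    diffusion = [0.05, -0.10, -0.40, -0.55, -0.60, -0.50]
    for month, reason in recession_signals(cfnai, diffusion):
        print(f"month {month}: {reason}")

As the text notes, the diffusion-index signal tends to fire about a month before the MA3 signal, which the staggered hypothetical data above reproduces.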
The Federal Reserve Bank of St. Louis posts the Weekly Economic Index (Lewis-Mertens-Stock) (WEI). The Federal Reserve Bank of
St. Louis posts the Smoothed U.S. Recession Probabilities (RECPROUSM156N). Inverted yield curve: the model developed by economist Jonathan H. Wright uses yields on 10-year and three-month Treasury securities as well as the Fed's overnight funds rate. Another model developed by Federal Reserve Bank of New York economists uses only the 10-year/three-month spread. The three-month change in the unemployment rate and initial jobless claims. The U.S. unemployment index, defined as the difference between the 3-month average of the unemployment rate and the 12-month minimum of the unemployment rate. Unemployment momentum and acceleration with a hidden Markov model. The Index of Leading (Economic) Indicators (which includes some of the above indicators). Lowering of asset prices, such as homes and financial assets, or high personal and corporate debt levels. Commodity prices may increase before recessions, which usually hinders consumer spending by making necessities like transportation and housing costlier. This will tend to constrict spending for non-essential goods and services. Once the recession occurs, commodity prices will usually reset to a lower level. Increased income inequality. Decreasing recreational vehicle shipments. Declining trucking volumes. Analysis by Prakash Loungani of the International Monetary Fund found that only two of the sixty recessions around the world during the 1990s had been predicted by a consensus of economists one year earlier, while there were zero consensus predictions one year earlier for the 49 recessions during 2009. S&P 500 and BBB bond spread A study used the S&P 500 and the BBB bond spread to show the probability of a recession in the next year. Peaking of the high-yield bond spread in relation to S&P 500 returns A study reported that the average returns of the S&P 500 were -19% in the 3rd month before the peaking of the high-yield bond spread (BBB), but were +41% in the 24th month after the peaking. Government responses Most mainstream economists believe that recessions are caused by inadequate aggregate demand in the economy, and favor the use of expansionary macroeconomic policy during recessions. Strategies favored for moving an economy out of a recession vary depending on which economic school the policymakers follow. Monetarists would favor the use of expansionary monetary policy, while Keynesian economists may advocate increased government spending to spark economic growth. Supply-side economists may suggest tax cuts to promote business capital investment. When interest rates reach the boundary of zero percent (zero interest-rate policy), conventional monetary policy can no longer be used and the government must use other measures to stimulate recovery. Keynesians argue that fiscal policy—tax cuts or increased government spending—works when monetary policy fails. Spending is more effective because of its larger multiplier, but tax cuts take effect faster. For example, Paul Krugman wrote in December 2010 that significant, sustained government spending was necessary because indebted households were paying down debts and were unable to carry the U.S. economy as they had previously: "The root of our current troubles lies in the debt American families ran up during the Bush-era housing bubble...highly indebted Americans not only can't spend the way they used to, they're having to pay down the debts they ran up in the bubble years. This would be fine if someone else were taking up the slack.
But what's actually happening is that some people are spending much less while nobody is spending more — and this translates into a depressed economy and high unemployment. What the government should be doing in this situation is spending more while the private sector is spending less, supporting employment while those debts are paid down. And this government spending needs to be sustained..." Keynes on Government Response John Maynard Keynes believed that government institutions could stimulate aggregate demand in a crisis: "Keynes showed that if somehow the level of aggregate demand could be triggered, possibly by the government printing currency notes to employ people to dig holes and fill them up, the wages that would be paid out would resuscitate the economy by generating successive rounds of demand through the multiplier process." Stock market Some recessions have been anticipated by stock market declines. In Stocks for the Long Run, Siegel mentions that since 1948, ten recessions were preceded by a stock market decline, with a lead time of 0 to 13 months (average 5.7 months), while ten stock market declines of greater than 10% in the Dow Jones Industrial Average were not followed by a recession. The real-estate market also usually weakens before a recession. However, real-estate declines can last much longer than recessions. Since the business cycle is very hard to predict, Siegel argues that it is not possible to take advantage of economic cycles for timing investments. Even the National Bureau of Economic Research (NBER) takes a few months to determine if a peak or trough has occurred in the US. During an economic decline, high-yield stocks such as fast-moving consumer goods, pharmaceuticals, and tobacco tend to hold up better. However, when the economy starts to recover and the bottom of the market has passed, growth stocks tend to recover faster. There is significant disagreement about how health care and utilities tend to recover. Diversifying one's portfolio into international stocks may provide some safety; however, economies that are closely correlated with that of the U.S. may also be affected by a recession in the U.S. There is a view termed the halfway rule according to which investors start discounting an economic recovery about halfway through a recession. In the 16 U.S. recessions since 1919, the average length has been 13 months, although the recent recessions have been shorter. Thus, if the 2008 recession had followed the average, the downturn in the stock market would have bottomed around November 2008. The actual US stock market bottom of the 2008 recession was in March 2009. Politics Generally, an administration gets credit or blame for the state of the economy during its time, which has caused disagreements over when a downturn actually started. In an economic cycle, a downturn can be considered a consequence of an expansion reaching an unsustainable state, and is corrected by a brief decline. Thus it is not easy to isolate the causes of specific phases of the cycle. The 1981 recession is thought to have been caused by the tight-money policy adopted by Paul Volcker, chairman of the Federal Reserve Board, before Ronald Reagan took office. Reagan supported that policy. Economist Walter Heller, chairman of the Council of Economic Advisers in the 1960s, said that "I call it a Reagan-Volcker-Carter recession." The resulting taming of inflation did, however, set the stage for a robust growth period during Reagan's presidency.
Economists usually teach that to some degree recession is unavoidable, and its causes are not well understood. Consequences Unemployment Unemployment is particularly high during a recession. Many economists working within the neoclassical paradigm argue that there is a natural rate of unemployment which, when subtracted from the actual rate of unemployment, can be used to calculate the negative GDP gap during a recession. In other words, unemployment never reaches 0 percent, and thus is not a negative indicator of the health of an economy unless above the "natural rate", in which case it corresponds directly to a loss in the gross domestic product, or GDP. The full impact of a recession on employment may not be felt for several quarters. Research in Britain shows that low-skilled, low-educated workers and the young are most vulnerable
uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message. Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent. This works because of the rules of exponentiation: for a hash value h, (h^d)^e ≡ h^(de) ≡ h^(ed) ≡ (h^e)^d (mod n). Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to: Decrypt a message only intended for the recipient, which may be encrypted by anyone having the public key (asymmetric encrypted transport). Encrypt a message which may be decrypted by anyone, but which can only be encrypted by one person; this provides a digital signature. Proofs of correctness Proof using Fermat's little theorem The proof of the correctness of RSA is based on Fermat's little theorem, stating that a^(p − 1) ≡ 1 (mod p) for any integer a and prime p not dividing a. We want to show that m^(ed) ≡ m (mod pq) for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod (p − 1)(q − 1)). Since ed − 1 is, by construction, divisible by both p − 1 and q − 1, we can write ed − 1 = h(p − 1) = k(q − 1) for some nonnegative integers h and k. To check whether two numbers, such as m^(ed) and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately. To show m^(ed) ≡ m (mod p), we consider two cases: If m ≡ 0 (mod p), m is a multiple of p. Thus m^(ed) is a multiple of p. So m^(ed) ≡ 0 ≡ m (mod p). If m ≢ 0 (mod p), then m^(ed) = m^(ed − 1) m = (m^(p − 1))^h m ≡ 1^h m ≡ m (mod p), where we used Fermat's little theorem to replace m^(p − 1) with 1. The verification that m^(ed) ≡ m (mod q) proceeds in a completely analogous way: If m ≡ 0 (mod q), m^(ed) is a multiple of q. So m^(ed) ≡ 0 ≡ m (mod q). If m ≢ 0 (mod q), then m^(ed) = (m^(q − 1))^k m ≡ 1^k m ≡ m (mod q). This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod (p − 1)(q − 1)), m^(ed) ≡ m (mod pq). Proof using Euler's theorem Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem. We want to show that m^(ed) ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have m^(ed) = m^(1 + hφ(n)) = m (m^(φ(n)))^h ≡ m (mod n), where the second-last congruence follows from Euler's theorem. More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^(λ(n)) ≡ 1 (mod n) for all m relatively prime to n. When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) of numbers have this property), but even in this case, the desired congruence is still true: either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof. Padding Attacks against plain RSA There are a number of attacks against plain RSA as described below.
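Before turning to those attacks, the signing procedure and the m^(ed) ≡ m identity can be checked numerically. Below is a minimal, deliberately insecure Python sketch using tiny illustrative primes and Python's built-in pow and hashlib; real implementations use large random primes and a padding scheme such as PSS, and the key size and hash handling here are for illustration only.

    # Toy RSA sign/verify sketch with tiny primes (insecure; illustration only).
    import hashlib

    p, q = 61, 53                      # small illustrative primes
    n = p * q                          # modulus (3233)
    phi = (p - 1) * (q - 1)            # Euler's totient of n
    e = 17                             # public exponent, coprime to phi
    d = pow(e, -1, phi)                # private exponent: d = e^(-1) mod phi

    def sign(message: bytes) -> int:
        """Hash the message, reduce mod n (only because n is tiny), then apply d."""
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(h, d, n)

    def verify(message: bytes, signature: int) -> bool:
        """Raise the signature to e and compare against the message hash."""
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
        return pow(signature, e, n) == h

    sig = sign(b"hello")
    assert verify(b"hello", sig)
    assert not verify(b"hellp", sig)

    # Sanity check of the correctness proofs: m^(ed) == m (mod n) for every m,
    # including values of m not coprime to n.
    assert all(pow(m, e * d, n) == m for m in range(n))

The final assertion exercises exactly the statement proved above, over every residue class mod n.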
When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, ciphertexts can be decrypted easily by taking the eth root of the ciphertext over the integers. If the same clear-text message is sent to e or more recipients in an encrypted way, and the receivers share the same exponent e, but different p, q, and therefore n, then it is easy to decrypt the original clear-text message via the Chinese remainder theorem. Johan Håstad noticed that this attack is possible even if the clear texts are not equal, but the attacker knows a linear relation between them. This attack was later improved by Don Coppersmith (see Coppersmith's attack). Because RSA encryption is a deterministic encryption algorithm (i.e., has no random component), an attacker can successfully launch a chosen plaintext attack against the cryptosystem by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other, even if the attacker knows (or has chosen) the corresponding plaintexts. RSA without padding is not semantically secure. RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts. That is, m1^e · m2^e ≡ (m1 · m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key d to decrypt an unsuspicious-looking ciphertext c′ ≡ c · r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property, c′ is the encryption of m · r (mod n). Hence, if the attacker is successful with the attack, they will learn m · r (mod n), from which they can derive the message m by multiplying mr with the modular inverse of r modulo n. Given the private exponent d, one can efficiently factor the modulus n = pq. And given the factorization of the modulus n = pq, one can obtain any private key (d′, n) generated against a public key (e′, n). Padding schemes To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts. Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al. showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible.
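The multiplicative property behind the chosen-ciphertext (blinding) attack described above is easy to demonstrate with the same toy key used earlier (p = 61, q = 53, e = 17); all numbers here are illustrative only.

    # Blinding attack sketch: recover m from a decryption of c' = c * r^e mod n.
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))

    m = 1234                                     # secret plaintext
    c = pow(m, e, n)                             # ciphertext the attacker wants to decrypt

    r = 7                                        # attacker's blinding value, invertible mod n
    c_blinded = (c * pow(r, e, n)) % n           # "unsuspicious-looking" ciphertext
    mr = pow(c_blinded, d, n)                    # victim decrypts it for the attacker
    recovered = (mr * pow(r, -1, n)) % n         # unblind with r^(-1) mod n
    assert recovered == m

Randomized padding defeats this trick because the blinded ciphertext almost never decrypts to a correctly padded message, so the victim refuses to return it.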
The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS). Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two US patents on PSS were granted; however, these patents expired on 24 July 2009 and 25 April 2010 respectively, so use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure.

Security and practical considerations

Using the Chinese remainder algorithm

For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use the following optimization, based on the Chinese remainder theorem, for decryption and signing. The following values are precomputed and stored as part of the private key: p and q, the primes from the key generation; dP = d mod (p − 1); dQ = d mod (q − 1); and qInv = q^(−1) mod p. These values allow the recipient to compute the exponentiation m = c^d mod pq more efficiently as follows: m1 = c^(dP) mod p; m2 = c^(dQ) mod q; h = qInv · (m1 − m2) mod p; m = m2 + h · q. This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
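A minimal sketch of this optimization, again with the illustrative toy key rather than realistic parameters (production code would also add blinding and constant-time arithmetic against side-channel attacks):

```python
from math import gcd

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))

# Values precomputed and stored with the private key:
dP = d % (p - 1)          # d mod (p - 1)
dQ = d % (q - 1)          # d mod (q - 1)
qInv = pow(q, -1, p)      # q^(-1) mod p

def decrypt_crt(c: int) -> int:
    """Compute c^d mod n via two half-size exponentiations (CRT)."""
    m1 = pow(c, dP, p)            # c^dP mod p
    m2 = pow(c, dQ, q)            # c^dQ mod q
    h = (qInv * (m1 - m2)) % p    # recombination step
    return m2 + h * q             # m = m2 + h*q, with 0 <= m < n

c = pow(65, e, n)
assert decrypt_crt(c) == pow(c, d, n) == 65
```

The speedup comes from the exponentiations working modulo p and q (half the bit length of n) with exponents reduced mod p − 1 and q − 1; with schoolbook arithmetic this makes CRT decryption roughly four times faster than a single full-size exponentiation.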
Integer factorization and RSA problem

The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme. The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes λ(n) = lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem. The multiple polynomial quadratic sieve (MPQS) can be used to factor the public modulus n. The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of approximately seven months. By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 with a 1,900 MHz CPU). Just less than 5 gigabytes of disk storage was required, and about 2.5 gigabytes of RAM for the sieving process. Rivest, Shamir, and Adleman noted that Miller has shown that, assuming the truth of the extended Riemann hypothesis, finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference). However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring. As of 2020, the largest publicly known factored RSA number had 829 bits (250 decimal digits, RSA-250). Its factorization, by a state-of-the-art distributed implementation, took approximately 2700 CPU years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010. As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits. It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits were shown to be practically breakable in 1999, when RSA-155 was factored by using several hundred computers; such keys are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys. In 1994, Peter Shor showed that a quantum computer, if one could ever be practically created for the purpose, would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.

Faulty key generation

Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes. The numbers p and q should not be "too close", lest the Fermat factorization for n be successful. If p − q is less than 2n^(1/4) (which, even for "small" 1024-bit values of n = p·q, is an enormous number, on the order of 10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded. It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e. There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA specifically if the public exponent e is small and if the encrypted message is short and not padded. The value e = 65537 is commonly used; it can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction. In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPMs) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.
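The danger of choosing p and q too close together, mentioned above, can be illustrated with Fermat's factorization method, which searches for a representation n = a² − b² = (a − b)(a + b) starting from a = ⌈√n⌉ and succeeds after very few steps when p − q is small. A hypothetical sketch with deliberately tiny primes:

```python
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    """Factor n = p*q by searching for n = a^2 - b^2 = (a-b)(a+b).
    Fast exactly when the two factors are close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Two primes that are far too close together:
p, q = 10007, 10009
print(fermat_factor(p * q))   # (10007, 10009), found on the very first step
```

For properly generated keys, p − q is astronomically large and the search loop above would never terminate in practice.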
Importance of strong random number generation

A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm. They exploited a weakness unique to cryptosystems based on integer factorization. If n = p·q is one public key and n′ = p′·q′ is another, then if by chance p = p′ (but q is not equal to q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently. Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n′ they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size. Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where a pseudorandom number generator is poorly seeded initially and then reseeded between the generation of the first and second primes.
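The shared-prime weakness lends itself to a compact illustration. The sketch below uses hypothetical tiny moduli to show both the pairwise gcd attack and the spirit of the product trick attributed to Bernstein; the real method uses product and remainder trees for efficiency, whereas this loop only demonstrates the principle:

```python
from math import gcd, prod

# Three toy moduli, two of which accidentally share the prime 101:
n1 = 101 * 103
n2 = 101 * 107   # shares p = 101 with n1 -- both keys are now broken
n3 = 109 * 113

# Pairwise attack: one Euclidean gcd per pair of keys.
shared = gcd(n1, n2)
assert shared == 101          # n1 // 101 and n2 // 101 give the cofactors

# Batch idea: test each key against the product of all the others,
# so one big gcd replaces many pairwise gcds.
keys = [n1, n2, n3]
for n in keys:
    others = prod(k for k in keys if k != n)
    if gcd(n, others) > 1:
        print(n, "shares a factor:", gcd(n, others))
```

With millions of real keys, the pairwise approach needs on the order of a trillion gcds, which is why the batched product-and-remainder-tree variant was decisive in the Heninger group's experiment.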
Several of his works have been adapted for film and television.

Life

Birth, childhood, and early education

Heinlein, born on July 7, 1907, to Rex Ivar Heinlein (an accountant) and Bam Lyle Heinlein, in Butler, Missouri, was the third of seven children. He was a sixth-generation German-American; a family tradition had it that Heinleins fought in every American war, starting with the War of Independence. He spent his childhood in Kansas City, Missouri. The outlook and values of this time and place (in his own words, "The Bible Belt") had a definite influence on his fiction, especially in his later works, as he drew heavily upon his childhood in establishing the setting and cultural atmosphere in works like Time Enough for Love and To Sail Beyond the Sunset. The 1910 appearance of Halley's Comet inspired the young child's life-long interest in astronomy. The family could not afford to pay to send Heinlein to college, so he sought an appointment to a military academy. When Heinlein graduated from Central High School in Kansas City in 1924, he was initially prevented from attending the United States Naval Academy at Annapolis because his older brother Rex was a student there, and regulations discouraged multiple family members from attending the academy simultaneously. He instead matriculated at Kansas City Community College and began vigorously petitioning Missouri Senator James A. Reed for an appointment to the Naval Academy. In part due to the influence of the Pendergast machine, the Naval Academy admitted him in June 1925; Heinlein later said that Reed told him he had received 100 letters of recommendation, 50 for all other candidates for nomination combined and 50 for Heinlein alone.

Navy

Heinlein's experience in the U.S. Navy exerted a strong influence on his character and writing. In 1929, he graduated from the Naval Academy with more or less the equivalent of a bachelor of arts in engineering (the Academy did not at the time confer degrees). He ranked fifth in his class academically, but with a class standing of 20th of 243 due to disciplinary demerits. The U.S. Navy commissioned him as an ensign shortly after his graduation. He advanced to lieutenant junior grade in 1931 while serving aboard the new aircraft carrier USS Lexington, where he worked in radio communications, a technology then still in its early stages. The captain of this carrier, Ernest J. King, later served as the Chief of Naval Operations and Commander-in-Chief, U.S. Fleet during World War II. Military historians frequently interviewed Heinlein during his later years and asked him about Captain King and his service as the commander of the U.S. Navy's first modern aircraft carrier. Heinlein also served as gunnery officer aboard the destroyer USS Roper in 1933 and 1934, reaching the rank of lieutenant. His brother, Lawrence Heinlein, served in the U.S. Army, the U.S. Air Force, and the Missouri National Guard, reaching the rank of major general in the National Guard.

Marriages

In 1929, Heinlein married Elinor Curry of Kansas City. However, their marriage lasted only about a year. His second marriage, in 1932, to Leslyn MacDonald (1904–1981) lasted for 15 years. MacDonald was, according to the testimony of Heinlein's Navy friend, Rear Admiral Cal Laning, "astonishingly intelligent, widely read, and extremely liberal, though a registered Republican", while Isaac Asimov later recalled that Heinlein was, at the time, "a flaming liberal". (See section: Politics of Robert Heinlein.)
At the Philadelphia Naval Shipyard, Heinlein met and befriended a chemical engineer named Virginia "Ginny" Gerstenfeld. After the war, her engagement having fallen through, she attended UCLA for doctoral studies in chemistry, and while there reconnected with Heinlein. As his second wife's alcoholism gradually spun out of control, Heinlein moved out and the couple filed for divorce. Heinlein's friendship with Virginia turned into a relationship, and on October 21, 1948, shortly after the decree nisi came through, they married in the town of Raton, New Mexico. The couple set up housekeeping in the Broadmoor district of Colorado Springs, in a house that Heinlein and his wife (both engineers) designed. As the area was newly developed, they were allowed to choose their own house number, 1776 Mesa Avenue. The design of the house was featured in Popular Mechanics. They remained married until Heinlein's death. In 1965, after various chronic health problems of Virginia's were traced back to altitude sickness, they moved to Santa Cruz, California, which is at sea level. Robert and Virginia designed and built a new residence, circular in shape, in the adjacent village of Bonny Doon. Ginny undoubtedly served as a model for many of his intelligent, fiercely independent female characters. She was a chemist and rocket test engineer, and held a higher rank in the Navy than Heinlein himself. She was also an accomplished college athlete, earning four letters. In 1953–1954, the Heinleins voyaged around the world (mostly via ocean liners and cargo liners, as Ginny detested flying), a trip which Heinlein described in Tramp Royale and which also provided background material for science fiction novels set aboard spaceships on long voyages, such as Podkayne of Mars, Friday, and Job: A Comedy of Justice, the last initially being set on a cruise much as detailed in Tramp Royale. Ginny acted as the first reader of his manuscripts. Isaac Asimov believed that Heinlein made a swing to the right politically at the same time he married Ginny.

California

In 1934, Heinlein was discharged from the Navy due to pulmonary tuberculosis. During a lengthy hospitalization, and inspired by his own experience while bed-ridden, he developed a design for a waterbed. After his discharge, Heinlein attended a few weeks of graduate classes in mathematics and physics at the University of California, Los Angeles (UCLA), but he soon quit, either because of his ill health or because of a desire to enter politics. Heinlein supported himself at several occupations, including real estate sales and silver mining, but for some years found money in short supply. Heinlein was active in Upton Sinclair's socialist End Poverty in California (EPIC) movement in the early 1930s. He was deputy publisher of the EPIC News; Heinlein noted that the movement "recalled a mayor, kicked out a district attorney, replaced the governor with one of our choice." When Sinclair gained the Democratic nomination for Governor of California in 1934, Heinlein worked actively in the campaign. Heinlein himself ran for the California State Assembly in 1938, but was unsuccessful: he was running as a left-wing Democrat in a conservative district, and he never made it past the Democratic primary.

Author

While not destitute after the campaign (he had a small disability pension from the Navy), Heinlein turned to writing to pay off his mortgage. His first published story, "Life-Line", was printed in the August 1939 issue of Astounding Science Fiction.
Originally written for a contest, the story sold to Astounding for significantly more than the contest's first-prize payoff. Another Future History story, "Misfit", followed in November. Heinlein's talent was evident from his first story, and he was quickly acknowledged as a leader of the new movement toward "social" science fiction. In California he hosted the Mañana Literary Society, a 1940–41 series of informal gatherings of new authors. He was the guest of honor at Denvention, the 1941 Worldcon, held in Denver. During World War II, Heinlein was employed by the Navy as a civilian aeronautical engineer at the Navy Aircraft Materials Center at the Philadelphia Naval Shipyard in Pennsylvania. Heinlein recruited Isaac Asimov and L. Sprague de Camp to also work there. While at the Philadelphia Naval Shipyard, Asimov, Heinlein, and de Camp brainstormed unconventional approaches to countering kamikaze attacks, such as using sound to detect approaching planes. As the war wound down in 1945, Heinlein began to re-evaluate his career. The atomic bombings of Hiroshima and Nagasaki, along with the outbreak of the Cold War, galvanized him to write nonfiction on political topics. In addition, he wanted to break into better-paying markets. He published four influential short stories in The Saturday Evening Post, leading off, in February 1947, with "The Green Hills of Earth". That made him the first science fiction writer to break out of the "pulp ghetto". In 1950, the movie Destination Moon (the documentary-like film for which he had written the story and scenario, co-written the script, and invented many of the effects) won an Academy Award for special effects. He also embarked on a series of juvenile novels for the Charles Scribner's Sons publishing company that ran from 1947 through 1959, at the rate of one book each autumn, in time for Christmas presents to teenagers. He also wrote for Boys' Life in 1952. Heinlein had used topical materials throughout his juvenile series beginning in 1947, but in 1958 he interrupted work on The Heretic (the working title of Stranger in a Strange Land) to write and publish a book exploring ideas of civic virtue, initially serialized as "Starship Soldier". In 1959, this novel, now entitled Starship Troopers, was considered by the editors and owners of Scribner's to be too controversial for one of its prestige lines, and it was rejected. Heinlein found another publisher (Putnam), feeling himself released from the constraints of writing novels for children. He told an interviewer that he did not want to do stories that merely added to categories defined by other works; rather, he wanted to do his own work, stating: "I want to do my own stuff, my own way". He would go on to write a series of challenging books that redrew the boundaries of science fiction, including Stranger in a Strange Land (1961) and The Moon Is a Harsh Mistress (1966).

Later life and death

Beginning in 1970, Heinlein had a series of health crises, broken by strenuous periods of activity in his hobby of stonemasonry; in private correspondence, he referred to it as his "usual and favorite occupation between books". The decade began with a life-threatening attack of peritonitis, recovery from which required more than two years, and treatment of which required multiple transfusions of Heinlein's rare blood type, A2 negative. As soon as he was well enough to write again, he began work on Time Enough for Love (1973), which introduced many of the themes found in his later fiction.
In the mid-1970s, Heinlein wrote two articles for the Britannica Compton Yearbook. He and Ginny crisscrossed the country helping to reorganize blood donation in the United States in an effort to assist the system that had saved his life. At science fiction conventions, fans seeking his autograph would be asked to co-sign with Heinlein a beautifully embellished pledge form he supplied, stating that the recipient agreed to donate blood. In 1976 he was the guest of honor at a Worldcon for the third time, at MidAmeriCon in Kansas City, Missouri. At that Worldcon, Heinlein hosted a blood drive and a donors' reception to thank all those who had helped save lives. Beginning in 1977, and including an episode while vacationing in Tahiti in early 1978, he had episodes of reversible neurologic dysfunction due to transient ischemic attacks. Over the next few months, he became more and more exhausted, and his health again began to decline. The problem was determined to be a blocked carotid artery, and he had one of the earliest known carotid bypass operations to correct it. Heinlein and Virginia had been smokers, and smoking appears often in his fiction, as do fictitious strikable self-lighting cigarettes. In 1980, Robert Heinlein was a member of the Citizens Advisory Council on National Space Policy, chaired by Jerry Pournelle, which met at the home of SF writer Larry Niven to write space policy papers for the incoming Reagan Administration. Members included such aerospace industry leaders as former astronaut Buzz Aldrin, General Daniel O. Graham, aerospace engineer Max Hunter, and North American Rockwell VP for Space Shuttle development George Merrick. Policy recommendations from the Council included ballistic missile defense concepts that were later transformed into what was called the Strategic Defense Initiative. Heinlein assisted with the Council's contribution to Reagan's spring 1983 SDI speech. Asked to appear before a Joint Committee of the United States Congress that year, he testified on his belief that spin-offs from space technology were benefiting the infirm and the elderly. Heinlein's surgical treatment re-energized him, and he wrote five novels from 1980 until he died in his sleep from emphysema and heart failure on May 8, 1988. At that time, he had been putting together the early notes for another World as Myth novel. Several of his other works have been published posthumously. Based on an outline and notes created by Heinlein in 1955, Spider Robinson wrote the novel Variable Star. Heinlein's posthumously published nonfiction includes a selection of correspondence and notes edited into a somewhat autobiographical examination of his career, published in 1989 under the title Grumbles from the Grave by his wife, Virginia; his 1946 book on practical politics, published as Take Back Your Government; and a travelogue of their first around-the-world tour in 1954, Tramp Royale. The novels Podkayne of Mars and Red Planet, which were edited against his wishes in their original release, have been reissued in restored editions. Stranger in a Strange Land was originally published in a shorter form, but both the long and short versions are now simultaneously available in print. Heinlein's archive is housed by the Special Collections department of McHenry Library at the University of California at Santa Cruz. The collection includes manuscript drafts, correspondence, photographs, and artifacts.
A substantial portion of the archive has been digitized and is available online through the Robert A. and Virginia Heinlein Archives.

Works

Heinlein published 32 novels, 59 short stories, and 16 collections during his life. Four films, two television series, several episodes of a radio series, and a board game have been derived more or less directly from his work. He wrote a screenplay for one of the films. Heinlein edited an anthology of other writers' SF short stories. Three nonfiction books and two poems have been published posthumously. For Us, the Living: A Comedy of Customs was published posthumously in 2003; Variable Star, written by Spider Robinson based on an extensive outline by Heinlein, was published in September 2006. Four collections have been published posthumously.

Series

Over the course of his career, Heinlein wrote three somewhat overlapping series: the Future History series, the Lazarus Long series, and the Heinlein juveniles.

Early work, 1939–1958

Heinlein began his career as a writer of stories for Astounding Science Fiction magazine, which was edited by John Campbell. The science fiction writer Frederik Pohl has described Heinlein as "that greatest of Campbell-era sf writers". Isaac Asimov said that, from the time of his first story, the science fiction world accepted that Heinlein was the best science fiction writer in existence, adding that he would hold this title through his lifetime. Alexei and Cory Panshin noted that Heinlein's impact was immediately felt: in 1940, the year after selling "Life-Line" to Campbell, he wrote three short novels, four novelettes, and seven short stories. They went on to say that "No one ever dominated the science fiction field as Bob did in the first few years of his career." Alexei Panshin expressed awe at Heinlein's ability to show readers a world drastically different from the one we live in now, yet with so many similarities. He says that "We find ourselves not only in a world other than our own, but identifying with a living, breathing individual who is operating within its context, and thinking and acting according to its terms." The first novel that Heinlein wrote, For Us, the Living: A Comedy of Customs (1939), did not see print during his lifetime, but Robert James tracked down the manuscript and it was published in 2003. Though some regard it as a failure as a novel, considering it little more than a disguised lecture on Heinlein's social theories, some readers took a very different view. In a review of it, John Clute wrote: I'm not about to suggest that if Heinlein had been able to publish [such works] openly in the pages of Astounding in 1939, SF would have gotten the future right; I would suggest, however, that if Heinlein, and his colleagues, had been able to publish adult SF in Astounding and its fellow journals, then SF might not have done such a grotesquely poor job of prefiguring something of the flavor of actually living here at the onset of 2004. For Us, the Living was intriguing as a window into the development of Heinlein's radical ideas about man as a social animal, including his interest in free love. The roots of many themes found in his later stories can be found in this book. It also contained a large amount of material that could be considered background for his other novels. This included a detailed description of the protagonist's treatment to avoid being banished to Coventry (a lawless land in the Heinlein mythos where unrepentant law-breakers are exiled).
It appears that Heinlein at least attempted to live in a manner consistent with these ideals, even in the 1930s, and had an open relationship in his marriage to his second wife, Leslyn. He was also a nudist; nudism and body taboos are frequently discussed in his work. At the height of the Cold War, he built a bomb shelter under his house, like the one featured in Farnham's Freehold. After For Us, the Living, Heinlein began selling (to magazines) first short stories, then novels, set in a Future History, complete with a time line of significant political, cultural, and technological changes. A chart of the future history was published in the May 1941 issue of Astounding Science Fiction.
Paz, regarding the threat posed by government to individual freedom. Although Heinlein had previously written a few short stories in the fantasy genre, during this period he wrote his first fantasy novel, Glory Road. In Stranger in a Strange Land and I Will Fear No Evil, he began to mix hard science with fantasy, mysticism, and satire of organized religion. Critics William H. Patterson, Jr., and Andrew Thornton believe that this is simply an expression of Heinlein's longstanding philosophical opposition to positivism. Heinlein stated that he was influenced by James Branch Cabell in taking this new literary direction. The penultimate novel of this period, I Will Fear No Evil, is, according to critic James Gifford, "almost universally regarded as a literary failure", and he attributes its shortcomings to Heinlein's near-death from peritonitis.

Later work, 1980–1987

After a seven-year hiatus brought on by poor health, Heinlein produced five new novels in the period from 1980 (The Number of the Beast) to 1987 (To Sail Beyond the Sunset). These books share a common thread of characters and of time and place. They most explicitly communicated Heinlein's philosophies and beliefs, and many long, didactic passages of dialog and exposition deal with government, sex, and religion. These novels are controversial among his readers, and one critic, David Langford, has written about them very negatively. Heinlein's four Hugo Awards were all for books written before this period. Most of the novels from this period are recognized by critics as forming an offshoot from the Future History series, referred to by the term World as Myth. The tendency toward authorial self-reference begun in Stranger in a Strange Land and Time Enough for Love becomes even more evident in novels such as The Cat Who Walks Through Walls, whose first-person protagonist is a disabled military veteran who becomes a writer and finds love with a female character. The 1982 novel Friday, a more conventional adventure story (borrowing a character and backstory from the earlier short story "Gulf", and containing suggestions of a connection to The Puppet Masters), continued a Heinlein theme of depicting what he saw as the continuing disintegration of Earth's society, to the point where the title character is strongly encouraged to seek a new life off-planet. It concludes with a traditional Heinlein note, as in The Moon Is a Harsh Mistress or Time Enough for Love, that freedom is to be found on the frontiers. The 1984 novel Job: A Comedy of Justice is a sharp satire of organized religion. Heinlein himself was agnostic.

Posthumous publications

Several Heinlein works have been published since his death, including the aforementioned For Us, the Living as well as 1989's Grumbles from the Grave, a collection of letters between Heinlein and his editors and agent; 1992's Tramp Royale, a travelogue of a southern hemisphere tour the Heinleins took in the 1950s; Take Back Your Government, a how-to book about participatory democracy written in 1946, reflecting his experience as an organizer with the EPIC campaign of 1934 and the movement's aftermath as an important factor in California politics before the Second World War; and a tribute volume called Requiem: Collected Works and Tributes to the Grand Master, containing some additional short works previously unpublished in book form. Off the Main Sequence, published in 2005, includes three short stories never before collected in any Heinlein book (Heinlein called them "stinkeroos").
Spider Robinson, a colleague, friend, and admirer of Heinlein, wrote Variable Star based on an outline and notes for a juvenile novel that Heinlein prepared in 1955. The novel was published as a collaboration, with Heinlein's name above Robinson's on the cover, in 2006. A complete collection of Heinlein's published work has been published by the Heinlein Prize Trust as the "Virginia Edition", after his wife. See the Complete Works section of Robert A. Heinlein bibliography for details. On February 1, 2019, Phoenix Pick announced that, through a collaboration with the Heinlein Prize Trust, a reconstruction of the full text of an unpublished Heinlein novel had been produced. The reconstructed novel, entitled The Pursuit of the Pankera: A Parallel Novel about Parallel Universes, is an alternative version of The Number of the Beast: the first one-third of the two books is mostly the same, but the remainder of The Pursuit of the Pankera deviates entirely, with a completely different story-line. The newly reconstructed novel pays homage to Edgar Rice Burroughs and E. E. "Doc" Smith, and was edited by Patrick Lobrutto. Some reviewers described it as more in line with the style of a traditional Heinlein novel than The Number of the Beast, and even as superior to the original version. Both The Pursuit of the Pankera and a new edition of The Number of the Beast were published in March 2020; the new edition of the latter shares the subtitle of The Pursuit of the Pankera, being entitled The Number of the Beast: A Parallel Novel about Parallel Universes.

Influences

The primary influence on Heinlein's writing style may have been Rudyard Kipling. Kipling provides the first known modern example of "indirect exposition", a writing technique for which Heinlein later became famous, and Heinlein quotes Kipling in his essay "On the Writing of Speculative Fiction". Stranger in a Strange Land originated as a modernized version of Kipling's The Jungle Book; his wife suggested that the child be raised by Martians instead of wolves. Likewise, Citizen of the Galaxy can be seen as a reboot of Kipling's novel Kim. The Starship Troopers idea of needing to serve in the military in order to vote can be found in Kipling's "The Army of a Dream". Poul Anderson once said of Kipling's science fiction story "As Easy as A.B.C.", "a wonderful science fiction yarn, showing the same eye for detail that would later distinguish the work of Robert Heinlein". Heinlein described himself as also being influenced by George Bernard Shaw, having read most of his plays. Shaw is an example of an earlier author who used the competent man, a favorite Heinlein archetype. He denied, though, any direct influence of Back to Methuselah on Methuselah's Children.

Views

Heinlein's books probe a range of ideas about a range of topics such as sex, race, politics, and the military. Many were seen as radical or as ahead of their time in their social criticism. His books have inspired considerable debate about the specifics, and the evolution, of Heinlein's own opinions, and have earned him both lavish praise and a degree of criticism. He has also been accused of contradicting himself on various philosophical questions.
Brian Doherty cites William Patterson, saying that the best way to gain an understanding of Heinlein is as a "full-service iconoclast, the unique individual who decides that things do not have to be, and won't continue, as they are". He says this vision is "at the heart of Heinlein, science fiction, libertarianism, and America. Heinlein imagined how everything about the human world, from our sexual mores to our religion to our automobiles to our government to our plans for cultural survival, might be flawed, even fatally so." The critic Elizabeth Anne Hull, for her part, has praised Heinlein for his interest in exploring fundamental life questions, especially questions about "political power—our responsibilities to one another" and about "personal freedom, particularly sexual freedom". Edward R. Murrow hosted a series on CBS Radio called This I Believe, which solicited an entry from Heinlein in 1952. Titled "Our Noble, Essential Decency", it is probably the most enduring and popular entry of the series. In it, Heinlein broke with the normal trends, stating that he believed in his neighbors (some of whom he named and described), in his community, and in towns across America that share the same sense of good will and intentions as his own, going on to apply this same philosophy to the United States and to humanity in general.

Politics

Heinlein's political positions shifted throughout his life. His early political leanings were liberal. Heinlein's fiction of the 1940s and 1950s, however, began to espouse conservative views. After 1945, he came to believe that a strong world government was the only way to avoid mutual nuclear annihilation. His 1949 novel Space Cadet describes a future scenario where a military-controlled global government enforces world peace. Heinlein ceased considering himself a Democrat in 1954. The Heinleins formed the Patrick Henry League in 1958 to support continued U.S. nuclear weapons testing, taking out a newspaper advertisement entitled "Who Are the Heirs of Patrick Henry?", and they worked in the 1964 Barry Goldwater presidential campaign. The ad started with the famous Henry quotation: "Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take, but as for me, give me liberty, or give me death!" It then went on to admit that there was some risk to nuclear testing (albeit less than the "willfully distorted" claims of the test ban advocates), and a risk of nuclear war, but that "The alternative is surrender. We accept the risks." Heinlein was among those who in 1968 signed a pro-Vietnam War ad in Galaxy Science Fiction. Heinlein always considered himself a libertarian; in a 1967 letter to Judith Merril (never sent) he said, "As for libertarian, I've been one all my life, a radical one. You might use the term 'philosophical anarchist' or 'autarchist' about me, but 'libertarian' is easier to define and fits well enough." Stranger in a Strange Land was embraced by the 1960s counterculture, and libertarians have found inspiration in The Moon Is a Harsh Mistress. Both groups found resonance with his themes of personal freedom in both thought and action.

Race

Heinlein grew up in the era of racial segregation in the United States and wrote some of his most influential fiction at the height of the Civil Rights Movement. He explicitly made the case for using his fiction not only to predict the future but also to educate his readers about the value of racial equality and the importance of racial tolerance.
His early novels were very much ahead of their time both in their explicit rejection of racism and in their inclusion of protagonists of color. In the context of science fiction before the 1960s, the mere existence of characters of color was a remarkable novelty, with green occurring more often than brown. For example, his 1948 novel Space Cadet explicitly uses aliens as a metaphor for minorities. In his novel The Star Beast, the de facto foreign minister of the Terran government is an undersecretary, a Mr. Kiku, who is from Africa; Heinlein states explicitly that Kiku's skin is "ebony black" and that Kiku is in a happy arranged marriage. In a number of his stories, Heinlein challenges his readers' possible racial preconceptions by introducing a strong, sympathetic character, only to reveal much later that he or she is of African or other ancestry. In several cases, the covers of the books show characters as being light-skinned when the text states, or at least implies, that they are dark-skinned or of African ancestry. Heinlein repeatedly denounced racism in his nonfiction works, including numerous examples in Expanded Universe. Heinlein reveals in Starship Troopers that the novel's protagonist and narrator, Johnny Rico, the formerly disaffected scion of a wealthy family, is Filipino, actually named "Juan Rico", and speaks Tagalog in addition to English. Race was a central theme in some of Heinlein's fiction. The most prominent example is Farnham's Freehold, which casts a white family into a future in which white people are the slaves of cannibalistic black rulers. In the 1941 novel Sixth Column (also known as The Day After Tomorrow), a white resistance movement in the United States defends itself against an invasion by an Asian fascist state (the "Pan-Asians") using a "super-science" technology that allows ray weapons to be tuned to specific races. The book is sprinkled with racist slurs against Asian people, and black and Hispanic people are not mentioned at all. The idea for the story was pushed on Heinlein by editor John W. Campbell, and Heinlein wrote later that he had "had to re-slant it to remove racist aspects of the original story line" and that he did not "consider it to be an artistic success". The novel also prompted a heated debate in the scientific community regarding the plausibility of developing ethnic bioweapons. John Hickman, writing in the European Journal of American Studies, identifies examples of anti-East Asian racism in some of Heinlein's works, particularly Sixth Column. Heinlein summed up his attitude toward people of any race in his essay "Our Noble, Essential Decency".

Individualism and self-determination

In keeping with his belief in individualism, his work for adults, and sometimes even his work for juveniles, often portrays both the oppressors and the oppressed with considerable ambiguity. Heinlein believed that individualism was incompatible with ignorance. He believed that an appropriate level of adult competence was achieved through a wide-ranging education, whether this occurred in a classroom or not. In his juvenile novels, more than once a character looks with disdain at a student's choice of classwork, saying, "Why didn't you study something useful?" In Time Enough for Love, Lazarus Long gives a long list of capabilities that anyone should have, concluding, "Specialization is for insects." The ability of the individual to create himself is explored in stories such as I Will Fear No Evil, "—All You Zombies—", and "By His Bootstraps".
Heinlein claimed to have written Starship Troopers in response to "calls for the unilateral ending of nuclear testing by the United States". Heinlein suggests in the book that the Bugs are a good example of communism being something that humans cannot successfully adhere to: humans are strongly defined individuals, whereas the Bugs, being a collective, can all contribute to the whole without consideration of individual desire.

Sexual issues

For Heinlein, personal liberation included sexual liberation, and free love was a major subject of his writing starting in 1939, with For Us, the Living. During his early period, Heinlein's writing for younger readers needed to take account of both editorial perceptions of sexuality in his novels and potential perceptions among the buying public; as critic William H. Patterson has put it, his dilemma was "to sort out what was really objectionable from what was only excessive over-sensitivity to imaginary librarians". By his middle period, sexual freedom and the elimination of sexual jealousy had become a major theme; for instance, in Stranger in a Strange Land (1961), the progressively minded but sexually conservative reporter Ben Caxton acts as a dramatic foil for the less parochial characters, Jubal Harshaw and Valentine Michael Smith (Mike). Another of the main characters, Jill, is homophobic and says that "nine times out of ten, if a girl gets raped it's partly her own fault." In books written as early as 1956, Heinlein dealt with incest and the sexual nature of children. Many of his books, including Time for the Stars, Glory Road, Time Enough for Love, and The Number of the Beast, dealt explicitly or implicitly with incest and with sexual feelings and relations between adults, children, or both. The treatment of these themes includes the romantic relationship and eventual marriage of two characters in The Door into Summer who met when one was a 30-year-old engineer and the other was an 11-year-old girl, and who eventually married when time travel rendered the girl an adult while the engineer aged minimally, as well as the more overt intra-familial incest in To Sail Beyond the Sunset and Farnham's Freehold. Heinlein often posed situations where the nominal purpose of sexual taboos was irrelevant to a particular situation, due to future advances in technology. For example, in Time Enough for Love Heinlein describes a brother and sister (Joe and Llita) who were mirror twins, being complementary diploids with entirely disjoint genomes, and thus not at increased risk for unfavorable gene duplication due to consanguinity. In this instance, Llita and Joe were props used to explore the concept of incest in a setting where the usual objection to it (a heightened risk of genetic defect in the children) was not a consideration. Peers such as L. Sprague de Camp and Damon Knight have commented critically on Heinlein's portrayal of incest and pedophilia in a lighthearted and even approving manner. Diane Parkin-Speer suggests that Heinlein's intent seems more to provoke the reader and to question sexual norms than to promote any particular sexual agenda.

Philosophy

In To Sail Beyond the Sunset, Heinlein has the main character, Maureen, state that the purpose of metaphysics is to ask questions: "Why are we here?" "Where are we going after we die?" (and so on); and that you are not allowed to answer the questions.
Asking the questions is the point of metaphysics, but answering them is not, because once you answer this kind of question, you cross the line into religion. Maureen does not state a reason for this; she simply remarks that such questions are "beautiful" but lack answers. Maureen's son/lover Lazarus Long makes a related remark in Time Enough for Love: in order for us to answer the "big questions" about the universe, Lazarus states at one point, it would be necessary to stand outside the universe. During the 1930s and 1940s, Heinlein was deeply interested in Alfred Korzybski's general semantics and attended a number of seminars on the subject. His views on epistemology seem to have flowed from that interest, and his fictional characters continue to express Korzybskian views to the very end of his writing career. Many of his stories, such as "Gulf", If This Goes On—, and Stranger in a Strange Land, depend strongly on the premise, related to the well-known Sapir–Whorf hypothesis, that by using a correctly designed language, one can change or improve oneself mentally, or even realize untapped potential (as in the case of Joe in "Gulf", whose last name may be Greene, Gilead, or Briggs). When Ayn Rand's novel The Fountainhead was published, Heinlein was very favorably impressed, as quoted in Grumbles from the Grave, and he mentioned John Galt, the hero of Rand's Atlas Shrugged, as a heroic archetype in The Moon Is a Harsh Mistress. He was also strongly affected by the religious philosopher P. D. Ouspensky. Freudianism and psychoanalysis were at the height of their influence during the peak of Heinlein's career, and stories such as Time for the Stars indulged in psychological theorizing. However, he was skeptical about Freudianism, especially after a struggle with an editor who insisted on reading Freudian sexual symbolism into his juvenile novels. Heinlein was fascinated by the social credit movement in the 1930s. This is shown in Beyond This Horizon and in his 1938 novel For Us, the Living: A Comedy of Customs, which was finally published in 2003, long after his death.

Pay it forward

On that theme, the phrase "pay it forward", though it was already in occasional use as a quotation, was popularized by Robert A. Heinlein in his book Between Planets, published in 1951. He referred to the idea in a number of other stories, although sometimes just as paying a debt back by helping others, as in one of his last works, Job: A Comedy of Justice. Heinlein was a mentor to Ray Bradbury, giving him help and quite possibly passing on the concept; the publication of a letter from Bradbury to Heinlein thanking him made the connection famous. In Bradbury's novel Dandelion Wine, published in 1957, the main character Douglas Spaulding reflects on his life being saved by Mr. Jonas, the junkman. Bradbury has also advised that writers he has helped thank him by helping other writers. Heinlein both preached and practiced this philosophy; now the Heinlein Society, a humanitarian organization founded in his name, does so, attributing the philosophy to its various efforts, including Heinlein for Heroes, the Heinlein Society Scholarship Program, and Heinlein Society blood drives. Author Spider Robinson made repeated reference to the doctrine, attributing it to his spiritual mentor Heinlein.

Influence and legacy

Honorifics

Heinlein is usually identified, along with Isaac Asimov and Arthur C. Clarke, as one of the three masters of science fiction to arise in the so-called Golden Age of science fiction, associated with John W.
Campbell and his magazine Astounding. In the 1950s he was a leader in bringing science fiction out of the low-paying and less prestigious "pulp ghetto". Most of his works, including short stories, have been continuously in print in many languages since their initial appearance and are still available as new paperbacks decades after his death. He was at the top of his form during, and himself helped to initiate, the trend toward social science fiction, which went along with a general maturing of the genre away from space opera to a more literary approach touching on such adult issues as politics and human sexuality. In reaction to this trend, hard science fiction began to be distinguished as a separate subgenre, but paradoxically Heinlein is also considered a seminal figure in hard science fiction, due to his extensive knowledge of engineering and the careful scientific research demonstrated in his stories. Heinlein himself stated, with obvious pride, that in the days before pocket calculators, he and his wife Virginia once worked for several days on a mathematical equation describing an Earth–Mars rocket orbit, which was then subsumed in a single sentence of the novel Space Cadet. Writing style Heinlein is often credited with bringing serious writing techniques to the genre of science fiction. For example, when writing about fictional worlds, previous authors were often limited by the reader's existing knowledge of a typical "space opera" setting, leading to a relatively low level of creativity: the same starships, death rays, and horrifying rubbery aliens became ubiquitous. Relying on such stock settings was necessary unless the author was willing to go into long expositions about the setting of the story, at a time when word count was at a premium in SF. But Heinlein utilized a technique called "indirect exposition", perhaps first introduced by Rudyard Kipling in his own science fiction venture, the Aerial Board of Control stories. Kipling had picked this up during his time in India, using it to avoid bogging down his stories set in India with explanations for his English readers. This technique, mentioning details in a way that lets the reader infer more about the universe than is actually spelled out, became a trademark rhetorical device of both Heinlein and the generation of writers influenced by him. Heinlein was significantly influenced by Kipling beyond this, for example quoting him in "On the Writing of Speculative Fiction". Likewise, Heinlein's name is often associated with the competent hero, a character archetype who, though he or she may have flaws and limitations, is a strong, accomplished person able to overcome any soluble problem set in their path. Such characters tend to be confident overall, to have broad life experience and a wide set of skills, and not to give up when the going gets tough. This archetype influenced not only the writing style of a generation of authors, but even their personal character. Harlan Ellison once said, "Very early in life when I read Robert Heinlein I got the thread that runs through his stories—the notion of the competent man ... I've always held that as my ideal. I've tried to be a very competent man." Rules of writing When fellow writers or fans wrote Heinlein asking for writing advice, he famously gave out his own list of rules for becoming a successful writer: You must write. Finish what you start. You must refrain from rewriting, except to editorial order. You must put your story on the market. You must keep it on the market until it has sold.
Heinlein later published an entire article, "On the Writing of Speculative Fiction", which included these rules alongside his other guidelines. In the same article, he describes most stories as fitting into one of a handful of basic categories: the gadget story; the human interest story; boy meets girl; the Little Tailor; and the man-who-learned-better. Heinlein proposes that most stories fit into either the gadget story or the human interest story, the latter being subdivided into the three remaining categories. He also credits L. Ron Hubbard as having identified "The Man-Who-Learned-Better". Influence among writers Heinlein has had a pervasive influence on other science fiction writers. In a 1953 poll of leading science fiction authors, he was cited more frequently as an influence than any other modern writer. Critic James Gifford writes that Heinlein gave Larry Niven and Jerry Pournelle extensive advice on a draft manuscript of The Mote in God's Eye. He contributed a cover blurb: "Possibly the finest science fiction novel I have ever read." Writer David Gerrold, responsible for creating the tribbles in Star Trek, also credited Heinlein as the inspiration for his Dingilliad series of novels. Gregory Benford refers to his novel Jupiter Project as a Heinlein tribute. Similarly, Charles Stross says his Hugo Award-nominated novel Saturn's Children is "a space opera and late-period Robert A. Heinlein tribute", referring to Heinlein's Friday. The theme and plot of Kameron Hurley's novel The Light Brigade clearly echo those of Heinlein's Starship Troopers. Words and phrases coined Even outside the science fiction community, several words and phrases coined or adopted by Heinlein have passed into common English usage: Waldo, the protagonist of the eponymous short story "Waldo", whose name came to mean mechanical or robot arms in the real world akin to the ones used by the character in the story. Moonbat, used in United States politics as a pejorative epithet for progressives or leftists, was originally the name of a spaceship in his story "Space Jockey". Grok, a "Martian" word for understanding a thing so fully as to become one with it, from Stranger in a Strange Land. Space marine, an existing term popularized by Heinlein in short stories, the concept then being made famous by Starship Troopers, though the term "space marine" is not used in that novel. Speculative fiction, a term Heinlein used to separate serious, consistent science fiction writing from the pop "sci fi" of the day, which generally took great artistic license with human knowledge, amounting to something more like space fantasy than science fiction. Inspiring culture and technology In 1962, Oberon Zell-Ravenheart (then still using his birth name, Tim Zell) founded the Church of All Worlds, a Neopagan religious organization modeled in many ways (including its name) after the treatment of religion in the novel Stranger in a Strange Land. This spiritual path included several ideas from the book, including non-mainstream family structures, social libertarianism, water-sharing rituals, an acceptance of all religious paths by a single tradition, and the use of several terms such as "grok", "Thou art God", and "Never Thirst".
Though Heinlein was neither a member nor a promoter of the Church, there was a frequent exchange of correspondence between Zell and Heinlein, and Heinlein was a paid subscriber to the Church's magazine, Green Egg. The Church still exists as a 501(c)(3) religious organization incorporated in California, with membership worldwide, and it remains an active part of the neopagan community today. Zell-Ravenheart's wife, Morning Glory, coined the term polyamory in 1990; the polyamory movement likewise counts Heinlein's ideas among its roots. Heinlein was influential in making space exploration seem to the public more like a practical possibility. His stories in publications such as The Saturday Evening Post took a matter-of-fact approach to their outer-space setting, rather than the "gee whiz" tone that had previously been common. The documentary-like film Destination Moon advocated a Space Race with an unspecified foreign power almost a decade before such an idea became commonplace, and was promoted by an unprecedented publicity campaign in print publications. Many of the astronauts and others working in the U.S. space program grew up on a diet of the Heinlein juveniles, best evidenced by the naming of a crater on Mars after him and by a tribute the Apollo 15 astronauts worked into their radio conversations while on the Moon. Heinlein was also a guest commentator (along with fellow sci-fi author Arthur C. Clarke) for Walter Cronkite's coverage of the Apollo 11 Moon landing. He remarked to Cronkite during the landing: "This is the greatest event in human history, up to this time. This is—today is New Year's Day of the Year One." Businessman and entrepreneur Elon Musk has likewise cited Heinlein's novels as an influence.
Evidence of anatomically modern humans was found at Kostyonki and Borshchyovo and at Sungir, dating back to 34,600 years ago; both sites lie in western Russia. Humans reached Arctic Russia at least 40,000 years ago, at Mamontovaya Kurya. Nomadic pastoralism developed in the Pontic–Caspian steppe beginning in the Chalcolithic. Remnants of these steppe civilizations were discovered in places such as Ipatovo, Sintashta, Arkaim, and Pazyryk, which bear the earliest known traces of horses in warfare. In classical antiquity, the Pontic–Caspian Steppe was known as Scythia. In the late 8th century BCE, Ancient Greek traders brought classical civilization to the trade emporiums of Tanais and Phanagoria. In the 3rd to 4th centuries AD, the Gothic kingdom of Oium existed in Southern Russia, until it was overrun by the Huns. Between the 3rd and 6th centuries AD, the Bosporan Kingdom, a Hellenistic polity that succeeded the Greek colonies, was likewise overwhelmed by nomadic invasions led by warlike tribes such as the Huns and Eurasian Avars. The Khazars, who were of Turkic origin, ruled the lower Volga basin steppes between the Caspian and Black Seas until the 10th century. The ancestors of the Russians are among the Slavic tribes that separated from the Proto-Indo-Europeans and appeared in the northeastern part of Europe about 1,500 years ago. The East Slavs gradually settled western Russia in two waves: one moving from Kiev towards present-day Suzdal and Murom, and another from Polotsk towards Novgorod and Rostov. From the 7th century onwards, the East Slavs constituted the bulk of the population in western Russia, and slowly but peacefully assimilated the native Finnic peoples. Kievan Rus' The establishment of the first East Slavic states in the 9th century coincided with the arrival of the Varangians, the Vikings who ventured along the waterways extending from the eastern Baltic to the Black and Caspian Seas. According to the Primary Chronicle, a Varangian from the Rus' people named Rurik was elected ruler of Novgorod in 862. In 882, his successor Oleg ventured south and conquered Kiev, which had previously been paying tribute to the Khazars. Rurik's son Igor and Igor's son Sviatoslav subsequently subdued all local East Slavic tribes to Kievan rule, destroyed the Khazar Khaganate, and launched several military expeditions to Byzantium and Persia. In the 10th to 11th centuries, Kievan Rus' became one of the largest and most prosperous states in Europe. The reigns of Vladimir the Great (980–1015) and his son Yaroslav the Wise (1019–1054) constitute the Golden Age of Kiev, which saw the acceptance of Orthodox Christianity from Byzantium and the creation of the first East Slavic written legal code, the Russkaya Pravda. There followed an age of feudalism and decentralization, marked by constant in-fighting between members of the Rurik dynasty that ruled Kievan Rus' collectively. Kiev's dominance waned, to the benefit of Vladimir-Suzdal in the north-east, the Novgorod Republic in the north-west, and Galicia-Volhynia in the south-west. Kievan Rus' ultimately disintegrated, with the final blow being the Mongol invasion of 1237–40, which resulted in the sacking of Kiev and the death of a major part of the population of Rus'. The invaders, later known as Tatars, formed the state of the Golden Horde, which pillaged the Russian principalities and ruled the southern and central expanses of Russia for over two centuries.
Galicia-Volhynia was eventually assimilated by the Kingdom of Poland, while the Novgorod Republic and Vladimir-Suzdal, two regions on the periphery of Kiev, established the basis for the modern Russian nation. Led by Prince Alexander Nevsky, Novgorodians repelled the invading Swedes in the Battle of the Neva in 1240, as well as the Germanic crusaders in the Battle of the Ice in 1242. Grand Duchy of Moscow The most powerful state to eventually arise after the destruction of Kievan Rus' was the Grand Duchy of Moscow, initially a part of Vladimir-Suzdal. While still under the domain of the Mongol-Tatars, and with their connivance, Moscow began to assert its influence in the region in the early 14th century, gradually becoming the leading force in the reunification of the Rus' lands and the expansion of Russia. Moscow's last rival, the Novgorod Republic, prospered as the chief fur trade centre and the easternmost port of the Hanseatic League. Led by Prince Dmitry Donskoy of Moscow and helped by the Russian Orthodox Church, the united army of Russian principalities inflicted a milestone defeat on the Mongol-Tatars in the Battle of Kulikovo in 1380. Moscow gradually absorbed its parent Vladimir-Suzdal, and then the surrounding principalities, including formerly strong rivals such as Tver and Novgorod. Ivan III ("the Great") finally threw off the control of the Golden Horde and consolidated the whole of northern Rus' under Moscow's dominion, and was the first Russian ruler to take the title "Grand Duke of all Rus'". After the fall of Constantinople in 1453, Moscow claimed succession to the legacy of the Eastern Roman Empire. Ivan III married Sophia Palaiologina, the niece of the last Byzantine emperor Constantine XI, and made the Byzantine double-headed eagle his own, and eventually Russia's, coat of arms. Tsardom of Russia In development of the Third Rome ideas, the grand duke Ivan IV (the "Terrible") was officially crowned the first tsar of Russia in 1547. The tsar promulgated a new code of laws (the Sudebnik of 1550), established the first Russian feudal representative body (the Zemsky Sobor), revamped the military, curbed the influence of the clergy, and reorganized local government. During his long reign, Ivan nearly doubled the already large Russian territory by annexing the three Tatar khanates: Kazan and Astrakhan along the Volga, and the Khanate of Sibir in southwestern Siberia. Ultimately, by the end of the 16th century, Russia had expanded east of the Ural Mountains. However, the Tsardom was weakened by the long and unsuccessful Livonian War against the coalition of the Kingdom of Poland and the Grand Duchy of Lithuania (later the united Polish–Lithuanian Commonwealth), the Kingdom of Sweden, and Denmark–Norway for access to the Baltic coast and sea trade. In 1572, an invading army of Crimean Tatars was thoroughly defeated in the crucial Battle of Molodi. The death of Ivan's sons marked the end of the ancient Rurik dynasty in 1598, and in combination with the disastrous famine of 1601–03 led to a civil war, the rule of pretenders, and foreign intervention during the Time of Troubles in the early 17th century. The Polish–Lithuanian Commonwealth, taking advantage, occupied parts of Russia, extending into the capital Moscow. In 1612, the Poles were forced to retreat by the Russian volunteer corps, led by the merchant Kuzma Minin and Prince Dmitry Pozharsky.
The Romanov dynasty acceded to the throne in 1613 by the decision of the Zemsky Sobor, and the country began its gradual recovery from the crisis. Russia continued its territorial growth through the 17th century, which was the age of the Cossacks. In 1654, the Ukrainian leader Bohdan Khmelnytsky offered to place Ukraine under the protection of the Russian tsar, Alexis; his acceptance of this offer led to another Russo-Polish War. Ultimately, Ukraine was split along the Dnieper, leaving the eastern part (Left-bank Ukraine and Kiev) under Russian rule. In the east, the rapid Russian exploration and colonisation of vast Siberia continued, driven by the hunt for valuable furs and ivory. Russian explorers pushed eastward primarily along the Siberian River Routes, and by the mid-17th century there were Russian settlements in eastern Siberia, on the Chukchi Peninsula, along the Amur River, and on the coast of the Pacific Ocean. In 1648, Semyon Dezhnyov became the first European to navigate through the Bering Strait. Imperial Russia Under Peter the Great, Russia was proclaimed an empire in 1721, and became one of the European great powers. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War (1700–1721), securing Russia's access to the sea and to sea trade. In 1703, on the Baltic Sea, Peter founded Saint Petersburg as Russia's new capital. Throughout his rule, sweeping reforms were made, which brought significant Western European cultural influences to Russia. The reign of Peter I's daughter Elizabeth in 1741–62 saw Russia's participation in the Seven Years' War (1756–63). During the conflict, Russian troops overran East Prussia and even reached the gates of Berlin. However, upon Elizabeth's death, all these conquests were returned to the Kingdom of Prussia by the pro-Prussian Peter III of Russia. Catherine II ("the Great"), who ruled in 1762–96, presided over the Russian Age of Enlightenment. She extended Russian political control over the Polish–Lithuanian Commonwealth and annexed most of its territories into Russia, making it the most populous country in Europe. In the south, after the successful Russo-Turkish Wars against the Ottoman Empire, Catherine advanced Russia's boundary to the Black Sea by dissolving the Crimean Khanate and annexing Crimea. As a result of victories over Qajar Iran in the Russo-Persian Wars, by the first half of the 19th century Russia had also made significant territorial gains in the Caucasus. Catherine's successor, her son Paul, was unstable and focused predominantly on domestic issues. Following his short reign, Catherine's expansionist strategy was continued by Alexander I (1801–25), who wrested Finland from the weakened Sweden in 1809 and Bessarabia from the Ottomans in 1812. In North America, the Russians became the first Europeans to reach and colonize Alaska. In 1803–1806, the first Russian circumnavigation was made. In 1820, a Russian expedition discovered the continent of Antarctica. During the Napoleonic Wars, Russia joined alliances with various European powers and fought against France. The French invasion of Russia at the height of Napoleon's power in 1812 reached Moscow, but eventually failed miserably, as obstinate resistance combined with the bitterly cold Russian winter led to a disastrous defeat of the invaders, in which the pan-European Grande Armée faced utter destruction.
Led by Mikhail Kutuzov and Michael Andreas Barclay de Tolly, the Imperial Russian Army ousted Napoleon and drove through Europe in the War of the Sixth Coalition, ultimately entering Paris. Alexander I controlled Russia's delegation at the Congress of Vienna, which defined the map of post-Napoleonic Europe. The officers who pursued Napoleon into Western Europe brought ideas of liberalism back to Russia, and attempted to curtail the tsar's powers during the abortive Decembrist revolt of 1825. The conservative reign of Nicholas I (1825–55), a zenith period of Russia's power and influence in Europe, ended with defeat in the Crimean War. Nicholas's successor Alexander II (1855–81) enacted significant changes throughout the country, including the emancipation reform of 1861. These reforms spurred industrialisation and modernized the Imperial Russian Army, which liberated much of the Balkans from Ottoman rule in the aftermath of the 1877–78 Russo-Turkish War. During most of the 19th and early 20th century, Russia and Britain competed for influence over Afghanistan and its neighboring territories in Central and South Asia; the rivalry between the two major European empires came to be known as the Great Game. The late 19th century saw the rise of various socialist movements in Russia. Alexander II was assassinated in 1881 by revolutionary terrorists. The reign of his son Alexander III (1881–94) was less liberal but more peaceful. The last Russian emperor, Nicholas II (1894–1917), was unable to prevent the events of the Russian Revolution of 1905, triggered by the humiliating Russo-Japanese War and the massacre of demonstrators known as Bloody Sunday. The uprising was put down, but the government was forced to concede major reforms (the Russian Constitution of 1906), including granting the freedoms of speech and assembly, the legalisation of political parties, and the creation of an elected legislative body, the State Duma. Revolution and civil war In 1914, Russia entered World War I in response to Austria-Hungary's declaration of war on Russia's ally Serbia, and fought across multiple fronts while isolated from its Triple Entente allies. In 1916, the Brusilov Offensive of the Imperial Russian Army almost completely destroyed the Austro-Hungarian Army. However, the already-existing public distrust of the regime was deepened by the rising costs of war, high casualties, and rumors of corruption and treason. All this formed the climate for the Russian Revolution of 1917, carried out in two major acts. In early 1917, Nicholas II was forced to abdicate; he and his family were imprisoned and later executed in Yekaterinburg during the Russian Civil War. The monarchy was replaced by a shaky coalition of political parties that declared itself the Provisional Government. The Provisional Government proclaimed the Russian Republic in September. On 6 (19) January 1918, the Russian Constituent Assembly declared Russia a democratic federal republic (thus ratifying the Provisional Government's decision). The next day, the Constituent Assembly was dissolved by the All-Russian Central Executive Committee. An alternative socialist establishment, the Petrograd Soviet, co-existed alongside it, wielding power through the democratically elected councils of workers and peasants, called soviets.
The rule of the new authorities only aggravated the crisis in the country instead of resolving it, and eventually the October Revolution, led by Bolshevik leader Vladimir Lenin, overthrew the Provisional Government and gave full governing power to the Soviets, leading to the creation of the world's first socialist state. The Russian Civil War broke out between the anti-communist White movement and the new Soviet regime with its Red Army. In the Treaty of Brest-Litovsk, which concluded hostilities with the Central Powers of World War I, Bolshevist Russia surrendered most of its western territories, which hosted 34% of its population, 54% of its industries, 32% of its agricultural land, and roughly 90% of its coal mines. The Allied powers launched an unsuccessful military intervention in support of anti-communist forces. In the meantime, both the Bolsheviks and the White movement carried out campaigns of deportations and executions against each other, known respectively as the Red Terror and the White Terror. By the end of the violent civil war, Russia's economy and infrastructure were heavily damaged, and as many as 10 million people had perished during the war, mostly civilians. Millions became White émigrés, and the Russian famine of 1921–22 claimed up to five million victims. Soviet Union On 30 December 1922, Lenin and his aides formed the Soviet Union by merging the Russian SFSR with the Byelorussian, Transcaucasian, and Ukrainian republics into a single state. Eventually, internal border changes and annexations during World War II created a union of 15 republics, the largest in size and population being the Russian SFSR, which dominated the union for its entire history politically, culturally, and economically. Following Lenin's death in 1924, a troika was designated to take charge. Eventually Joseph Stalin, the General Secretary of the Communist Party, managed to suppress all opposition factions and consolidate power in his hands to become the country's dictator by the 1930s. Leon Trotsky, the main proponent of world revolution, was exiled from the Soviet Union in 1929, and Stalin's idea of Socialism in One Country became the official line. The continued internal struggle in the Bolshevik party culminated in the Great Purge. Under Stalin's leadership, the government launched a command economy, industrialisation of the largely rural country, and collectivisation of its agriculture. During this period of rapid economic and social change, millions of people were sent to penal labor camps, including many political convicts for their suspected or real opposition to Stalin's rule, and millions were deported and exiled to remote areas of the Soviet Union. The transitional disorganisation of the country's agriculture, combined with harsh state policies and a drought, led to the Soviet famine of 1932–1933, which killed up to 8.7 million people. The Soviet Union ultimately made the costly transformation from a largely agrarian economy to a major industrial powerhouse within a short span of time. World War II The Soviet Union entered World War II on 17 September 1939 with its invasion of Poland, in accordance with a secret protocol within the Molotov–Ribbentrop Pact with Nazi Germany. The Soviet Union later invaded Finland, and occupied and annexed the Baltic states, as well as parts of Romania. On 22 June 1941, Germany invaded the Soviet Union, opening the Eastern Front, the largest theater of World War II.
Eventually, some 5 million Red Army troops were captured by the Nazis, who deliberately starved to death or otherwise killed 3.3 million Soviet POWs and a vast number of civilians, as the "Hunger Plan" sought to fulfill Generalplan Ost. Although the Wehrmacht had considerable early success, its attack was halted in the Battle of Moscow. Subsequently, the Germans were dealt major defeats, first at the Battle of Stalingrad in the winter of 1942–43, and then in the Battle of Kursk in the summer of 1943. Another German failure was the Siege of Leningrad, in which the city was fully blockaded on land between 1941 and 1944 by German and Finnish forces, and suffered starvation and more than a million deaths, but never surrendered. Soviet forces steamrolled through Eastern and Central Europe in 1944–45 and captured Berlin in May 1945. In August 1945, the Soviet Army invaded Manchuria and ousted the Japanese from Northeast Asia, contributing to the Allied victory over Japan. The 1941–45 period of World War II is known in Russia as the Great Patriotic War. The Soviet Union, along with the United States, the United Kingdom and China, was considered one of the Big Four of Allied powers in World War II, and later became one of the Four Policemen, the foundation of the United Nations Security Council. During the war, Soviet civilian and military deaths were about 26–27 million, accounting for about half of all World War II casualties. The Soviet economy and infrastructure suffered massive devastation, which caused the Soviet famine of 1946–47. However, at the expense of a large sacrifice, the Soviet Union emerged as a global superpower. Cold War After World War II, parts of Eastern and Central Europe, including East Germany and eastern parts of Austria, were occupied by the Red Army according to the Potsdam Conference. Dependent communist governments were installed in the Eastern Bloc satellite states. After becoming the world's second nuclear power, the Soviet Union established the Warsaw Pact alliance and entered into a struggle for global dominance, known as the Cold War, with the rival United States and NATO. After Stalin's death in 1953 and a short period of collective rule, the new leader Nikita Khrushchev denounced Stalin and launched the policy of de-Stalinization, releasing many political prisoners from the Gulag labor camps. The general easing of repressive policies became known later as the Khrushchev Thaw. At the same time, Cold War tensions reached their peak when the two rivals clashed over the deployment of United States Jupiter missiles in Turkey and Soviet missiles in Cuba. In 1957, the Soviet Union launched the world's first artificial satellite, Sputnik 1, thus starting the Space Age. Russian cosmonaut Yuri Gagarin became the first human to orbit the Earth, aboard the Vostok 1 manned spacecraft, on 12 April 1961. Following the ousting of Khrushchev in 1964, another period of collective rule ensued, until Leonid Brezhnev became the leader. The era of the 1970s and the early 1980s was later designated the Era of Stagnation. The 1965 Kosygin reform aimed for partial decentralisation of the Soviet economy. In 1979, after a communist-led revolution in Afghanistan, Soviet forces invaded the country, ultimately starting the Soviet–Afghan War. In May 1988, the Soviets started to withdraw from Afghanistan, due to international opposition, persistent anti-Soviet guerrilla warfare, and a lack of support among Soviet citizens.
From 1985 onwards, the last Soviet leader, Mikhail Gorbachev, who sought to enact liberal reforms in the Soviet system, introduced the policies of glasnost (openness) and perestroika (restructuring) in an attempt to end the period of economic stagnation and to democratize the government. This, however, led to the rise of strong nationalist and separatist movements across the country. Prior to 1991, the Soviet economy was the world's second-largest, but during its final years it went into crisis. By 1991, economic and political turmoil began to boil over as the Baltic states chose to secede from the Soviet Union. On 17 March 1991, a referendum was held in which the vast majority of participating citizens voted in favour of changing the Soviet Union into a renewed federation. In June 1991, Boris Yeltsin became the first directly elected president in Russian history when he was elected president of the Russian SFSR. In August 1991, a coup d'état attempt by members of Gorbachev's government, directed against Gorbachev and aimed at preserving the Soviet Union, instead led to the end of the Communist Party of the Soviet Union. On 25 December 1991, the Soviet Union was dissolved, and contemporary Russia emerged along with fourteen other post-Soviet states. Post-Soviet Russia (1991–present) The economic and political collapse of the Soviet Union led Russia into a deep and prolonged depression. During and after the disintegration of the Soviet Union, wide-ranging reforms including privatisation and market and trade liberalisation were undertaken, including radical changes along the lines of "shock therapy". The privatisation largely shifted control of enterprises from state agencies to individuals with inside connections in the government, which led to the rise of the infamous Russian oligarchs. Many of the newly rich moved billions in cash and assets out of the country in an enormous capital flight. The depression of the economy led to the collapse of social services: the birth rate plummeted while the death rate skyrocketed, millions plunged into poverty, extreme corruption took hold, and criminal gangs and organised crime rose significantly. In late 1993, tensions between Yeltsin and the Russian parliament culminated in a constitutional crisis which ended violently through military force. During the crisis, Yeltsin was backed by Western governments, and over 100 people were killed. In December 1993, a new constitution was approved by referendum, giving the president enormous powers. The 1990s were plagued by armed conflicts in the North Caucasus, both local ethnic skirmishes and separatist Islamist insurrections. From the time Chechen separatists declared independence in the early 1990s, an intermittent guerrilla war was fought between the rebel groups and Russian forces. Terrorist attacks against civilians carried out by Chechen separatists claimed the lives of thousands of Russian civilians. After the dissolution of the Soviet Union, Russia assumed responsibility for settling the latter's external debts. In 1992, most consumer price controls were eliminated, causing extreme inflation and significantly devaluing the ruble. High budget deficits, coupled with increasing capital flight and an inability to pay back debts, caused the 1998 Russian financial crisis, which resulted in a further GDP decline.
Putin era On 31 December 1999, President Yeltsin unexpectedly resigned, handing the post to the recently appointed prime minister and his chosen successor, Vladimir Putin. Yeltsin left office widely unpopular, with an approval rating as low as 2% by some estimates. Putin then won the 2000 presidential election and suppressed the Chechen insurgency. Putin went on to win a second presidential term in 2004. As a result of high oil prices, a rise in foreign investment, and prudent economic and fiscal policies, the Russian economy grew significantly, dramatically improving Russia's standard of living and increasing its influence in global politics. Putin's rule increased stability, while transforming Russia into an authoritarian state. On 2 March 2008, Dmitry Medvedev was elected president while Putin became prime minister, as the constitution barred Putin from serving a third consecutive presidential term. Putin returned to the presidency following the 2012 presidential elections, and Medvedev was appointed prime minister. This four-year joint leadership was dubbed "tandemocracy" by foreign media. In 2014, Putin deployed Russian troops to Ukraine to seize the Crimean parliament, leading to the takeover of Crimea. Russia's subsequent annexation of Crimea and the referendum that preceded it remain globally unrecognised, and led to sanctions by Western countries, to which the Russian government responded with counter-sanctions. In March 2018, Putin was elected for a fourth presidential term overall. In January 2020, substantial amendments to the constitution were proposed, taking effect in July following a national vote, and allowing Putin to run for two more six-year presidential terms after his current term ends. On 24 February 2022, Russia launched an invasion of Ukraine. At about 06:00 Moscow time, Putin announced a military operation against Ukraine; minutes later, Ukrainian cities were attacked by missiles. Geography Russia is a transcontinental country, stretching vastly over the easternmost part of Europe and the northernmost part of Asia. It spans the northernmost edge of Eurasia and has the world's fourth-longest coastline, of over 37,000 km. Russia lies between latitudes 41° and 82° N, and longitudes 19° E and 169° W, extending about 9,000 km east to west and up to 4,000 km north to south. Russia, by landmass, is larger than three continents, and has approximately the same surface area as Pluto. Russia has nine major mountain ranges, found along its southernmost regions, which share a significant portion of the Caucasus Mountains (containing Mount Elbrus, which at 5,642 m is the highest peak in Russia and Europe); in the Altai and Sayan Mountains of Siberia; and in the East Siberian Mountains and the Kamchatka Peninsula of the Russian Far East (containing Klyuchevskaya Sopka, which at about 4,750 m is the highest active volcano in Eurasia). The Ural Mountains, running north to south through the country's west, are rich in mineral resources and form the traditional boundary between Europe and Asia. Russia, as one of the world's only two countries bordering three oceans, has links with a great number of seas. Its major islands and archipelagos include Novaya Zemlya, Franz Josef Land, Severnaya Zemlya, the New Siberian Islands, Wrangel Island, the Kuril Islands, and Sakhalin. The Diomede Islands, administered by Russia and the United States, are just a few kilometres apart; and Kunashir Island of the Kuril Islands is merely some 20 km from Hokkaido, Japan.
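As a rough arithmetic check of the Pluto comparison above (a sketch using commonly cited figures that are assumptions here, not values given in this article: Pluto's mean radius of about 1,188 km and Russia's land area of about 17.1 million km²), treating Pluto as a sphere gives

A_Pluto = 4πr² ≈ 4π × (1188 km)² ≈ 1.77 × 10⁷ km²,

which is indeed of the same order as Russia's roughly 1.71 × 10⁷ km², so the two areas agree to within a few percent.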
Russia, home to over 100,000 rivers, has one of the world's largest surface water resources, with its lakes containing approximately one-quarter of the world's liquid fresh water. Lake Baikal, the largest and most prominent of Russia's fresh water bodies, is the world's deepest, purest, oldest and most capacious fresh water lake, containing over one-fifth of the world's fresh surface water. Ladoga and Onega in northwestern Russia are two of the largest lakes in Europe. Russia is second only to Brazil in total renewable water resources. The Volga in western Russia, widely regarded as Russia's national river, is the longest river in Europe, while the Ob, Yenisey, Lena, and Amur in Siberia are among the world's longest rivers. Climate The size of Russia and the remoteness of many of its areas from the sea result in the dominance of the humid continental climate throughout most of the country, except for the tundra and the extreme southwest. Mountain ranges in the south and east obstruct the flow of warm air masses from the Indian and Pacific oceans, while the European Plain spanning its west and north opens it to influence from the Atlantic and Arctic oceans. Most of northwest Russia and Siberia have a subarctic climate, with extremely severe winters in the inner regions of northeast Siberia (mostly Sakha, where the Northern Pole of Cold is located, with a record low temperature of −67.8 °C), and more moderate winters elsewhere. Russia's vast coastline along the Arctic Ocean and the Russian Arctic islands have a polar climate. The coastal part of Krasnodar Krai on the Black Sea, most notably Sochi, and some coastal and interior strips of the North Caucasus possess a humid subtropical climate with mild and wet winters. In many regions of East Siberia and the Russian Far East, winter is dry compared to summer, while other parts of the country experience more even precipitation across seasons. Winter precipitation in most parts of the country usually falls as snow. The westernmost parts of Kaliningrad Oblast and some parts in the south of Krasnodar Krai and the North Caucasus have an oceanic climate. The region along the Lower Volga and Caspian Sea coast, as well as some southernmost slivers of Siberia, possesses a semi-arid climate. Throughout much of the territory, there are only two distinct seasons, winter and summer, as spring and autumn are usually brief periods of change between extremely low and extremely high temperatures. The coldest month is January (February on the coastline); the warmest is usually July. Great ranges of temperature are typical. In winter, temperatures get colder both from south to north and from west to east. Summers can be quite hot, even in Siberia. Biodiversity Russia, owing to its gigantic size, has diverse ecosystems, including polar deserts, tundra, forest tundra, taiga, mixed and broadleaf forest, forest steppe, steppe, semi-desert, and subtropics. About half of Russia's territory is forested, and it has the world's largest forest reserves, known as the "Lungs of Europe", second only to the Amazon rainforest in the amount of carbon dioxide they absorb. Russian biodiversity includes 12,500 species of vascular plants, 2,200 species of bryophytes, about 3,000 species of lichens, 7,000–9,000 species of algae, and 20,000–25,000 species of fungi.
Russian fauna is composed of 320 species of mammals, over 732 species of birds, 75 species of reptiles, about 30 species of amphibians, 343 species of freshwater fish (with high endemism), approximately 1,500 species of saltwater fish, 9 species of cyclostomata, and approximately 100,000–150,000 invertebrates (with high endemism). Approximately 1,100 rare and endangered plant and animal species are included in the Russian Red Data Book. Russia's natural ecosystems are conserved in nearly 15,000 specially protected natural territories of various statuses, occupying more than 10% of the country's total area. They include 45 biosphere reserves, 64 national parks, and 101 nature reserves. Russia still has many ecosystems untouched by man, mainly in the northern taiga areas and the subarctic tundra of Siberia. Russia had a Forest Landscape Integrity Index mean score of 9.02 in 2019, ranking 10th out of 172 countries, and first among major nations globally. Government and politics Russia is an asymmetric federation and semi-presidential republic, wherein the president is the head of state and the prime minister is the head of government. It is fundamentally structured as a multi-party representative democracy, with the federal government composed of three branches: Legislative: The bicameral Federal Assembly of Russia, made up of the 450-member State Duma and the 170-member Federation Council, adopts federal law, declares war, approves treaties, and holds the power of the purse and the power of impeachment of the president. Executive: The president is the commander-in-chief of the Armed Forces, and appoints the Government of Russia (Cabinet) and other officers, who administer and enforce federal laws and policies. Judiciary: The Constitutional Court, Supreme Court and lower federal courts, whose judges are appointed by the Federation Council on the recommendation of the president, interpret laws and can overturn laws they deem unconstitutional. The president is elected by popular vote for a six-year term and may be elected no more than twice. Ministries of the government are composed of the premier and his deputies, ministers, and selected other individuals; all are appointed by the president on the recommendation of the prime minister (whereas the appointment of the latter requires the consent of the State Duma). United Russia is the dominant political party in Russia, and has been described as a "big tent" party. Political divisions According to the constitution, the Russian Federation is composed of 85 federal subjects. In 1993, when the new constitution was adopted, there were 89 federal subjects listed, but some were later merged. The federal subjects have equal representation—two delegates each—in the Federation Council, the upper house of the Federal Assembly. They do, however, differ in the degree of autonomy they enjoy. The federal districts of Russia were established by Putin in 2000 to facilitate central government control of the federal subjects. Originally seven, the federal districts now number eight, each headed by an envoy appointed by the president. Foreign relations Russia had the world's fifth-largest diplomatic network in 2019. It maintains diplomatic relations with 190 United Nations member states, two partially recognized states, and three United Nations observer states, and maintains 144 embassies. Russia is one of the five permanent members of the United Nations Security Council, and is a potential superpower.
It has historically been a great power, and remains a significant regional power. Russia is a member of the G20, the Council of Europe, the OSCE, and APEC. It also takes a leading role in organisations such as the CIS, the EAEU, the CSTO, the SCO, and BRICS. Russia maintains close relations with neighbouring Belarus, which is part of the Union State, a supranational confederation of Belarus with Russia. Serbia has been a historically close ally of Russia, as both countries share a strong mutual cultural, ethnic, and religious affinity. India is the largest customer of Russian military equipment, and the two countries have shared a strong strategic and diplomatic relationship since the Soviet era. Russia wields enormous influence across the geopolitically important South Caucasus and Central Asia, and the two regions have been described as Russia's "backyard". In the 21st century, relations between Russia and China have strengthened significantly, both diplomatically and economically, owing to shared political interests. Turkey and Russia share a complex strategic, energy, and defense relationship. Russia maintains cordial relations with Iran, a strategic and economic ally. Russia has also increasingly pushed to expand its influence across the Arctic, Asia-Pacific, Africa, the Middle East, and Latin America. In contrast, Russia's relations with the Western world, especially the United States, the European Union, and NATO, have gradually worsened. Military The Russian Armed Forces are divided into the Ground Forces, the Navy, and the Aerospace Forces; there are also two independent arms of service: the Strategic Missile Troops and the Airborne Troops. The military has around a million active-duty personnel, the world's fifth-largest, and about two million reserve personnel. All male citizens aged 18–27 are subject to conscription for a year of service in the Armed Forces. Russia boasts the world's second-most powerful military. It is among the five recognized nuclear-weapons states, with the world's largest stockpile of nuclear weapons; over half of the world's nuclear weapons are owned by Russia. Russia possesses the second-largest fleet of ballistic missile submarines, and is one of only three countries operating strategic bombers. It has the world's most powerful ground force, and the second-most powerful air force and navy fleet. Russia maintains the world's fourth-highest military expenditure, spending $61.7 billion in 2020. It is the world's second-largest arms exporter, and has a large and entirely indigenous defence industry, producing most of its own military equipment. Human rights and corruption Russia's human rights record has been increasingly criticised by leading democracy and human rights watchdogs. In particular, organisations such as Amnesty International and Human Rights Watch consider Russia insufficiently democratic, allowing few political rights and civil liberties to its citizens. Putin, in response, has argued that Western liberalism has become "obsolete" in Russia, while maintaining that the country is still democratic. Since 2004, Freedom House has ranked Russia as "not free" in its Freedom in the World survey. Since 2011, the Economist Intelligence Unit has ranked Russia as an "authoritarian regime" in its Democracy Index, ranking it 124th out of 167 countries for 2020. With regard to media freedom, Russia was ranked 150th out of 180 countries in Reporters Without Borders' Press Freedom Index for 2021.
The Russian government has been widely criticized by political dissidents and human rights activists for unfair elections, crackdowns on opposition political parties and protests, persecution of non-governmental organisations and independent journalists, and censorship of the media and the internet. In 2017, Jehovah's Witnesses were labelled "extremist" and outlawed in Russia, and they have faced persecution ever since. Russia has been described as a kleptocracy. It was the lowest-rated European country in Transparency International's Corruption Perceptions Index for 2020, ranking 136th out of 180 countries.
Central Powers of World War I; Bolshevist Russia surrendered most of its western territories, which hosted 34% of its population, 54% of its industries, 32% of its agricultural land, and roughly 90% of its coal mines. The Allied powers launched an unsuccessful military intervention in support of anti-communist forces. In the meantime, both the Bolsheviks and White movement carried out campaigns of deportations and executions against each other, known respectively as the Red Terror and White Terror. By the end of the violent civil war, Russia's economy and infrastructure were heavily damaged, and as many as 10 million perished during the war, mostly civilians. Millions became White émigrés, and the Russian famine of 1921–22 claimed up to five million victims. Soviet Union On 30 December 1922, Lenin and his aides formed the Soviet Union, by joining the Russian SFSR into a single state with the Byelorussian, Transcaucasian, and Ukrainian republics. Eventually internal border changes and annexations during World War II created a union of 15 republics; the largest in size and population being the Russian SFSR, which dominated the union for its entire history politically, culturally, and economically. Following Lenin's death in 1924, a troika was designated to take charge. Eventually Joseph Stalin, the General Secretary of the Communist Party, managed to suppress all opposition factions and consolidate power in his hands to become the country's dictator by the 1930s. Leon Trotsky, the main proponent of world revolution, was exiled from the Soviet Union in 1929, and Stalin's idea of Socialism in One Country became the official line. The continued internal struggle in the Bolshevik party culminated in the Great Purge. Under Stalin's leadership, the government launched a command economy, industrialisation of the largely rural country, and collectivisation of its agriculture. During this period of rapid economic and social change, millions of people were sent to penal labor camps, including many political convicts for their suspected or real opposition to Stalin's rule; and millions were deported and exiled to remote areas of the Soviet Union. The transitional disorganisation of the country's agriculture, combined with the harsh state policies and a drought, led to the Soviet famine of 1932–1933; which killed up to 8.7 million. The Soviet Union, ultimately, made the costly transformation from a largely agrarian economy to a major industrial powerhouse within a short span of time. World War II The Soviet Union entered World War II on 17 September 1939 with its invasion of Poland, in accordance with a secret protocol within the Molotov–Ribbentrop Pact with Nazi Germany. The Soviet Union later invaded Finland, and occupied and annexed the Baltic states, as well as parts of Romania. On 22 June 1941, Germany invaded the Soviet Union, opening the Eastern Front, the largest theater of World War II. Eventually, some 5 million Red Army troops were captured by the Nazis; the latter deliberately starved to death or otherwise killed 3.3 million Soviet POWs, and a vast number of civilians, as the "Hunger Plan" sought to fulfill Generalplan Ost. Although the Wehrmacht had considerable early success, their attack was halted in the Battle of Moscow. Subsequently, the Germans were dealt major defeats first at the Battle of Stalingrad in the winter of 1942–43, and then in the Battle of Kursk in the summer of 1943. 
Another German failure was the Siege of Leningrad, in which the city was fully blockaded on land between 1941 and 1944 by German and Finnish forces, and suffered starvation and more than a million deaths, but never surrendered. Soviet forces steamrolled through Eastern and Central Europe in 1944–45 and captured Berlin in May 1945. In August 1945, the Soviet Army invaded Manchuria and ousted the Japanese from Northeast Asia, contributing to the Allied victory over Japan. The 1941–45 period of World War II is known in Russia as the Great Patriotic War. The Soviet Union, along with the United States, the United Kingdom and China were considered the Big Four of Allied powers in World War II, and later became the Four Policemen, which was the foundation of the United Nations Security Council. During the war, Soviet civilian and military death were about 26–27 million, accounting for about half of all World War II casualties. The Soviet economy and infrastructure suffered massive devastation, which caused the Soviet famine of 1946–47. However, at the expense of a large sacrifice, the Soviet Union emerged as a global superpower. Cold War After World War II, parts of Eastern and Central Europe, including East Germany and eastern parts of Austria were occupied by Red Army according to the Potsdam Conference. Dependent communist governments were installed in the Eastern Bloc satellite states. After becoming the world's second nuclear power, the Soviet Union established the Warsaw Pact alliance, and entered into a struggle for global dominance, known as the Cold War, with the rivaling United States and NATO. After Stalin's death in 1953 and a short period of collective rule, the new leader Nikita Khrushchev denounced Stalin and launched the policy of de-Stalinization, releasing many political prisoners from the Gulag labor camps. The general easement of repressive policies became known later as the Khrushchev Thaw. At the same time, Cold War tensions reached its peak when the two rivals clashed over the deployment of the United States Jupiter missiles in Turkey and Soviet missiles in Cuba. In 1957, the Soviet Union launched the world's first artificial satellite, Sputnik 1, thus starting the Space Age. Russian cosmonaut Yuri Gagarin became the first human to orbit the Earth, aboard the Vostok 1 manned spacecraft on 12 April 1961. Following the ousting of Khrushchev in 1964, another period of collective rule ensued, until Leonid Brezhnev became the leader. The era of the 1970s and the early 1980s was later designated as the Era of Stagnation. The 1965 Kosygin reform aimed for partial decentralisation of the Soviet economy. In 1979, after a communist-led revolution in Afghanistan, Soviet forces invaded the country, ultimately starting the Soviet–Afghan War. In May 1988, the Soviets started to withdraw from Afghanistan, due to international opposition, persistent anti-Soviet guerrilla warfare, and a lack of support by Soviet citizens. From 1985 onwards, the last Soviet leader Mikhail Gorbachev, who sought to enact liberal reforms in the Soviet system, introduced the policies of glasnost (openness) and perestroika (restructuring) in an attempt to end the period of economic stagnation and to democratize the government. This, however, led to the rise of strong nationalist and separatist movements across the country. Prior to 1991, the Soviet economy was the world's second-largest, but during its final years, it went into a crisis. 
By 1991, economic and political turmoil began to boil over as the Baltic states chose to secede from the Soviet Union. On 17 March, a referendum was held, in which the vast majority of participating citizens voted in favour of changing the Soviet Union into a renewed federation. In June 1991, Boris Yeltsin became the first directly elected president in Russian history when he was elected president of the Russian SFSR. In August 1991, a coup d'état attempt by members of Gorbachev's government, directed against Gorbachev and aimed at preserving the Soviet Union, instead led to the end of the Communist Party of the Soviet Union. On 25 December 1991, following the dissolution of the Soviet Union, along with contemporary Russia, fourteen other post-Soviet states emerged. Post-Soviet Russia (1991–present) The economic and political collapse of the Soviet Union led Russia into a deep and prolonged depression. During and after the disintegration of the Soviet Union, wide-ranging reforms including privatisation and market and trade liberalisation were undertaken, including radical changes along the lines of "shock therapy". The privatisation largely shifted control of enterprises from state agencies to individuals with inside connections in the government, which led to the rise of the infamous Russian oligarchs. Many of the newly rich moved billions in cash and assets outside of the country in an enormous capital flight. The depression of the economy led to the collapse of social services—the birth rate plummeted while the death rate skyrocketed, and millions plunged into poverty; while extreme corruption, as well as criminal gangs and organised crime rose significantly. In late 1993, tensions between Yeltsin and the Russian parliament culminated in a constitutional crisis which ended violently through military force. During the crisis, Yeltsin was backed by Western governments, and over 100 people were killed. In December, a referendum was held and approved, which introduced a new constitution, giving the president enormous powers. The 1990s were plagued by armed conflicts in the North Caucasus, both local ethnic skirmishes and separatist Islamist insurrections. From the time Chechen separatists declared independence in the early 1990s, an intermittent guerrilla war was fought between the rebel groups and Russian forces. Terrorist attacks against civilians were carried out by Chechen separatists, claiming the lives of thousands of Russian civilians. After the dissolution of the Soviet Union, Russia assumed responsibility for settling the latter's external debts. In 1992, most consumer price controls were eliminated, causing extreme inflation and significantly devaluing the ruble. High budget deficits coupled with increasing capital flight and inability to pay back debts, caused the 1998 Russian financial crisis, which resulted in a further GDP decline. Putin era On 31 December 1999, President Yeltsin unexpectedly resigned, handing the post to the recently appointed prime minister and his chosen successor, Vladimir Putin. Yeltsin left office widely unpopular, with an approval rating as low as 2% by some estimates. Putin then won the 2000 presidential election, and suppressed the Chechen insurgency. Putin went on to win a second presidential term in 2004. As a result of high oil prices, a rise in foreign investment, and prudent economic and fiscal policies, the Russian economy grew significantly; dramatically improving Russia's standard of living, and increasing its influence in global politics. 
Putin's rule increased stability, while transforming Russia into an authoritarian state. On 2 March 2008, Dmitry Medvedev was elected president while Putin became prime minister, as the constitution barred Putin from serving a third consecutive presidential term. Putin returned to the presidency following the 2012 presidential elections, and Medvedev was appointed prime minister. This four-year joint leadership was dubbed a "tandemocracy" by foreign media. In 2014, Putin deployed Russian troops to Ukraine to seize the Crimean parliament, leading to the takeover of Crimea. Russia's subsequent annexation of Crimea and the referendum that preceded it remain globally unrecognised, and led to sanctions by Western countries, to which the Russian government responded with counter-sanctions. In March 2018, Putin was elected for a fourth presidential term overall. In January 2020, substantial amendments to the constitution were proposed, taking effect in July following a national vote, allowing Putin to run for two more six-year presidential terms after his current term ends. On 24 February 2022, Russia launched an invasion of Ukraine. At about 06:00 Moscow time, Putin announced a military operation against Ukraine; minutes later, missiles struck cities across the country. Geography Russia is a transcontinental country, stretching vastly over the easternmost part of Europe and the northernmost part of Asia. It spans the northernmost edge of Eurasia and has the world's fourth-longest coastline. Russia lies between latitudes 41° and 82° N, and longitudes 19° E and 169° W. By landmass, Russia is larger than three continents and has roughly the same surface area as Pluto. Russia has nine major mountain ranges, found mostly along its southernmost regions: these include a significant portion of the Caucasus Mountains (containing Mount Elbrus, which at 5,642 m is the highest peak in both Russia and Europe); the Altai and Sayan Mountains in Siberia; and the East Siberian Mountains and the Kamchatka Peninsula in the Russian Far East (containing Klyuchevskaya Sopka, the highest active volcano in Eurasia). The Ural Mountains, running north to south through the country's west, are rich in mineral resources, and form the traditional boundary between Europe and Asia. Russia, as one of the world's only two countries bordering three oceans, has links with a great number of seas. Its major islands and archipelagos include Novaya Zemlya, Franz Josef Land, Severnaya Zemlya, the New Siberian Islands, Wrangel Island, the Kuril Islands, and Sakhalin. The Diomede Islands, administered by Russia and the United States respectively, are separated by only a few kilometres, and Kunashir Island of the Kuril Islands lies a short distance from Hokkaido, Japan. Russia, home to over 100,000 rivers, has one of the world's largest surface water resources, with its lakes containing approximately one-quarter of the world's liquid fresh water. Lake Baikal, the largest and most prominent among Russia's fresh water bodies, is the world's deepest, purest, oldest and most capacious fresh water lake, containing over one-fifth of the world's fresh surface water. Ladoga and Onega in northwestern Russia are two of the largest lakes in Europe. Russia is second only to Brazil by total renewable water resources. 
The Volga in western Russia, widely regarded as Russia's national river, is the longest river in Europe, while the rivers of Ob, Yenisey, Lena, and Amur in Siberia are among the world's longest rivers. Climate The size of Russia and the remoteness of many of its areas from the sea result in the dominance of the humid continental climate throughout most of the country, except for the tundra and the extreme southwest. Mountain ranges in the south and east obstruct the flow of warm air masses from the Indian and Pacific oceans, while the European Plain spanning its west and north opens it to influence from the Atlantic and Arctic oceans. Most of northwest Russia and Siberia have a subarctic climate, with extremely severe winters in the inner regions of northeast Siberia (mostly Sakha, where the Northern Pole of Cold is located), and more moderate winters elsewhere. Russia's vast coastline along the Arctic Ocean and the Russian Arctic islands have a polar climate. The coastal part of Krasnodar Krai on the Black Sea, most notably Sochi, and some coastal and interior strips of the North Caucasus possess a humid subtropical climate with mild and wet winters. In many regions of East Siberia and the Russian Far East, winter is dry compared to summer, while other parts of the country experience more even precipitation across seasons. Winter precipitation in most parts of the country usually falls as snow. The westernmost parts of Kaliningrad Oblast and some parts in the south of Krasnodar Krai and the North Caucasus have an oceanic climate. The region along the Lower Volga and Caspian Sea coast, as well as some southernmost slivers of Siberia, possesses a semi-arid climate. Throughout much of the territory, there are only two distinct seasons, winter and summer, as spring and autumn are usually brief periods of change between extremely low and extremely high temperatures. The coldest month is January (February on the coastline); the warmest is usually July. Great ranges of temperature are typical. In winter, temperatures get colder both from south to north and from west to east. Summers can be quite hot, even in Siberia. Biodiversity Russia, owing to its gigantic size, has diverse ecosystems, including polar deserts, tundra, forest tundra, taiga, mixed and broadleaf forest, forest steppe, steppe, semi-desert, and subtropics. About half of Russia's territory is forested, and it has the world's largest forest reserves, known as the "Lungs of Europe"; it comes second only to the Amazon rainforest in the amount of carbon dioxide it absorbs. Russian biodiversity includes 12,500 species of vascular plants, 2,200 species of bryophytes, about 3,000 species of lichens, 7,000–9,000 species of algae, and 20,000–25,000 species of fungi. Russian fauna is composed of 320 species of mammals, over 732 species of birds, 75 species of reptiles, about 30 species of amphibians, 343 species of freshwater fish (high endemism), approximately 1,500 species of saltwater fishes, 9 species of cyclostomes, and approximately 100,000–150,000 invertebrates (high endemism). Approximately 1,100 rare and endangered plant and animal species are included in the Russian Red Data Book. Russia's intact natural ecosystems are conserved in nearly 15,000 specially protected natural territories of various statuses, occupying more than 10% of the country's total area. They include 45 biosphere reserves, 64 national parks, and 101 nature reserves. 
Russia still has many ecosystems untouched by man, mainly in the northern taiga areas and the subarctic tundra of Siberia. Russia had a Forest Landscape Integrity Index mean score of 9.02 in 2019, ranking 10th out of 172 countries, and first among major nations. Government and politics Russia is an asymmetric federation and semi-presidential republic, wherein the president is the head of state and the prime minister is the head of government. It is fundamentally structured as a multi-party representative democracy, with the federal government composed of three branches: Legislative: The bicameral Federal Assembly of Russia, made up of the 450-member State Duma and the 170-member Federation Council, adopts federal law, declares war, approves treaties, and has the power of the purse and the power of impeachment of the president. Executive: The president is the commander-in-chief of the Armed Forces, and appoints the Government of Russia (Cabinet) and other officers, who administer and enforce federal laws and policies. Judiciary: The Constitutional Court, Supreme Court and lower federal courts, whose judges are appointed by the Federation Council on the recommendation of the president, interpret laws and can overturn laws they deem unconstitutional. The president is elected by popular vote for a six-year term and may be elected no more than twice. Ministries of the government are composed of the premier and his deputies, ministers, and selected other individuals; all are appointed by the president on the recommendation of the prime minister (whereas the appointment of the latter requires the consent of the State Duma). United Russia is the dominant political party in Russia, and has been described as a "big tent" party. Political divisions According to the constitution, the Russian Federation is composed of 85 federal subjects. In 1993, when the new constitution was adopted, there were 89 federal subjects listed, but some were later merged. The federal subjects have equal representation (two delegates each) in the Federation Council, the upper house of the Federal Assembly. They do, however, differ in the degree of autonomy they enjoy. The federal districts of Russia were established by Putin in 2000 to facilitate central government control of the federal subjects. Originally seven, there are currently eight federal districts, each headed by an envoy appointed by the president. Foreign relations Russia had the world's fifth-largest diplomatic network in 2019. It maintains diplomatic relations with 190 United Nations member states, two partially recognized states, and three United Nations observer states, along with 144 embassies. Russia is one of the five permanent members of the United Nations Security Council and has been described as a potential superpower. It has historically been a great power, and remains a significant regional power. Russia is a member of the G20, the Council of Europe, the OSCE, and APEC. It also takes a leading role in organisations such as the CIS, the EAEU, the CSTO, the SCO, and BRICS. Russia maintains close relations with neighbouring Belarus, with which it forms the Union State, a supranational confederation. Serbia has been a historically close ally of Russia, as the two countries share a strong mutual cultural, ethnic, and religious affinity. India is the largest customer of Russian military equipment, and the two countries have shared a strong strategic and diplomatic relationship since the Soviet era. 
Russia wields enormous influence across the geopolitically important South Caucasus and Central Asia; the two regions have been described as Russia's "backyard". In the 21st century, relations between Russia and China have significantly strengthened, bilaterally and economically, owing to shared political interests. Turkey and Russia share a complex strategic, energy, and defense relationship. Russia maintains cordial relations with Iran, a strategic and economic ally. Russia has also increasingly pushed to expand its influence across the Arctic, Asia-Pacific, Africa, the Middle East, and Latin America. In contrast, Russia's relations with the Western world, especially the United States, the European Union, and NATO, have gradually worsened. Military The Russian Armed Forces are divided into the Ground Forces, the Navy, and the Aerospace Forces; there are also two independent arms of service: the Strategic Missile Troops and the Airborne Troops. The military has around a million active-duty personnel, the world's fifth-largest, and about 2–20 million reserve personnel. It is mandatory for all male citizens aged 18–27 to be drafted for a year of service in the Armed Forces. Russia is often ranked as having the world's second-most powerful military. It is among the five recognized nuclear-weapons states, with the world's largest stockpile of nuclear weapons; over half of the world's nuclear weapons are owned by Russia. Russia possesses the second-largest fleet of ballistic missile submarines, and is one of only three countries operating strategic bombers. It is often ranked as having the world's most powerful ground force, and the second-most powerful air force and navy. Russia maintains the world's fourth-highest military expenditure, spending $61.7 billion in 2020. It is the world's second-largest arms exporter, and has a large and entirely indigenous defence industry, producing most of its own military equipment. Human rights and corruption Russia's human rights record has been increasingly criticised by leading democracy and human rights watchdogs. In particular, organisations such as Amnesty International and Human Rights Watch consider Russia insufficiently democratic, allowing few political rights and civil liberties to its citizens. Putin, in response, has argued that Western liberalism has become "obsolete" in Russia, while maintaining that the country is still democratic. Since 2004, Freedom House has ranked Russia as "not free" in its Freedom in the World survey. Since 2011, the Economist Intelligence Unit has ranked Russia as an "authoritarian regime" in its Democracy Index, ranking it 124th out of 167 countries for 2020. With regard to media freedom, Russia was ranked 150th out of 180 countries in Reporters Without Borders' Press Freedom Index for 2021. The Russian government has been widely criticised by political dissidents and human rights activists for unfair elections, crackdowns on opposition political parties and protests, persecution of non-governmental organisations and independent journalists, and censorship of the media and internet. In 2017, Jehovah's Witnesses were labelled "extremist" and outlawed in Russia, and they have faced persecution ever since. Russia has been described as a kleptocracy. It was the lowest-rated European country in Transparency International's Corruption Perceptions Index for 2020, ranking 136th out of 180 countries. 
The phenomenon of corruption in Russia is strongly rooted in the historical model of public governance, and is perceived as a significant problem. It impacts various aspects of life, including the economy, business, public administration, law enforcement, healthcare, and education. Economy Russia has a mixed economy, with enormous natural resources, particularly oil and natural gas. It has the world's eleventh-largest economy by nominal GDP and the sixth-largest by PPP. In 2017, the large service sector contributed 62% of total GDP, the industrial sector 32%, and the small agricultural sector roughly 5%. Russia has a low unemployment rate of 4.3%. Russia's foreign exchange reserves are worth $638 billion, the world's fourth-largest. It has a labour force of roughly 70 million, the world's sixth-largest. Russia's large automotive industry ranks as the world's tenth-largest by production. Russia is the world's twentieth-largest exporter and importer. In 2016, the oil-and-gas sector accounted for 36% of federal budget revenues. In 2019, the Natural Resources and Environment Ministry estimated the value of natural resources at 60% of the country's GDP. Russia has one of the lowest external debts among major economies, although its inequality of household income and wealth is one of the highest among developed countries. Transport and energy Railway transport in Russia is mostly under the control of the state-run Russian Railways. The total length of commonly used railway track is the world's third-longest. Russia has the world's fifth-largest road network, with some 1.45 million km of roads, while its road density is among the world's lowest. Russia's inland waterways are the world's second-longest. Its pipeline network is the world's third-longest. Among Russia's 1,218 airports, the busiest is Sheremetyevo International Airport in Moscow, which is the second-busiest airport in Europe. Russia's largest port is the Port of Novorossiysk in Krasnodar Krai along the Black Sea. Russia has been widely described as an energy superpower. It has the world's largest proven gas reserves, the second-largest coal reserves, the eighth-largest oil reserves, and the largest oil shale reserves in Europe. Russia is also the world's leading natural gas exporter, the second-largest natural gas producer, and the second-largest oil producer and exporter. Russia is committed to the Paris Agreement, having formally joined the pact in 2019. It is the world's fourth-largest greenhouse gas emitter. Russia was the world's fourth-largest electricity producer and the ninth-largest renewable energy producer in 2019. It was also the first country to develop civilian nuclear power, and constructed the world's first nuclear power plant. Russia was the world's fourth-largest nuclear energy producer in 2019, and the fifth-largest hydroelectric producer in 2021. Agriculture and fishery Russia's agriculture sector contributes about 5% of the country's total GDP, although the sector employs about one-eighth of the total labour force. It has the world's third-largest cultivated area. However, due to the harshness of its environment, about 13.1% of its land is agricultural, and only 7.4% of its land is arable. The main product of Russian farming has always been grain, which occupies considerably more than half of the cropland. Russia is the world's largest exporter of wheat. 
Various analysts of climate change adaptation foresee large opportunities for Russian agriculture during the rest of the 21st century as arability increases in Siberia, which would lead to both internal and external migration to the region.
Under standard preferences, the amount that an individual is willing to pay for an item (such as a drinking mug) is assumed to equal the amount they are willing to be paid in order to part with it. In experiments, the latter price is sometimes significantly higher than the former (but see Plott and Zeiler 2005, Plott and Zeiler 2007 and Klass and Zeiler, 2013). Tversky and Kahneman do not characterize loss aversion as irrational. Behavioral economics includes a large number of other amendments to its picture of human behavior that go against neoclassical assumptions. Utility maximization Often preferences are described by their utility function or payoff function. This is an ordinal number that an individual assigns over the available actions, such as u(a_i) > u(a_j). The individual's preferences are then expressed as the relation between these ordinal assignments. For example, if an individual prefers the candidate Sara over Roger over abstaining, their preferences would have the relation u(Sara) > u(Roger) > u(abstain). A preference relation that, as above, satisfies completeness, transitivity, and, in addition, continuity can be equivalently represented by a utility function. Benefits The rational choice approach allows preferences to be represented as real-valued utility functions. Economic decision making then becomes a problem of maximizing this utility function, subject to constraints (e.g. a budget). This has many advantages. It provides a compact theory that makes empirical predictions with a relatively sparse model – just a description of the agent's objectives and constraints. Furthermore, optimization theory is a well-developed field of mathematics. These two factors make rational choice models tractable compared to other approaches to choice. Most importantly, this approach is strikingly general. It has been used to analyze not only personal and household choices about traditional economic matters like consumption and savings, but also choices about education, marriage, child-bearing, migration, crime and so on, as well as business decisions about output, investment, hiring, entry, exit, etc., with varying degrees of success. In the field of political science, rational choice theory has been used to help predict human decision making and to build models for the future; it is therefore useful in creating effective public policy, and enables the government to develop solutions quickly and efficiently. Despite the empirical shortcomings of rational choice theory, the flexibility and tractability of rational choice models (and the lack of equally powerful alternatives) lead to them still being widely used. Applications Rational choice theory has become increasingly employed in social sciences other than economics, such as sociology, evolutionary theory and political science, in recent decades. It has had far-reaching impacts on the study of political science, especially in fields like the study of interest groups, elections, behaviour in legislatures, coalitions, and bureaucracy. In these fields, the use of rational choice theory to explain broad social phenomena is the subject of controversy. Rational choice theory in politics The relationship between rational choice theory and politics takes many forms, whether in voter behaviour, the actions of world leaders, or the way important matters are dealt with. Rational choice theory predicts significant shifts in voter behaviour, most markedly in times of economic trouble. 
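To make the ordinal-utility idea above concrete, here is a minimal sketch in Python (not drawn from any source on rational choice theory; the candidate names follow the example above, and the utility numbers are arbitrary, since only their order matters):

utility = {"vote for Sara": 3, "vote for Roger": 2, "abstain": 1}  # ordinal: only the order matters

def best_action(utility):
    """Return the available action with the highest assigned utility."""
    return max(utility, key=utility.get)

print(best_action(utility))  # -> vote for Sara

Any other numbers with the same ordering (say 100, 7, 0) would represent the same preferences, which is exactly what makes the utility function ordinal rather than cardinal.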
This was assessed in detail by Anthony Downs, who concluded that voters act on expectations of higher income, as a person ‘votes for whatever party he believes would provide him with the highest utility income from government action’. This is a significant simplification of how the theory influences people's thoughts, but it makes up a core part of rational choice theory as a whole. More dramatically, voters often react radically in times of real economic strife, which can lead to an increase in extremism. Voters hold the government responsible and thus see a need for change. Some of the most infamous extremist parties came to power on the back of economic recessions, the most significant being the far-right Nazi Party in Germany, which used the hyperinflation of the time to gain power rapidly, promising a solution and a scapegoat for the blame. There is a pattern to this: as a comprehensive study by three political scientists concluded, a ‘turn to the right’ occurs after economic crises, and the effect is consistent with rational choice theory in that within ten years politics returns to a more usual state. Anthony Downs also suggested that voting involves a cost/benefit analysis that determines how a person will vote. He argues that someone will vote if B + D > C, where B is the benefit to the voter of their preferred side winning, D is the satisfaction derived from voting itself, and C is the cost of voting. It is from this that we can see why parties have moved their policy outlook towards the centre in order to maximise the number of voters supporting them. This is becoming more prevalent with every election, as each party tries to appeal to a broader range of voters. It is especially relevant given the decline in party memberships, which means that each party has far fewer guaranteed votes. In the last ten years there has been a 37% decrease in party memberships, a trend that began soon after the Second World War. This suggests that the electorate is leaning towards making informed, rational decisions rather than relying on a pattern of behaviours, and that voters are becoming more inclined to vote on recent performance in order to protect their interests and maximise their utility. This means that rational choice theory can be used in modelling and forecasting, owing to its derivation from economic thought to explain human behaviour. This is useful in politics, as the theory can translate human decision making and behaviour into data that can be interpreted, helping to predict behaviours and outcomes, and thereby enabling analysts to direct and shape political thinking and campaigns. As useful as empirical data is in building a clear picture of voting behaviour, it does not fully show all aspects of political decision making, whether on the part of the electorate or of policy makers. This brings in the idea of commitment as a key concept in the behaviour of political agents: it is not only self-interest, as the outcome of a personal cost-benefit analysis, but also shared interests that drive behaviour. The key idea of utility then needs to be defined not only as material utility but also as experienced utility; these expansions to classical rational choice theory could begin to remove its weakness in regard to the morals of the agents whose actions it aims to interpret. 
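A minimal sketch of the Downsian turnout rule described above, with hypothetical numbers (the rule itself, B + D > C, is from the text; all input values are illustrative):

def will_vote(b, d, c):
    """Downs-style calculus: vote iff B + D > C."""
    return b + d > c

# A voter whose expressive satisfaction (D) outweighs the cost of voting (C)
# turns out even when the instrumental benefit (B) is small:
print(will_vote(b=0.5, d=2.0, c=1.0))  # True

This also shows why D matters in the calculus: with D = 0 the same voter would abstain, since B alone does not cover the cost.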
A downfall of rational choice theory in a political sense is that the pursuit of individual goals can lead to collectively irrational outcomes. This problem of collective action can disincentivise people from voting. Even though a group of people may have common interests, they also have conflicting ones that cause misalignment within the group, and therefore an outcome that does not benefit the group as a whole, as people pursue their own individual interests. This problem is rooted in rational choice theory because of the theory's emphasis on rational agents performing their own cost-benefit analysis to maximise their self-interest. An example can be seen in some of the world's most troubling problems, such as the climate crisis. Nation states can be seen as rational when they pursue their own interest in economic growth; however, this economic growth often leads to pollution, as increasing a nation's factors of production takes a toll on the environment. It is irrational for a state to forgo this economic growth, because the cost of pollution does not fall entirely on it: one state's carbon emissions affect other states as well. The benefit of economic growth therefore outweighs the cost of pollution for each state, according to rational choice theory. However, if all countries make this rational calculation, it leads to a massive amount of pollution, making the outcome of each individually rational choice a collectively irrational one (a stylized payoff sketch of this appears below). Rational choice theory in international relations Rational choice theory has become one of the major approaches in the study of international relations. Its proponents typically assume that states are the key actors in world politics and that they seek goals such as power, security, or wealth. The motivation for power and security can be seen as taking precedence over the pursuit of maximising satisfaction, such that conflict between states occurs when the attainment of one state's international goals infringes upon another's. Rational choice theory can accordingly be applied to policy issues ranging from international trade and international cooperation to sanctions, arms competition, (nuclear) deterrence, and war. For example, some scholars have examined how states can make credible threats to deter other states from a (nuclear) attack. Others have explored under what conditions states wage war against each other. Yet others have investigated under what circumstances the threat and imposition of international economic sanctions tend to succeed and when they are likely to fail. Rational choice theory in social interactions Rational choice theory and social exchange theory involve looking at all social relations in the form of costs and rewards, both tangible and intangible. According to Abell, rational choice theory is about "understanding individual actors... as acting, or more likely interacting, in a manner such that they can be deemed to be doing the best they can for themselves, given their objectives, resources, circumstances, as they see them". Rational choice theory has been used to understand complex social phenomena, which derive from the actions and motivations of individuals. Individuals are often highly motivated by their wants and needs. Making calculative decisions is considered rational action. Individuals often make calculative decisions in social situations by weighing the pros and cons of an action taken towards another person. 
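The collective action problem described in the politics subsection above can be made concrete with a stylized two-country emissions game (all payoff numbers are hypothetical; this illustrates the logic of the argument, not a model from the text):

GROWTH_GAIN = 3      # hypothetical gain to a country from polluting growth
POLLUTION_COST = 2   # hypothetical cost each polluter imposes on every country

def payoff(my_choice, other_choice):
    """One country's payoff given both countries' choices ('grow' or 'abstain')."""
    gain = GROWTH_GAIN if my_choice == "grow" else 0
    polluters = [my_choice, other_choice].count("grow")
    return gain - POLLUTION_COST * polluters

for mine in ("grow", "abstain"):
    for theirs in ("grow", "abstain"):
        print(f"{mine:7s} vs {theirs:7s}: {payoff(mine, theirs)}")

Whatever the other country does, 'grow' pays more (1 vs 0 when the other abstains, -1 vs -2 when the other grows), so both grow; yet mutual growth yields -1 each, worse than the 0 each would get from mutual abstention – individually rational, collectively irrational.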
The decision to act on a rational choice is also dependent on the unforeseen benefits of the friendship. Homans argues that the actions of humans are motivated by punishment or rewards; this reinforcement through punishments or rewards determines the course of action a person takes in a social situation as well. Individuals are motivated by mutual reinforcement and are also fundamentally motivated by the approval of others. The approval of others has, along with money, become a generalized means of exchange in both social and economic exchanges. Economic exchange involves the exchange of goods or services; social exchange involves the exchange of approval and certain other valued behaviors. Rational choice theory in this instance heavily emphasizes the individual's interest as a starting point for making social decisions. Despite differing viewpoints about rational choice theory, they all come down to the individual as the basic unit of the theory. Even though sharing, cooperation and cultural norms emerge, they all stem from an individual's initial concern for the self. G. S. Becker offers an example of how rational choice can be applied to personal decisions, specifically the rationale behind decisions on whether to marry or divorce another individual. Given the self-serving drive from which the theory of rational choice is derived, Becker concludes that people marry if the expected utility from the marriage exceeds the utility they would gain from remaining single, and in the same way couples separate should the utility of being together turn out to be less than expected and provide less (economic) benefit than being separated would. Since the theory behind rational choice is that individuals will take the course of action that best serves their personal interests, it is assumed that they will display the same mentality when considering relationships, owing to deep-rooted, self-interested aspects of human nature. Social exchange theory and rational choice theory both come down to an individual's efforts to meet their own personal needs and interests through the choices they make. Even though some choices may be made sincerely for the welfare of others at that point in time, both theories point to the benefits received in return. These returns may be received immediately or in the future, tangible or not. Coleman discussed a number of theories to elaborate on the premises and promises of rational choice theory. One of the concepts he introduced was trust: "individuals place trust, in both judgement and performance of others, based on rational considerations of what is best, given the alternatives they confront". In a social situation, there has to be a level of trust among the individuals; Coleman noted that this level of trust is a consideration an individual takes into account before deciding on a rational action towards another individual. It affects the social situation as one navigates the risks and benefits of an action. By assessing the possible outcomes or alternatives to an action towards another individual, the person is making a calculated decision. In another situation, such as making a bet, you calculate the possible loss and how much can be won. If the expected winnings exceed the expected cost of losing, the rational decision is to place the bet. Therefore, the decision to place trust in another individual involves the same rational calculation that is involved in the decision to place a bet. 
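Coleman's bet analogy at the end of the passage above lends itself to a small expected-value sketch (the probabilities and stakes are hypothetical; the decision rule is the one just described):

def place_trust(p, gain, loss):
    """Trust (or bet) iff the chance-weighted gain exceeds the chance-weighted loss."""
    return p * gain > (1 - p) * loss

# Trusting a colleague with a task: 80% chance it pays off (+10), 20% chance it backfires (-30).
print(place_trust(p=0.8, gain=10, loss=30))  # True: 8 > 6

On the same rule, lowering p to 0.7 flips the decision (7 < 9), which is how the calculation "navigates the risks and benefits" described above.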
Although rational choice theory is applied in both economic and social settings, there are some similarities and differences between the two contexts. The concepts of reward and reinforcement are parallel to each other, while the concept of cost is parallel to the concept of punishment. However, the underlying assumptions differ. In a social setting, the focus is often on current or past reinforcements rather than future ones, even though there is no guarantee of immediate tangible or intangible returns from another individual. In economics, decisions are made with heavier emphasis on future rewards. Though the two perspectives differ in focus, they primarily reflect how individuals make different rational decisions when given immediate or long-term circumstances to consider in their decision making. Criticism This theory helps us to understand the choices an individual or a society makes. Even though some decisions are not entirely rational, rational choice theory may still help us to understand the motivations behind them. Nevertheless, there has been much critical discourse about rational choice theory: it has often been regarded as too individualistic and minimalistic, and as too heavily focused on rational decisions in social actions. It tends to treat any human action as rational, since individuals are assumed to be motivated solely by the pursuit of self-interest, and it does not consider the possibility of pure altruism in a social exchange between individuals. Criticism Both the assumptions and the behavioral predictions of rational choice theory have sparked criticism from various camps. The limits of rationality As mentioned above, some economists, such as Herbert Simon, have developed models of bounded rationality, which hope to be more psychologically plausible without completely abandoning the idea that reason underlies decision-making processes. Simon argues that factors such as imperfect information, uncertainty and time constraints all affect and limit our rationality, and therefore our decision-making skills. Furthermore, his concepts of 'satisficing' and 'optimizing' suggest that, because of these factors, we sometimes settle for a decision that is good enough rather than the best one. Other economists have developed more theories of human decision-making that allow for the roles of uncertainty, institutions, and determination of individual tastes by their socioeconomic environment (cf. Fernandez-Huerga, 2008). Philosophical critiques Martin Hollis and Edward J. Nell's 1975 book offers both a philosophical critique of neo-classical economics and an innovation in the field of economic methodology. Further, they outlined an alternative vision to neo-classicism based on a rationalist theory of knowledge. Within neo-classicism, the authors addressed consumer behaviour (in the form of indifference curves and simple versions of revealed preference theory) and marginalist producer behaviour in both product and factor markets. Both are based on rational optimizing behaviour. They consider imperfect as well as perfect markets, since neo-classical thinking embraces many market varieties and disposes of a whole system for their classification. However, the authors believe that the issues arising from basic maximizing models have extensive implications for econometric methodology (Hollis and Nell, 1975, p. 2). In particular, it is this class of models – rational behavior as maximizing behaviour – which provides support for specification and identification. 
And this, they argue, is where the flaw is to be found. Hollis and Nell (1975) argued that positivism (broadly conceived) has provided neo-classicism with important support, which they then show to be unfounded. They base their critique of neo-classicism not only on their critique of positivism but also on the alternative they propose, rationalism. Indeed, they argue that rationality is central to neo-classical economics – as rational choice – and that this conception of rationality is misused. Demands are made of it that it cannot fulfill. Ultimately, individuals do not always act rationally or conduct themselves in a utility-maximising manner. Duncan K. Foley (2003, p. 1) has also provided an important criticism of the concept of rationality and its role in economics. He argued that: "Rationality" has played a central role in shaping and establishing the hegemony of contemporary mainstream economics. As the specific claims of robust neoclassicism fade into the history of economic thought, an orientation toward situating explanations of economic phenomena in relation to rationality has increasingly become the touchstone by which mainstream economists identify themselves and recognize each other. This is not so much a question of adherence to any particular conception of rationality, but of taking rationality of individual behavior as the unquestioned starting point of economic analysis. Foley (2003, p. 9) went on to argue that: The concept of rationality, to use Hegelian language, represents the relations of modern capitalist society one-sidedly. The burden of rational-actor theory is the assertion that ‘naturally’ constituted individuals facing existential conflicts over scarce resources would rationally impose on themselves the institutional structures of modern capitalist society, or something approximating them. But this way of looking at matters systematically neglects the ways in which modern capitalist society and its social relations in fact constitute the ‘rational’, calculating individual. The well-known limitations of rational-actor theory, its static quality, its logical antinomies, its vulnerability to arguments of infinite regress, its failure to develop a progressive concrete research program, can all be traced to this starting-point. More recently, Edward J. Nell and Karim Errouaki (2011, Ch. 1) argued that: The DNA of neoclassical economics is defective. Neither the induction problem nor the problems of methodological individualism can be solved within the framework of neoclassical assumptions. The neoclassical approach is to call on rational economic man to solve both.
However, the predictions made by a specific version of the theory are testable. In recent years, the most prevalent version of rational choice theory, expected utility theory, has been challenged by the experimental results of behavioral economics. Economists are learning from other fields, such as psychology, and are enriching their theories of choice in order to get a more accurate view of human decision-making. For example, the behavioral economist and experimental psychologist Daniel Kahneman won the Nobel Memorial Prize in Economic Sciences in 2002 for his work in this field. Rational choice theory describes human action as the outcome of a two-stage process. First, the feasible region – the set of actions actually available to the agent – is determined by the financial, legal, social, physical or emotional restrictions the agent faces. Second, a choice is made from within the feasible region on the basis of the agent's preference order. The concept of rationality used in rational choice theory is different from the colloquial and most philosophical use of the word. In the colloquial sense, "rational" behaviour can refer to "sensible", "predictable", or "in a thoughtful, clear-headed manner." Rational choice theory uses a much narrower definition of rationality. At its most basic level, behavior is rational if it is goal-oriented, reflective (evaluative), and consistent (across time and different choice situations). This contrasts with behavior that is random, impulsive, conditioned, or adopted by (unevaluative) imitation. Early neoclassical economists writing about rational choice, including William Stanley Jevons, assumed that agents make consumption choices so as to maximize their happiness, or utility. Contemporary theory bases rational choice on a set of choice axioms that need to be satisfied, and typically does not specify where the goal (preferences, desires) comes from. It mandates just a consistent ranking of the alternatives. Individuals choose the best action according to their personal preferences and the constraints facing them. E.g., there is nothing irrational in preferring fish to meat the first time, but there is something irrational in preferring fish to meat in one instant and preferring meat to fish in another, without anything else having changed. Actions, assumptions, and individual preferences The basic premise of rational choice theory is that the decisions made by individual actors will collectively produce aggregate social behaviour. Thus, each individual makes a decision based on their own preferences and the constraints (or choice set) they face. Rational choice theory can be viewed in different contexts. At an individual level, the theory suggests that the agent will decide on the action (or outcome) they most prefer. If the actions (or outcomes) are evaluated in terms of costs and benefits, the choice with the maximum net benefit will be chosen by the rational individual. Rational behaviour is not solely driven by monetary gain, but can also be driven by emotional motives. The theory can be applied to general settings outside of those identified by costs and benefits. In general, rational decision making entails choosing among all available alternatives the alternative that the individual most prefers. The "alternatives" can be a set of actions ("what to do?") or a set of objects ("what to choose/buy"). 
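The two-stage account sketched above (constraints first, preference order second) can be written out directly; this is a minimal illustration with hypothetical alternatives, prices, and preferences, not a formalism from the text:

alternatives = ["concert", "restaurant", "cinema", "stay home"]
prices = {"concert": 60, "restaurant": 35, "cinema": 12, "stay home": 0}
budget = 40
preference_order = ["concert", "restaurant", "cinema", "stay home"]  # most preferred first

feasible = [a for a in alternatives if prices[a] <= budget]  # stage 1: feasible region
choice = min(feasible, key=preference_order.index)           # stage 2: best feasible option
print(choice)  # -> restaurant

The agent would most prefer the concert, but the budget constraint removes it from the feasible region, so the choice falls to the most preferred affordable alternative.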
In the case of actions, what the individual really cares about are the outcomes that result from each possible action; actions, in this case, are only an instrument for obtaining a particular outcome. Formal statement The available alternatives are often expressed as a set of objects, for example a set of j exhaustive and exclusive actions: A = {a_1, a_2, ..., a_j}. For example, if a person can choose to vote for either Roger or Sara or to abstain, their set of possible alternatives is: A = {Vote for Roger, Vote for Sara, Abstain}. The theory makes two technical assumptions about individuals' preferences over alternatives: Completeness – for any two alternatives a_i and a_j in the set, either a_i is preferred to a_j, or a_j is preferred to a_i, or the individual is indifferent between a_i and a_j. In other words, all pairs of alternatives can be compared with each other. Transitivity – if alternative a_1 is preferred to a_2, and alternative a_2 is preferred to a_3, then a_1 is preferred to a_3. Together these two assumptions imply that given a set of exhaustive and exclusive actions to choose from, an individual can rank the elements of this set in terms of his preferences in an internally consistent way (the ranking constitutes a complete ordering), and the set has at least one maximal element. The preference between two alternatives can be: Strict preference occurs when an individual prefers a_1 to a_2 and does not view them as equally preferred. Weak preference implies that an individual either strictly prefers a_1 over a_2 or is indifferent between them. Indifference occurs when an individual neither prefers a_1 to a_2, nor a_2 to a_1. Since (by completeness) the individual does not refuse a comparison, they must therefore be indifferent in this case. Research that took off in the 1980s sought to develop models that drop these assumptions and argue that such behaviour could still be rational (Anand, 1993). This work, often conducted by economic theorists and analytical philosophers, suggests ultimately that the assumptions or axioms above are not completely general and might at best be regarded as approximations. Additional assumptions Perfect information: The simple rational choice model above assumes that the individual has full or perfect information about the alternatives, i.e., the ranking between two alternatives involves no uncertainty. Choice under uncertainty: In a richer model that involves uncertainty about how choices (actions) lead to eventual outcomes, the individual effectively chooses between lotteries, where each lottery induces a different probability distribution over outcomes. The additional assumption of independence of irrelevant alternatives then leads to expected utility theory. Inter-temporal choice: when decisions affect choices (such as consumption) at different points in time, the standard method for evaluating alternatives across time involves discounting future payoffs. Limited cognitive ability: identifying and weighing each alternative against every other may take time, effort, and mental capacity. Recognising the costs these impose, and the cognitive limitations of individuals, gives rise to theories of bounded rationality. Alternative theories of human action include such components as Amos Tversky and Daniel Kahneman's prospect theory, which reflects the empirical finding that, contrary to standard preferences assumed under neoclassical economics, individuals attach extra value to items that they already own compared to similar items owned by others. 
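The completeness and transitivity assumptions in the formal statement above can be checked mechanically for any finite preference relation; here is a minimal sketch (the relation is the hypothetical Roger/Sara/abstain ranking used earlier):

from itertools import product

alternatives = ["vote for Sara", "vote for Roger", "abstain"]
rank = {"vote for Sara": 1, "vote for Roger": 2, "abstain": 3}

def prefers(a, b):
    """Weak preference: a is at least as good as b under the ranking above."""
    return rank[a] <= rank[b]

complete = all(prefers(a, b) or prefers(b, a)
               for a, b in product(alternatives, repeat=2))
transitive = all(not (prefers(a, b) and prefers(b, c)) or prefers(a, c)
                 for a, b, c in product(alternatives, repeat=3))
print(complete, transitive)  # True True

Any relation induced by a numeric ranking passes both checks by construction; the checks become informative for preference data gathered pairwise, where cycles (violating transitivity) or refused comparisons (violating completeness) can occur.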
All of these languages do have the "northwest" characteristics of lenition and loss of gemination. However: The Gallo-Italic languages have vowel-changing plurals rather than /s/ plurals. The Lombard language in north-central Italy and the Rhaeto-Romance languages have the "southeast" characteristic of [tʃ] instead of [(t)s] for palatalized /k/. The Venetian language in northeast Italy and some of the Rhaeto-Romance languages have the "southeast" characteristic of developing to . Lenition of post-vocalic /p t k/ is widespread as an allophonic phonetic realization in Italy below the La Spezia–Rimini line, including Corsica and most of Sardinia. On top of this, the ancient Mozarabic language in southern Spain, at the far end of the "northwest" group, had the "southeast" characteristics of lack of lenition and palatalization of /k/ to [tʃ]. Certain languages around the Pyrenees (e.g. some highland Aragonese dialects) also lack lenition, and northern French dialects such as Norman and Picard have palatalization of /k/ to [tʃ] (although this is possibly an independent, secondary development, since /k/ between vowels, i.e. when subject to lenition, developed to /dz/ rather than [dʒ], as would be expected for a primary development). The usual solution to these issues is to create various nested subgroups. Western Romance is split into the Gallo-Iberian languages, in which lenition happens and which include nearly all the Western Romance languages, and the Pyrenean-Mozarabic group, which includes the remaining languages without lenition (and is unlikely to be a valid clade; probably at least two clades, one for Mozarabic and one for Pyrenean). Gallo-Iberian is split in turn into the Iberian languages (e.g. Spanish and Portuguese), and the larger Gallo-Romance languages (stretching from eastern Spain to northeast Italy). Probably a more accurate description, however, would be to say that there was a focal point of innovation located in central France, from which a series of innovations spread out as areal changes. The La Spezia–Rimini line represents the farthest point to the southeast that these innovations reached, corresponding to the northern chain of the Apennine Mountains, which cuts straight across northern Italy and forms a major geographic barrier to further language spread. This would explain why some of the "northwest" features (almost all of which can be characterized as innovations) end at differing points in northern Italy, and why some of the languages in geographically remote parts of Spain (in the south, and high in the Pyrenees) lack some of these features. It also explains why the languages in France (especially standard French) seem to have innovated earlier and more extensively than other Western Romance languages. Many of the "southeast" features also apply to the Eastern Romance languages (particularly Romanian), despite the geographic discontinuity. Examples are lack of lenition, maintenance of intertonic vowels, use of vowel-changing plurals, and palatalization of /k/ to [tʃ]. This has led some researchers, following Walther von Wartburg, to postulate a basic two-way east–west division, with the "Eastern" languages including Romanian and central and southern Italian, although this view is troubled by the contrast of numerous Romanian phonological developments with those found in Italy below the La Spezia–Rimini line. 
Among these features, in Romanian geminates reduced historically to single units — which may be an independent development or perhaps due to Slavic influence — and /kt/ developed into /pt/, whereas in central and southern Italy geminates are preserved and /kt/ underwent assimilation to /tt/. Despite being the first Romance language to diverge from spoken Latin, Sardinian does not fit at all into this sort of division. It is clear that Sardinian became linguistically independent from the remainder of the Romance languages at an extremely early date, possibly already by the first century BC. Sardinian contains a large number of archaic features, including total lack of palatalization of /k/ and /ɡ/ and a large amount of vocabulary preserved nowhere else, including some items already archaic by the time of Classical Latin (first century BC). Sardinian has plurals in /s/ but post-vocalic lenition of voiceless consonants is normally limited to the status of an allophonic rule (e.g. [k]ane 'dog' but su [ɡ]ane or su [ɣ]ane 'the dog'), and there are a few innovations unseen elsewhere, such as a change of /au/ to /a/. Use of su < ipsum as an article is a retained archaic feature that also exists in the Catalan of the Balearic Islands and that used to be more widespread in Occitano-Romance, and is known as (literally the "salted article"), while Sardinian shares develarisation of earlier /kw/ and /ɡw/ with Romanian: Sard. abba, Rum. apă 'water'; Sard. limba, Rom. limbă 'language' (cf. Italian acqua, lingua). Dialects of southern Italy, Sardinia and Corsica The Sardinian-type vowel system is also found in a small region belonging to the (also known as Lausberg zone; compare ) of southern Italy, in southern Basilicata, and there is evidence that the Romanian-type "compromise" vowel system was once characteristic of most of southern Italy, although it is now limited to a small area in western Basilicata centered on the Castelmezzano dialect, the area being known as , the German word for 'outpost'. The Sicilian vowel system, now generally thought to be a development based on the Italo-Western system, is also represented in southern Italy, in southern Cilento, Calabria and the southern tip of Apulia, and may have been more widespread in the past. The greatest variety of vowel systems outside of southern Italy is found in Corsica, where the Italo-Western type is represented in most of the north and center and the Sardinian type in the south, as well as a system resembling the Sicilian vowel system (and even more closely the Carovignese system) in the Cap Corse region; finally, in between the Italo-Western and Sardinian system is found, in the Taravo region, a unique vowel system that cannot be derived from any other system, which has reflexes like Sardinian for the most part, but the short high vowels of Latin are uniquely reflected as mid-low vowels. Gallo-Romance languages Gallo-Romance can be divided into the following subgroups: The Langues d'oïl, including French and closely related languages. The Franco-Provençal language (also known as Arpitan) of southeastern France, western Switzerland, and Aosta Valley region of northwestern Italy. The following groups are also sometimes considered part of Gallo-Romance: The Occitano-Romance languages of southern France, namely Occitan and Gascon. The Catalan language of eastern Iberia is also sometimes included in Gallo-Romance. 
This is however disputed by some linguists who prefer to group it with Iberian Romance, since although Old Catalan is close to Old Occitan, it later adjusted its lexicon to some degree to align with Spanish. In general however, modern Catalan, especially grammatically, remains closer to modern Occitan than to either Spanish or Portuguese. The Gallo-Italian languages of northern Italy, including Piedmontese, Ligurian, Lombard, Emilian and Romagnol. Ligurian retains the final -o, being the exception in Gallo-Romance. The Rhaeto-Romance languages, including Romansh, and Friulian, and Ladin dialects. The Gallo-Romance languages are generally considered the most innovative (least conservative) among the Romance languages. Characteristic Gallo-Romance features generally developed earliest and appear in their most extreme manifestation in the Langue d'oïl, gradually spreading out along riverways and transalpine roads. In some ways, however, the Gallo-Romance languages are conservative. The older stages of many of the languages preserved a two-case system consisting of nominative and oblique, fully marked on nouns, adjectives and determiners, inherited almost directly from the Latin nominative and accusative and preserving a number of different declensional classes and irregular forms. The languages closest to the oïl epicenter preserve the case system the best, while languages at the periphery lose it early. Notable characteristics of the Gallo-Romance languages are: Early loss of unstressed final vowels other than — a defining characteristic of the group. Further reductions of final vowels in Langue d'oïl and many Gallo-Italic languages, with the feminine and prop vowel merging into , which is often subsequently dropped. Early, heavy reduction of unstressed vowels in the interior of a word (another defining characteristic). Loss of final vowels phonemicized the long vowels that used to be automatic concomitants of stressed open syllables. These phonemic long vowels are maintained directly in many Northern Italian dialects; elsewhere, phonemic length was lost, but in the meantime many of the long vowels diphthongized, resulting in a maintenance of the original distinction. The langue d'oïl branch is again at the forefront of innovation, with no less than five of the seven long vowels diphthongizing (only high vowels were spared). Front rounded vowels are present in all branches of Gallo-Romance. usually fronts to , and secondary mid front rounded vowels often develop from long or . Extreme lenition (i.e. multiple rounds of lenition) occurs in many languages especially in Langue d'oïl and many Gallo-Italian languages. The Langue d'oïl, Swiss Rhaeto-Romance languages and many of the northern dialects of Occitan have a secondary palatalization of and before , producing different results from the primary Romance palatalization: e.g. centum "hundred" > cent , cantum "song" > chant . Other than the Occitano-Romance languages, most Gallo-Romance languages are subject-obligatory (whereas all the rest of the Romance languages are pro-drop languages). This is a late development triggered by progressive phonetic erosion: Old French was still a null-subject language, and this only changed upon loss of secondarily final consonants in Middle French. Pidgins, creoles, and mixed languages Some Romance languages have developed varieties which seem dramatically restructured as to their grammars or to be mixtures with other languages. 
There are several dozens of creoles of French, Spanish, and Portuguese origin, some of them spoken as national languages in former European colonies. Creoles of French: Antillean (French Antilles, Saint Lucia, Dominica) Haitian (one of Haiti's two official languages) Louisiana (US) Mauritian (lingua franca of Mauritius) Réunion (native language of Réunion) Seychellois (Seychelles' official language) Creoles of Spanish: Chavacano (in part of Philippines) Palenquero (in part of Colombia) Creoles of Portuguese: Angolar (regional language in São Tomé and Principe) Cape Verdean (Cape Verde's national language; includes several distinct varieties) Forro (regional language in São Tomé and Príncipe) Kristang (Malaysia) Macanese (Macau) Papiamento (Dutch Antilles official language) Guinea-Bissau Creole (Guinea-Bissau's national language) Auxiliary and constructed languages Latin and the Romance languages have also served as the inspiration and basis of numerous auxiliary and constructed languages, so-called "Neo-Romance languages". The concept was first developed in 1903 by Italian mathematician Giuseppe Peano, under the title Latino sine flexione. He wanted to create a naturalistic international language, as opposed to an autonomous constructed language like Esperanto or Volapük which were designed for maximal simplicity of lexicon and derivation of words. Peano used Latin as the base of his language, because at the time of his flourishing it was the de facto international language of scientific communication. Other languages developed include Idiom Neutral (1902), Interlingue-Occidental (1922), Interlingua (1951) and Lingua Franca Nova (1998). The most famous and successful of these is Interlingua. Each of these languages has attempted to varying degrees to achieve a pseudo-Latin vocabulary as common as possible to living Romance languages. Some languages have been constructed specifically for communication among speakers of Romance languages, the Pan-Romance languages. There are also languages created for artistic purposes only, such as Talossan. Because Latin is a very well attested ancient language, some amateur linguists have even constructed Romance languages that mirror real languages that developed from other ancestral languages. These include Brithenig (which mirrors Welsh), Breathanach (mirrors Irish), Wenedyk (mirrors Polish), Þrjótrunn (mirrors Icelandic), and Helvetian (mirrors German). Modern status The Romance language most widely spoken natively today is Spanish, followed by Portuguese, French, Italian and Romanian, which together cover a vast territory in Europe and beyond, and work as official and national languages in dozens of countries. French, Italian, Portuguese, Spanish, and Romanian are also official languages of the European Union. Spanish, Portuguese, French, Italian, Romanian, and Catalan were the official languages of the defunct Latin Union; and French and Spanish are two of the six official languages of the United Nations. Outside Europe, French, Portuguese and Spanish are spoken and enjoy official status in various countries that emerged from the respective colonial empires. Spanish is an official language in Spain and in nine countries of South America, home to about half that continent's population; in six countries of Central America (all except Belize); and in Mexico. In the Caribbean, it is official in Cuba, the Dominican Republic, and Puerto Rico. 
In all these countries, Latin American Spanish is the vernacular language of the majority of the population, giving Spanish the most native speakers of any Romance language. In Africa it is an official language of Equatorial Guinea. Portuguese, in its original homeland, Portugal, is spoken by virtually the entire population of 10 million. As the official language of Brazil, it is spoken by more than 200 million people in that country, as well as by neighboring residents of eastern Paraguay and northern Uruguay, accounting for a little more than half the population of South America, thus making Portuguese the most spoken official Romance language in a single country. It is the official language of six African countries (Angola, Cape Verde, Guinea-Bissau, Mozambique, Equatorial Guinea, and São Tomé and Príncipe), and is spoken as a first language by perhaps 30 million residents of that continent. In Asia, Portuguese is co-official with other languages in East Timor and Macau, while most Portuguese-speakers in Asia—some 400,000—are in Japan due to return immigration of Japanese Brazilians. In North America 1,000,000 people speak Portuguese as their home language. In Oceania, Portuguese is the second most spoken Romance language, after French, due mainly to the number of speakers in East Timor. Its closest relative, Galician, has official status in the autonomous community of Galicia in Spain, together with Spanish. Outside Europe, French is spoken natively most in the Canadian province of Quebec, and in parts of New Brunswick and Ontario. Canada is officially bilingual, with French and English being the official languages. In parts of the Caribbean, such as Haiti, French has official status, but most people speak creoles such as Haitian Creole as their native language. French also has official status in much of Africa, with relatively few native speakers but larger numbers of second language speakers. In France's remaining overseas possessions, native use of French is increasing. Although Italy also had some colonial possessions before World War II, its language did not remain official after the end of the colonial domination. As a result, Italian outside of Italy and Switzerland is now spoken only as a minority language by immigrant communities in North and South America and Australia. In some former Italian colonies in Africa—namely Libya, Eritrea and Somalia—it is spoken by a few educated people in commerce and government. Romania did not establish a colonial empire, and the native range of Romanian includes not only the former Soviet republic of Moldova, where it is the dominant language and spoken by a majority of the population, but neighboring areas in Serbia (Vojvodina and the Bor District), Bulgaria, Hungary, and Ukraine (Bukovina, Budjak) and in some villages between the Dniester and Bug rivers. As with Italian, Romanian is spoken outside of its ethnic range by immigrant communities, such as other European countries (notably Italy, Spain, and Portugal, where in all three of which Romanian-speakers form about two percent of the population), as well as to Israel by Romanian Jews, where it is the native language of five percent of the population, and is spoken by many more as a secondary language. The Aromanian language is spoken today by Aromanians in Bulgaria, Macedonia, Albania, Kosovo, and Greece. The total of 880 million native speakers of Romance languages (ca. 
2020) are divided as follows: Spanish 54% (475 million, plus 75 million L2 for 550 million Hispanophones) Portuguese 26% (230 million, plus 30 million L2 for 260 million Lusophones) French 9% (80 million, plus 195 million L2 for 275 million Francophones) Italian 7% (65 million, plus 3 million L2) Romanian 3% (24 million) Catalan 0.5% (4 million, plus 5 million L2) Others 3% (26 million, nearly all bilingual in one of the national languages) Catalan is the official language of Andorra. In Spain, it is co-official with Spanish in Catalonia, the Valencian Community (under the name Valencian, and the Balearic Islands, and it is recognized, but not official, in an area of Aragon known as La Franja. In addition, it is spoken by many residents of Alghero, on the island of Sardinia, and it is co-official in that city. Galician, with more than a million native speakers, is official together with Spanish in Galicia, and has legal recognition in neighbouring territories in Castilla y León. A few other languages have official recognition on a regional or otherwise limited level; for instance, Asturian and Aragonese in Spain; Mirandese in Portugal; Friulian, Sardinian and Franco-Provençal in Italy; and Romansh in Switzerland. The remaining Romance languages survive mostly as spoken languages for informal contact. National governments have historically viewed linguistic diversity as an economic, administrative or military liability, as well as a potential source of separatist movements; therefore, they have generally fought to eliminate it, by extensively promoting the use of the official language, restricting the use of the other languages in the media, recognizing them as mere "dialects", or even persecuting them. As a result, all of these languages are considered endangered to varying degrees according to the UNESCO Red Book of Endangered Languages, ranging from "vulnerable" (e.g. Sicilian and Venetian) to "severely endangered" (Franco-Provençal, most of the Occitan varieties). Since the late twentieth and early twenty-first centuries, increased sensitivity to the rights of minorities has allowed some of these languages to start recovering their prestige and lost rights. Yet it is unclear whether these political changes will be enough to reverse the decline of minority Romance languages. History Romance languages are the continuation of Vulgar Latin, the popular and colloquial sociolect of Latin spoken by soldiers, settlers, and merchants of the Roman Empire, as distinguished from the classical form of the language spoken by the Roman upper classes, the form in which the language was generally written. Between 350 BC and 150 AD, the expansion of the Empire, together with its administrative and educational policies, made Latin the dominant native language in continental Western Europe. Latin also exerted a strong influence in southeastern Britain, the Roman province of Africa, western Germany, Pannonia and the whole Balkans. During the Empire's decline, and after its fragmentation and the collapse of its Western half in the fifth and sixth centuries, the spoken varieties of Latin became more isolated from each other, with the western dialects coming under heavy Germanic influence (the Goths and Franks in particular) and the eastern dialects coming under Slavic influence. The dialects diverged from classical Latin at an accelerated rate and eventually evolved into a continuum of recognizably different typologies. 
The colonial empires established by Portugal, Spain, and France from the fifteenth century onward spread their languages to the other continents to such an extent that about two-thirds of all Romance language speakers today live outside Europe. Despite other influences (e.g. substratum from pre-Roman languages, especially Continental Celtic languages; and superstratum from later Germanic or Slavic invasions), the phonology, morphology, and lexicon of all Romance languages consist mainly of evolved forms of Vulgar Latin. However, some notable differences occur between today's Romance languages and their Roman ancestor. With only one or two exceptions, Romance languages have lost the declension system of Latin and, as a result, have SVO sentence structure and make extensive use of prepositions. Vulgar Latin Documentary evidence about Vulgar Latin is limited for the purposes of comprehensive research, and the literature is often hard to interpret or generalize. Many of its speakers were soldiers, slaves, displaced peoples, and forced resettlers, more likely to be natives of conquered lands than natives of Rome. In Western Europe, Latin gradually replaced Celtic and other Italic languages, which were related to it by a shared Indo-European origin. Commonalities in syntax and vocabulary facilitated the adoption of Latin. Vulgar Latin is believed to have already had most of the features shared by all Romance languages, which distinguish them from Classical Latin, such as the almost complete loss of the Latin grammatical case system and its replacement by prepositions; the loss of the neuter grammatical gender and comparative inflections; replacement of some verb paradigms by innovations (e.g. the synthetic future gave way to an originally analytic strategy now typically formed by infinitive + evolved present indicative forms of 'have'); the use of articles; and the initial stages of the palatalization of the plosives /k/, /ɡ/, and /t/. To some scholars, this suggests that the form of Vulgar Latin that evolved into the Romance languages was already current during the time of the Roman Empire (from the end of the first century BC), and was spoken alongside the written Classical Latin, which was reserved for official and formal occasions. Other scholars argue that the distinctions are more rightly viewed as indicative of sociolinguistic and register differences normally found within any language. Both were mutually intelligible as one and the same language, which remained true until roughly the second half of the 7th century. However, within two hundred years Latin became a dead language since "the Romanized people of Europe could no longer understand texts that were read aloud or recited to them," i.e. Latin had ceased to be a first language and had become a foreign language that had to be learned, if the label Latin is constrained to refer to a state of the language frozen in past time and restricted to linguistic features for the most part typical of higher registers. With the rise of the Roman Empire, Vulgar Latin spread first throughout Italy and then through southern, western, central, and southeastern Europe, and northern Africa, along with parts of western Asia. Fall of the Western Roman Empire During the political decline of the Western Roman Empire in the fifth century, there were large-scale migrations into the empire, and the Latin-speaking world was fragmented into several independent states. Central Europe and the Balkans were occupied by Germanic and Slavic tribes, as well as by Huns.
These incursions isolated the Vlachs from the rest of Romance-speaking Europe. British and African Romance—the forms of Vulgar Latin used in Britain and the Roman province of Africa, where it had been spoken by much of the urban population—disappeared in the Middle Ages (as did Pannonian Romance in what is now Hungary, and Moselle Romance in Germany). But the Germanic tribes that had penetrated Roman Italy, Gaul, and Hispania eventually adopted Latin/Romance and the remnants of the culture of ancient Rome alongside existing inhabitants of those regions, and so Latin remained the dominant language there. In part due to regional dialects of Latin and local environments, several languages evolved from it. Fall of the Eastern Roman Empire Meanwhile, large-scale migrations into the Eastern Roman Empire started with the Goths and continued with Huns, Avars, Bulgars, Slavs, Pechenegs, Hungarians and Cumans. The invasions of Slavs were the most thoroughgoing, and they partially reduced the Romanic element in the Balkans. The invasion of the Turks and conquest of Constantinople in 1453 marked
Lexical and grammatical similarities among the Romance languages, and between Latin and each of them, are apparent from examples having the same meaning across the various Romance lects; the standard comparison sentence is English "She always closes the window before she dines/before dining", rendered in Latin, Vulgar Latin, and dozens of Romance varieties (Apulian, Aragonese, Aromanian, Asturian, Cantabrian, Catalan, Northern and Southern Corsican, the Reggiano, Bolognese and Piacentine varieties of Emilian, Extremaduran, Franco-Provençal, French, Friulian, Galician, Gallurese, Italian, Judaeo-Spanish, Ladin in its Badiot, Centro Cadore, Auronzo di Cadore and Gherdëina varieties, Leonese, Ligurian, eastern and western Lombard, Magoua, Mirandese, Neapolitan, Norman, Occitan, Picard, Piedmontese, Portuguese, Romagnol, Romanian, Romansh, Campidanese and Logudorese Sardinian, Sassarese, Sicilian, Spanish, Tuscan, Umbrian, Venetian and Walloon), as well as in Romance-based creoles and pidgins (Haitian Creole, Mauritian Creole, Seychellois Creole, Papiamento, Kriolu, Chavacano and Palenquero); in Piacentine Emilian, for example, the sentence is Ad sira lé la sèra seimpar la finéstra prima da seina. Some of the divergence comes from semantic change, where the same root words have developed different meanings. For example, the Portuguese word is descended from Latin "window" (and is thus cognate to French , Italian , Romanian and so on), but now means "skylight" and "slit". Cognates may exist but have become rare, such as in Spanish, or dropped out of use entirely. The Spanish and Portuguese terms meaning "to throw through a window" and meaning "replete with windows" also have the same root, but are later borrowings from Latin. Likewise, Portuguese also has the word , a cognate of Italian and Spanish , but uses it in the sense of "to have a late supper" in most varieties, while the preferred word for "to dine" is (related to archaic Spanish "to eat") because of semantic changes in the 19th century. Galician has both (from medieval fẽestra, the ancestor of standard Portuguese ) and the less frequently used and . As an alternative to (originally the genitive form), Italian has the pronoun , a cognate of the other words for "she", but it is hardly ever used in speaking. Spanish, Asturian, and Leonese and Mirandese and Sardinian come from Latin "wind" (cf. English window, etymologically 'wind eye'), and Portuguese , Galician , Mirandese from Latin * "small opening", a derivative of "door". Sardinian (alternative for /) comes from Old Italian and is similar to other Romance languages such as French (from Italian ), Portuguese , Romanian , Spanish , Catalan and Corsican (alternative for ). Classification and related languages The classification of the Romance languages is inherently difficult, because most of the linguistic area is a dialect continuum, and in some cases political biases can come into play. Along with Latin (which is not included among the Romance languages) and a few extinct languages of ancient Italy, they make up the Italic branch of the Indo-European family. Proposed divisions There are various schemes used to subdivide the Romance languages.
Three of the most common schemes are as follows: Italo-Western vs. Eastern vs. Southern. This is the scheme followed by Ethnologue, and is based primarily on the outcome of the ten monophthong vowels in Classical Latin. This is discussed more below. West vs. East. This scheme divides the various languages along the La Spezia–Rimini Line, which runs across north-central Italy just to the north of the city of Florence (whose speech forms the basis of standard Italian). In this scheme, "East" includes the languages of central and southern Italy, and the Balkan Romance (or "Eastern Romance") languages in Romania, Greece, and elsewhere in the Balkans; "West" includes the languages of Portugal, Spain, France, northern Italy and Switzerland. Sardinian does not easily fit in this scheme. "Conservative" vs. "innovatory". This is a non-genetic division whose precise boundaries are subject to debate. Generally, the Gallo-Romance languages (discussed further below) form the core "innovatory" languages, with standard French generally considered the most innovatory of all, while the languages near the periphery (which include Spanish, Portuguese, Italian and Romanian) are "conservative". Sardinian is generally acknowledged as the most conservative Romance language, and was also the first language to split off genetically from the rest, possibly as early as the first century BC. Dante famously denigrated the Sardinians for the conservativeness of their speech, remarking that they imitate Latin "like monkeys imitate men". Italo-Western vs. Eastern vs. Sardinian The main subfamilies that have been proposed by Ethnologue within the various classification schemes for Romance languages are: Italo-Western, the largest group, which includes languages such as Catalan, Portuguese, Italian, Spanish, and French. Eastern Romance, which includes the Romance languages of Eastern Europe, such as Romanian. Southern Romance, which includes a few languages with particularly conservative features, such as Sardinian and, according to some authors, Corsican to a more limited extent. This family is thought to have included the now-vanished Romance languages of North Africa (or at least, they appear to have evolved some phonological features and their vowels in the same way). This three-way division is based primarily on the outcome of Vulgar Latin (Proto-Romance) vowels. Italo-Western is in turn split along the so-called La Spezia–Rimini Line in northern Italy, which divides the central and southern Italian languages from the so-called Western Romance languages to the north and west. The primary characteristics dividing the two are: Phonemic lenition of intervocalic stops, which happens to the northwest but not to the southeast. Degemination of geminate stops (producing new intervocalic single voiceless stops, after the old ones were lenited), which again happens to the northwest but not to the southeast. Deletion of intertonic vowels (between the stressed syllable and either the first or last syllable), again in the northwest but not the southeast. Use of plurals in /s/ in the northwest vs. plurals using vowel change in the southeast. Development of palatalized /k/ before /e,i/ to in the northwest vs. in the southeast. Development of , which develops to > (sometimes progressing further to ) in the northwest but in the southeast. The reality is somewhat more complex.
All of the "southeast" characteristics apply to all languages southeast of the line, and all of the "northwest" characteristics apply to all languages in France and (most of) Spain. However, the Gallo-Italic languages are somewhere in between. All of these languages do have the "northwest" characteristics of lenition and loss of gemination. However: The Gallo‒Italic languages have vowel-changing plurals rather than /s/ plurals. The Lombard language in north-central Italy and the Rhaeto-Romance languages have the "southeast" characteristic of instead of for palatalized /k/. The Venetian language in northeast Italy and some of the Rhaeto-Romance languages have the "southeast" characteristic of developing to . Lenition of post-vocalic /p t k/ is widespread as an allophonic phonetic realization in Italy below the La Spezia-Rimini line, including Corsica and most of Sardinia. On top of this, the ancient Mozarabic language in southern Spain, at the far end of the "northwest" group, had the "southeast" characteristics of lack of lenition and palatalization of /k/ to . Certain languages around the Pyrenees (e.g. some highland Aragonese dialects) also lack lenition, and northern French dialects such as Norman and Picard have palatalization of /k/ to (although this is possibly an independent, secondary development, since /k/ between vowels, i.e. when subject to lenition, developed to /dz/ rather than , as would be expected for a primary development). The usual solution to these issues is to create various nested subgroups. Western Romance is split into the Gallo-Iberian languages, in which lenition happens and which include nearly all the Western Romance languages, and the Pyrenean-Mozarabic group, which includes the remaining languages without lenition (and is unlikely to be a valid clade; probably at least two clades, one for Mozarabic and one for Pyrenean). Gallo-Iberian is split in turn into the Iberian languages (e.g. Spanish and Portuguese), and the larger Gallo-Romance languages (stretching from eastern Spain to northeast Italy). Probably a more accurate description, however, would be to say that there was a focal point of innovation located in central France, from which a series of innovations spread out as areal changes. The La Spezia–Rimini Line represents the farthest point to the southeast that these innovations reached, corresponding to the northern chain of the Apennine Mountains, which cuts straight across northern Italy and forms a major geographic barrier to further language spread. This would explain why some of the "northwest" features (almost all of which can be characterized as innovations) end at differing points in northern Italy, and why some of the languages in geographically remote parts of Spain (in the south, and high in the Pyrenees) are lacking some of these features. It also explains why the languages in France (especially standard French) seem to have innovated earlier and more extensively than other Western Romance languages. Many of the "southeast" features also apply to the Eastern Romance languages (particularly, Romanian), despite the geographic discontinuity. Examples are lack of lenition, maintenance of intertonic vowels, use of vowel-changing plurals, and palatalization of /k/ to . 
This overlap of features has led some researchers, following Walther von Wartburg, to postulate a basic two-way east–west division, with the "Eastern" languages including Romanian and central and southern Italian, although this view is troubled by the contrast of numerous Romanian phonological developments with those found in Italy below the La Spezia–Rimini Line. Among these features, in Romanian geminates reduced historically to single units — which may be an independent development or perhaps due to Slavic influence — and /kt/ developed into /pt/, whereas in central and southern Italy geminates are preserved and /kt/ underwent assimilation to /tt/. Despite being the first Romance language to diverge from spoken Latin, Sardinian does not fit at all into this sort of division. It is clear that Sardinian became linguistically independent from the remainder of the Romance languages at an extremely early date, possibly already by the first century BC. Sardinian contains a large number of archaic features, including total lack of palatalization of /k/ and /ɡ/ and a large amount of vocabulary preserved nowhere else, including some items already archaic by the time of Classical Latin (first century BC). Sardinian has plurals in /s/ but post-vocalic lenition of voiceless consonants is normally limited to the status of an allophonic rule (e.g. [k]ane 'dog' but su [ɡ]ane or su [ɣ]ane 'the dog'), and there are a few innovations unseen elsewhere, such as a change of /au/ to /a/. Use of su < ipsum as an article is a retained archaic feature that also exists in the Catalan of the Balearic Islands and that used to be more widespread in Occitano-Romance, and is known as (literally the "salted article"), while Sardinian shares develarisation of earlier /kw/ and /ɡw/ with Romanian: Sard. abba, Rom. apă 'water'; Sard. limba, Rom. limbă 'language' (cf. Italian acqua, lingua). Dialects of southern Italy, Sardinia and Corsica The Sardinian-type vowel system is also found in a small region belonging to the (also known as Lausberg zone; compare ) of southern Italy, in southern Basilicata, and there is evidence that the Romanian-type "compromise" vowel system was once characteristic of most of southern Italy, although it is now limited to a small area in western Basilicata centered on the Castelmezzano dialect, the area being known as , the German word for 'outpost'. The Sicilian vowel system, now generally thought to be a development based on the Italo-Western system, is also represented in southern Italy, in southern Cilento, Calabria and the southern tip of Apulia, and may have been more widespread in the past. The greatest variety of vowel systems outside of southern Italy is found in Corsica, where the Italo-Western type is represented in most of the north and center and the Sardinian type in the south, as well as a system resembling the Sicilian vowel system (and even more closely the Carovignese system) in the Cap Corse region; finally, in between the Italo-Western and Sardinian system is found, in the Taravo region, a unique vowel system that cannot be derived from any other system, which has reflexes like Sardinian for the most part, but the short high vowels of Latin are uniquely reflected as mid-low vowels. Gallo-Romance languages Gallo-Romance can be divided into the following subgroups: The Langues d'oïl, including French and closely related languages. The Franco-Provençal language (also known as Arpitan) of southeastern France, western Switzerland, and the Aosta Valley region of northwestern Italy.
The following groups are also sometimes considered part of Gallo-Romance: The Occitano-Romance languages of southern France, namely Occitan and Gascon. The Catalan language of eastern Iberia is also sometimes included in Gallo-Romance. This is, however, disputed by some linguists, who prefer to group it with Iberian Romance, since although Old Catalan is close to Old Occitan, it later adjusted its lexicon to some degree to align with Spanish. In general, however, modern Catalan, especially grammatically, remains closer to modern Occitan than to either Spanish or Portuguese. The Gallo-Italian languages of northern Italy, including Piedmontese, Ligurian, Lombard, Emilian and Romagnol. Ligurian retains the final -o, being the exception in Gallo-Romance. The Rhaeto-Romance languages, including Romansh, Friulian, and the Ladin dialects. The Gallo-Romance languages are generally considered the most innovative (least conservative) among the Romance languages. Characteristic Gallo-Romance features generally developed earliest and appear in their most extreme manifestation in the Langue d'oïl, gradually spreading out along riverways and transalpine roads. In some ways, however, the Gallo-Romance languages are conservative. The older stages of many of the languages preserved a two-case system consisting of nominative and oblique, fully marked on nouns, adjectives and determiners, inherited almost directly from the Latin nominative and accusative and preserving a number of different declensional classes and irregular forms. The languages closest to the oïl epicenter preserve the case system the best, while languages at the periphery lose it early. Notable characteristics of the Gallo-Romance languages are: Early loss of unstressed final vowels other than — a defining characteristic of the group. Further reductions of final vowels in Langue d'oïl and many Gallo-Italic languages, with the feminine and prop vowel merging into , which is often subsequently dropped. Early, heavy reduction of unstressed vowels in the interior of a word (another defining characteristic). Loss of final vowels phonemicized the long vowels that used to be automatic concomitants of stressed open syllables. These phonemic long vowels are maintained directly in many Northern Italian dialects; elsewhere, phonemic length was lost, but in the meantime many of the long vowels diphthongized, resulting in a maintenance of the original distinction. The langue d'oïl branch is again at the forefront of innovation, with no fewer than five of the seven long vowels diphthongizing (only high vowels were spared). Front rounded vowels are present in all branches of Gallo-Romance. usually fronts to , and secondary mid front rounded vowels often develop from long or . Extreme lenition (i.e. multiple rounds of lenition) occurs in many languages, especially in the Langue d'oïl and many Gallo-Italian languages. The Langue d'oïl, Swiss Rhaeto-Romance languages and many of the northern dialects of Occitan have a secondary palatalization of and before , producing different results from the primary Romance palatalization: e.g. centum "hundred" > cent , cantum "song" > chant . Other than the Occitano-Romance languages, most Gallo-Romance languages are subject-obligatory (whereas all the rest of the Romance languages are pro-drop languages). This is a late development triggered by progressive phonetic erosion: Old French was still a null-subject language, and this only changed upon loss of secondarily final consonants in Middle French.
Pidgins, creoles, and mixed languages Some Romance languages have developed varieties which seem dramatically restructured as to their grammars or to be mixtures with other languages. There are several dozen creoles of French, Spanish, and Portuguese origin, some of them spoken as national languages in former European colonies. Creoles of French: Antillean (French Antilles, Saint Lucia, Dominica) Haitian (one of Haiti's two official languages) Louisiana (US) Mauritian (lingua franca of Mauritius) Réunion (native language of Réunion) Seychellois (Seychelles' official language) Creoles of Spanish: Chavacano (in parts of the Philippines) Palenquero (in parts of Colombia) Creoles of Portuguese: Angolar (regional language in São Tomé and Príncipe) Cape Verdean (Cape Verde's national language; includes several distinct varieties) Forro (regional language in São Tomé and Príncipe) Kristang (Malaysia) Macanese (Macau) Papiamento (official language of the Dutch Antilles) Guinea-Bissau Creole (Guinea-Bissau's national language) Auxiliary and constructed languages Latin and the Romance languages have also served as the inspiration and basis of numerous auxiliary and constructed languages, so-called "Neo-Romance languages". The concept was first developed in 1903 by Italian mathematician Giuseppe Peano, under the title Latino sine flexione. He wanted to create a naturalistic international language, as opposed to an autonomous constructed language like Esperanto or Volapük, which were designed for maximal simplicity of lexicon and derivation of words. Peano used Latin as the base of his language because at the time it was the de facto international language of scientific communication. Other languages developed include Idiom Neutral (1902), Interlingue-Occidental (1922), Interlingua (1951) and Lingua Franca Nova (1998). The most famous and successful of these is Interlingua. Each of these languages has attempted to varying degrees to achieve a pseudo-Latin vocabulary as common as possible to living Romance languages. Some languages have been constructed specifically for communication among speakers of Romance languages, the Pan-Romance languages. There are also languages created for artistic purposes only, such as Talossan. Because Latin is a very well attested ancient language, some amateur linguists have even constructed Romance languages that mirror real languages that developed from other ancestral languages. These include Brithenig (which mirrors Welsh), Breathanach (mirrors Irish), Wenedyk (mirrors Polish), Þrjótrunn (mirrors Icelandic), and Helvetian (mirrors German). Modern status The Romance language most widely spoken natively today is Spanish, followed by Portuguese, French, Italian and Romanian, which together cover a vast territory in Europe and beyond, and work as official and national languages in dozens of countries. French, Italian, Portuguese, Spanish, and Romanian are also official languages of the European Union. Spanish, Portuguese, French, Italian, Romanian, and Catalan were the official languages of the defunct Latin Union; and French and Spanish are two of the six official languages of the United Nations. Outside Europe, French, Portuguese and Spanish are spoken and enjoy official status in various countries that emerged from the respective colonial empires. Spanish is an official language in Spain and in nine countries of South America, home to about half that continent's population; in six countries of Central America (all except Belize); and in Mexico.
In the Caribbean, it is official in Cuba, the Dominican Republic, and Puerto Rico. In all these countries, Latin American Spanish is the vernacular language of the majority of the population, giving Spanish the most native speakers of any Romance language. In Africa it is an official language of Equatorial Guinea. Portuguese, in its original homeland, Portugal, is spoken by virtually the entire population of 10 million. As the official language of Brazil, it is spoken by more than 200 million people in that country, as well as by neighboring residents of eastern Paraguay and northern Uruguay, accounting for a little more than half the population of South America, thus making Portuguese the most spoken official Romance language in a single country. It is the official language of six African countries (Angola, Cape Verde, Guinea-Bissau, Mozambique, Equatorial Guinea, and São Tomé and Príncipe), and is spoken as a first language by perhaps 30 million residents of that continent. In Asia, Portuguese is co-official with other languages in East Timor and Macau, while most Portuguese-speakers in Asia—some 400,000—are in Japan due to return immigration of Japanese Brazilians. In North America, 1,000,000 people speak Portuguese as their home language. In Oceania, Portuguese is the second most spoken Romance language, after French, due mainly to the number of speakers in East Timor. Its closest relative, Galician, has official status in the autonomous community of Galicia in Spain, together with Spanish. Outside Europe, French is spoken natively most widely in the Canadian province of Quebec, and in parts of New Brunswick and Ontario. Canada is officially bilingual, with French and English being the official languages. In parts of the Caribbean, such as Haiti, French has official status, but most people speak creoles such as Haitian Creole as their native language. French also has official status in much of Africa, with relatively few native speakers but larger numbers of second language speakers. In France's remaining overseas possessions, native use of French is increasing. Although Italy also had some colonial possessions before World War II, its language did not remain official after the end of the colonial domination. As a result, Italian outside of Italy and Switzerland is now spoken only as a minority language by immigrant communities in North and South America and Australia. In some former Italian colonies in Africa—namely Libya, Eritrea and Somalia—it is spoken by a few educated people in commerce and government. Romania did not establish a colonial empire, and the native range of Romanian includes not only the former Soviet republic of Moldova, where it is the dominant language and spoken by a majority of the population, but also neighboring areas in Serbia (Vojvodina and the Bor District), Bulgaria, Hungary, and Ukraine (Bukovina, Budjak) and in some villages between the Dniester and Bug rivers. As with Italian, Romanian is spoken outside of its ethnic range by immigrant communities, such as in other European countries (notably Italy, Spain, and Portugal, in all three of which Romanian-speakers form about two percent of the population), as well as in Israel, where it was brought by Romanian Jews, is the native language of five percent of the population, and is spoken by many more as a secondary language. The Aromanian language is spoken today by Aromanians in Bulgaria, Macedonia, Albania, Kosovo, and Greece. The total of 880 million native speakers of Romance languages (ca.
2020) are divided as follows: Spanish 54% (475 million, plus 75 million L2 for 550 million Hispanophones) Portuguese 26% (230 million, plus 30 million L2 for 260 million Lusophones) French 9% (80 million, plus 195 million L2 for 275 million Francophones) Italian 7% (65 million, plus 3 million L2) Romanian 3% (24 million) Catalan 0.5% (4 million, plus 5 million L2) Others 3% (26 million, nearly all bilingual in one of the national languages) Catalan is the official language of Andorra. In Spain, it is co-official with Spanish in Catalonia, the Valencian Community (under the name Valencian), and the Balearic Islands, and it is recognized, but not official, in an area of Aragon known as La Franja. In addition, it is spoken by many residents of Alghero, on the island of Sardinia, and it is co-official in that city. Galician, with more than a million native speakers, is official together with Spanish in Galicia, and has legal recognition in neighbouring territories in Castilla y León. A few other languages have official recognition on a regional or otherwise limited level; for instance, Asturian and Aragonese in Spain; Mirandese in Portugal; Friulian, Sardinian and Franco-Provençal in Italy; and Romansh in Switzerland. The remaining Romance languages survive mostly as spoken languages for informal contact. National governments have historically viewed linguistic diversity as an economic, administrative or military liability, as well as a potential source of separatist movements; therefore, they have generally fought to eliminate it, by extensively promoting the use of the official language, restricting the use of the other languages in the media, recognizing them as mere "dialects", or even persecuting them. As a result, all of these languages are considered endangered to varying degrees according to the UNESCO Red Book of Endangered Languages, ranging from "vulnerable" (e.g. Sicilian and Venetian) to "severely endangered" (Franco-Provençal, most of the Occitan varieties). Since the late twentieth and early twenty-first centuries, increased sensitivity to the rights of minorities has allowed some of these languages to start recovering their prestige and lost rights. Yet it is unclear whether these political changes will be enough to reverse the decline of minority Romance languages.
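As a quick arithmetic check (illustrative only), the percentage shares quoted above can be recomputed from the rounded native-speaker counts:

```python
# Recomputing the quoted native-speaker shares against the ~880 million total.
total = 880  # millions, ca. 2020, as quoted above
native_millions = {
    "Spanish": 475, "Portuguese": 230, "French": 80,
    "Italian": 65, "Romanian": 24, "Catalan": 4, "Others": 26,
}
for language, millions in native_millions.items():
    print(f"{language}: {millions} million = {100 * millions / total:.1f}%")
# The rounded counts sum to 904 million, slightly over the quoted
# 880 million total, so the listed percentages are approximate.
print("sum:", sum(native_millions.values()), "million")
```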
1995, following the 1995 Rugby World Cup in South Africa. The respective world governing bodies are World Rugby (rugby union) and the Rugby League International Federation (rugby league). Rugby football was one of many versions of football played at English public schools in the 19th century. Although rugby league initially used rugby union rules, they are now wholly separate sports. In addition to these two codes, both American and Canadian football evolved from rugby football at the beginning of the 20th century. Forms Following the 1895 split in rugby football, the two forms, rugby league and rugby union, differed in administration only. Soon the rules of rugby league were modified, resulting in two distinctly different forms of rugby. Rugby union would not become an openly professional sport until 100 years later. The Olympic form of rugby is known as Rugby Sevens. In this form of the game, each team has seven players on the field at one time, playing seven-minute halves. The rules and pitch size are the same as in rugby union. History Antecedents of rugby Although rugby football was codified at Rugby School, many rugby playing countries had pre-existing football games similar to rugby. Forms of traditional football similar to rugby have been played throughout Europe and beyond. Many of these involved handling of the ball, and scrummaging formations. For example, New Zealand had Ki-o-rahi, Australia marn grook, Japan kemari, Georgia lelo burti, the Scottish Borders Jeddart Ba' and Cornwall Cornish hurling, Central Italy Calcio Fiorentino, South Wales cnapan, East Anglia Campball and Ireland had caid, an ancestor of Gaelic football. Establishment of modern rugby In 1871, English clubs met to form the Rugby Football Union (RFU). In 1895, after charges of professionalism (compensation of team members) had been made against some clubs for paying players for missing work, the Northern Rugby Football Union, usually called the Northern Union (NU), was formed. The existing rugby union authorities responded by issuing sanctions against the clubs, players, and officials involved in the new organization. After the schism, the separate codes became known as "rugby league" and "rugby union". Global status of rugby codes Rugby union is both a professional and amateur game, and is dominated by the first tier unions: New Zealand, Ireland, Wales, England, South Africa, Australia, Argentina, Scotland, Italy, France and Japan. Second and third tier unions include Belgium, Brazil, Canada, Chile, Fiji, Georgia, Germany, Hong Kong, Kenya, Namibia, the Netherlands, Portugal, Romania, Russia, Samoa, Spain, Tonga, the United States and Uruguay. Rugby union is administered by World Rugby (WR), whose headquarters are located in Dublin, Ireland. It is the national sport in New Zealand, Fiji, Samoa, Tonga, Georgia, Wales and Madagascar, and is the most popular form of rugby globally. The Olympic Games have admitted the seven-a-side version of the game, known as Rugby sevens, into the programme from Rio de Janeiro in 2016 onwards. There was a possibility that sevens would be a demonstration sport at the 2012 London Olympics, but many sports, including sevens, were dropped. In Canada and the United States, rugby developed into gridiron football.
During the late 1800s (and even the early 1900s), the two forms of the game were very similar (to the point where the United States was able to win the gold medal for rugby union at the 1924 Summer Olympics), but numerous rule changes have differentiated the gridiron-based game from its rugby counterpart, introduced by Walter Camp in the United States and John Thrift Meldrum Burnside in Canada. Among unique features of the North American game are: the separation of play into downs instead of releasing the ball immediately upon tackling; the requirement that the team with the ball set into a set formation for at least one second before resuming play after a tackle (and the allowance of up to 40 seconds to do so); the allowance for one forward pass from behind the site of the last tackle on each down; the evolution of hard plastic equipment (particularly the football helmet and shoulder pads); a smaller and pointier ball that is favorable to being passed but makes drop kicks impractical; a generally smaller and narrower field measured in customary units instead of metric (in some variants of the American game a field can be as short as 50 yards between end zones); and a distinctive field (shaped like a gridiron, from which the code's nickname is derived) with lines marked in five-yard intervals. Rugby league is also both a professional and amateur game, administered on a global level by the Rugby League International Federation. In addition to amateur and semi-professional competitions in the United States, Russia, Lebanon, Serbia, Europe and Australasia, there are two major professional competitions—the Australasian National Rugby League and the Super League. International Rugby League is dominated by Australia, England and New Zealand. In Papua New Guinea, it is the national sport. Other nations from the South Pacific and Europe also play in the Pacific Cup and European Cup respectively. Rules Distinctive features common to both rugby codes include the oval ball and the prohibition on throwing the ball forward, so that players can gain ground only by running with the ball or by kicking it. As the sport of rugby league moved further away from its union counterpart, rule changes were implemented with the aim of making a faster-paced and more try-oriented game. Unlike American and Canadian football, the players do not wear any sort of protection or armour. The main differences between the two games, besides league having teams of 13 players and union of 15, involve the tackle and its aftermath: Union players contest possession following the tackle: depending on the situation, either a ruck or a maul can occur. League players may not contest possession after making a tackle: play is continued with a play-the-ball. In league, if the team in possession fails to score before a set of six tackles, it surrenders possession. Union has no six-tackle rule; a team can keep the ball for an unlimited number of tackles before scoring as long as it maintains possession and does not commit an offence. Set pieces of the union code include the "scrum", which occurs after a minor infringement of the rules (most often a knock-on, when a player knocks the ball forward), where packs of opposing players push against each other for possession, and the "line-out", in which parallel lines of players from each team, arranged perpendicular to the touch-line, attempt to catch the ball thrown from touch. A rule has been added to line-outs which allows the jumper to be pulled down once a player's feet are on the ground.
In the league code, the scrum still exists, but with greatly reduced importance as it involves fewer players and is rarely contested. Set pieces are generally started from the play-the-ball situation. Many of the rugby league positions have names and requirements similar to rugby union positions, but there are no flankers in rugby league. Culture Home countries In England, rugby union is widely regarded as an "establishment" sport, played mostly by members of the upper and middle classes. For example, many pupils at public schools and grammar schools play rugby union, although the game (which had a long history of being played at state schools until the 1980s) is becoming increasingly popular in comprehensive schools. Despite this stereotype, the game, particularly in the West Country, is popular amongst all classes. In contrast, rugby league has traditionally been seen as a working-class pursuit. Another exception to rugby union's upper-class stereotype is in Wales, where it has been traditionally associated with small village teams made up of coal miners and other industrial workers who played on their days off. In Ireland, both rugby union and rugby league are unifying forces across the national and sectarian divide, with the Ireland international teams representing both political entities. In Australia, support for both codes is concentrated in New
South Wales, Queensland and the Australian Capital Territory. The same perceived class barrier as exists between the two games in England also occurs in these states, fostered by rugby union's prominence and support at private schools.
Exceptions to the above include New Zealand (although rugby league is still considered by many to be a lower-class game, or a game for 'westies', referring to the lower-class western suburbs of Auckland and, more recently, southern Auckland, where the game is also popular), Wales, France (except Paris), Cornwall, Gloucestershire, Somerset, the Scottish Borders, County Limerick (see Munster Rugby) and the Pacific Islands, where rugby union is popular in working-class communities. Nevertheless, rugby league is perceived as the game of working-class people in northern England and in the Australian states of New South Wales and Queensland. In the United Kingdom, rugby union fans sometimes used the term "rugger" as an alternative name for the sport (see Oxford '-er'), although this archaic expression has not had currency since the 1950s or earlier. New Zealanders refer to rugby union simply as either "rugby" or "union", or even simply "football", and to rugby league as "rugby league" or "league". In the U.S., people who play rugby are sometimes called "ruggers", a term little used elsewhere except facetiously. Internationally There is a strong tradition of rugby union in France, particularly in the Basque, Occitan and Catalan areas along the border with Spain. The game is very popular in South Africa, having been introduced by English-speaking settlers in the 19th century. British colonists also brought the game with them to Australia and New Zealand, where the game is widely played. It has spread since to much of Polynesia, having particularly strong followings in Fiji, Samoa, and Tonga. Rugby union continues to grow in the Americas and parts of Asia as well. Injuries About a quarter of rugby players are injured in each season. Being a high-contact sport, rugby union has the highest reported rate of concussion of any team sport and, outside England, also the highest number of catastrophic injuries. Research has found that concussion is reported at a higher rate during match play and at a lower rate during training, though even the training rate is higher than players of most other sports experience. Rugby ball A rugby ball, originally called a quanco, is a diamond-shaped ball used for easier passing. Richard Lindon and Bernardo Solano started making balls for Rugby school out of hand-stitched, four-panel leather casings and pigs' bladders. The rugby ball's distinctive shape is supposedly due to the pig's bladder, although early balls were more plum-shaped than oval. The balls varied in size in the beginning, depending upon how large the pig's bladder was. In rugby union, World Rugby regulates the size and shape of the ball
by Sting "Russian", from the album Tubular Bells 2003 by Mike Oldfield "Russian", from the album <|°_°|> by Caravan Palace Nik Russian, the perpetrator of a con committed in 2002 The South African name for a variety of Kielbasa sausage Something related to Ruthenia Ruthenians Ruthenian language Something related to the Russian Empire or Soviet Union Soviet people East Slavs All-Russian nation See also Russia
people of Russia, regardless of ethnicity Russophone, Russian-speaking person (, russkogovoryashchy, russkoyazychny) Russian language, the most widely spoken of the Slavic languages Russian alphabet Russian cuisine Russian culture Russian studies Russian may also refer to: Russian dressing The Russians, a book by Hedrick Smith Russian (comics), fictional Marvel Comics supervillain from The Punisher series Russian (solitaire), a card game "Russians" (song), from
The inside centre is also known as the second five-eighth in the Southern Hemisphere. The centres will attempt to tackle attacking players; whilst in attack, they should employ speed and strength to breach opposition defences. The wings are generally positioned on the outside of the backline. Their primary function is to finish off moves and score tries. Wings are usually the fastest players in the team and are elusive runners who use their speed to avoid tackles. Full-back The full-back is normally positioned several metres behind the back line. They often field opposition kicks and are usually the last line of defence should an opponent break through the back line. Two of the most important attributes of a good full-back are dependable catching skills and a good kicking game. Laws and gameplay Scoring Rugby union is played between two teams – the one that scores more points wins the game. Points can be scored in several ways: a try, scored by grounding the ball in the in-goal area (between the goal line and the dead-ball line), is worth 5 points and a subsequent conversion kick scores 2 points; a successful penalty kick or a drop goal each score 3 points. The values of each of these scoring methods have been changed over the years. Playing field According to World Rugby's Laws of the Game, a typical rugby ground, formally known as the "playing enclosure", is formed by two major zones: The "playing area", which includes the "field of play" and the two "in-goals", and The "perimeter area", a clear space, free of obstructions such as fences and other objects which could pose a danger to players and officials (but not including marker flags, which are typically of soft construction). The referee (and their assistants) generally have full authority and responsibility for all players and other officials inside the playing enclosure. Fences or ropes (particularly at amateur clubs) are generally used to mark the extent of this area, although in modern stadia this may include the entire arena floor or other designated space. The Laws, above all, require that the playing enclosure's surface be safe, whilst also permitting grass, sand, clay, snow or conforming artificial turf to be used; the surface would generally be uniform across both the playing area and perimeter area, although depending on how large the perimeter is, other surfaces such as dirt, artificial turf, etc. may be used outside of a "sliding" perimeter from the bounds of the playing area. Playing area For the most part, the "playing area" is where the majority of play occurs. The ball is generally considered live whilst in this area, so long as players do not infringe, with special rules applied to specific zones of the playing area. The playing area consists of: The "field of play", bounded by (but not including) the sidelines and goal-lines, and One "in-goal" area at each end of the field, each bounded by, but not including, the extensions of the two parallel sidelines (known in this context as the "touch in-goal" lines) and the dead-ball line, with its other bound being the goal line (or "try line"), which is included as part of the "in-goal" area. Field of play A typical "field of play" is generally 100 metres long by 68–70 metres wide for senior rugby, depending on the specific requirements of each ground. The Laws require the field of play to be between 94 metres (103 yards) and 100 metres (109 yards) long, with a width of between 68 metres (75 yards) and 70 metres (77 yards). 
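To make the permitted ranges concrete, the following minimal Python sketch checks a proposed field of play against the dimensions stated above. The function name, messages and structure are illustrative assumptions, not anything defined by World Rugby:

```python
# Minimal sketch: validate field-of-play dimensions against the ranges
# described above (94-100 m long, 68-70 m wide). Illustrative only.

def check_field_of_play(length_m: float, width_m: float) -> list:
    """Return a list of problems with the proposed dimensions (empty if valid)."""
    problems = []
    if not 94 <= length_m <= 100:
        problems.append(f"length {length_m} m is outside the 94-100 m range")
    if not 68 <= width_m <= 70:
        problems.append(f"width {width_m} m is outside the 68-70 m range")
    return problems

# A typical senior pitch (100 m x 68 m) passes; a 90 m pitch does not.
print(check_field_of_play(100, 68))  # []
print(check_field_of_play(90, 68))   # ['length 90 m is outside the 94-100 m range']
```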
As with other football codes, such as association football and rugby league, which specify a preferred or standard 68-metre width, this width is often used unless a ground has been specifically designed to accommodate a 70-metre rugby field. 100 metres is the typical length, with a line (see below) often marked at halfway with "50" on it, representing 50 metres from each goal line. The variations have been allowed in the Laws, possibly to accommodate older grounds (perhaps even pre-metrification, when yards and feet were specified) and developing nations. Other lines and markings The field of play is divided by a solid "halfway" line, drawn perpendicular to the sidelines at their midpoint. A 0.5 m line is marked perpendicular to the halfway line at its midpoint, designating the spot from which kick-offs are taken. The areas between each goal line and the halfway line are known as "halves", as in other football codes. A pair of solid lines are also drawn perpendicular to the sidelines, 22 metres (formerly 25 yards) from each end of the field of play and called the 22-metre lines, or "22"s. An area at each end, also known as the "22", is bounded by, but does not include, the sidelines, goal line and 22-metre line. In this area, a defensive player who cleanly catches a ball kicked by the other team, without the ball having already touched the ground after the kick, is entitled to claim a free kick, or "mark". Additional broken or dashed lines (of 5-metre dash lengths, according to the Laws) are drawn in each half of, or on each side of, the field, each with specific purposes under the Laws: "10-metre" lines: Dashed lines 10 metres either side of, and parallel to, the halfway line, designating the minimum distance a receiving team must retreat when receiving a kick-off, and the minimum distance a kick-off must travel to be legal. Equivalent to the 40-metre lines in rugby league but generally marked differently. "5-metre" lines: Dashed lines 5 metres into the field of play, parallel to each goal line. Scrums can be packed no nearer to each goal line than this line, and referees will often penalise scrum and ruck infringements in this area more harshly, as defending sides will often try to stifle the attacking side's breakdown play. "Tram tracks/tramlines": Unnamed in the Laws and sometimes also referred to, confusingly, as the "5-metre" and "15-metre" lines, these two pairs of dashed lines are drawn parallel to each sideline, 5 metres and 15 metres, respectively, into the field of play from the nearer sideline, terminating at each of their respective ends' 5-metre line (parallel and adjacent to the goal line). The area between these lines is where players must stand when contesting a lineout throw. Additionally, the area between the two perpendicular sets of "5-metre" lines (i.e. 5 metres from each sideline and 5 metres from each goal line) is designated the "scrum zone". Where an offence occurs outside this area and the non-infringing side wishes to pack a scrum, the mark of the scrum will be moved into the zone by the referee. Generally, points where the dashed lines intersect other lines will be marked with a "T" or cross shape, although the extensions of dashed lines are generally not drawn within 5 metres of the goal lines or sidelines, to allow a clear demarcation of the field of play's boundaries. The Laws require the playing area to be rectangular in shape; however, variations may be permitted with the approval of relevant unions. 
A notable example is Chatswood Oval in Sydney, Australia, an elliptically shaped cricket ground and home of the Gordon rugby club, which has curved dead-ball lines to maximise the available in-goal space. Where multiple sports share a field (e.g. a rugby league and a rugby union club sharing one field), lines may be overlaid on top of each other, sometimes in different colours. However, particularly for television, rugby union line markings are generally painted white. Some exceptions include the Wallabies (Australia's national team), who often have yellow markings. Local clubs may use black, yellow, or other colours on grass, with other surfaces possibly requiring different marking techniques. Unlike association football, where on-field advertising is strictly forbidden in the laws, World Rugby allows sponsors' logos to be painted on the playing surface. This is another way in which clubs can make money in the professional era, and it is also often used by host nations, professional leagues and tournaments as an additional revenue stream, particularly when games are broadcast. In recent years, augmented reality technology has been used in place of painting, to protect the surface or save costs, producing a similar effect for broadcast albeit sometimes with poorer results. In-goal areas The in-goal areas sit behind the goal lines, equivalent to American football's "end zones". The in-goal areas must be between 6 metres (7 yards) and 22 metres (25 yards) deep and cover the full width of the field. A ball grounded in this area by an attacking player will generally result in a try being awarded, unless there has been a previous infringement or the player has gone out-of-bounds whilst in possession of the ball. Perimeter area The perimeter area is considered "out-of-bounds" for the ball and the players, normally resulting in the non-infringing team receiving possession of the ball at a restart. The perimeter area can be divided into two areas: "Touch": The perimeter area beyond the sidelines of the playing area, but between the goal lines. "Touch-in-goal": The perimeter areas behind each goal line outside of the playing area. Some may refer to a ball which crosses the dead-ball lines as "dead", rather than touch-in-goal. For the purposes of determining if a ball is "out-of-bounds" (i.e. has left the playing area), the perimeter area extends indefinitely away from the playing area. When a ball or player goes into touch, a lineout throw is generally awarded to the opposition at the spot on the sideline where they left the field. Exceptions include a kick out "on the full" (i.e. the ball did not land in the field of play before going into touch), in which case the lineout would still take place on the sideline but back in line with where the ball was kicked, or when a team takes a free kick from a penalty, in which case they retain the right to throw in. The perimeter area should be clear and free of obstructions and heavy, solid objects which could pose a danger to players for at least 5 metres from the playing area, according to the Laws. Players often leave the playing area, whether accidentally or due to being forced off the field, sometimes sliding or needing to slow down from a sprint. Many venues at elite levels leave larger spaces around the field to accommodate fitter and faster (or heavier) players. Fixed cameras on tripods and advertising hoardings are often the main culprits for injuring players in the perimeter area. 
Flag posts Also required in the perimeter area are a set of 14 flag posts, each with a minimum height of 1.2 metres, marking the intersections of certain lines or other nominated distances. These are generally a plastic pole on a spring-loaded or otherwise soft base, sometimes with a flag on top, covered in foam padding. Others may be moulded plastic or disposable cardboard. At lower levels, these flags may not be used, but they are still specified in the Laws. Flags are placed as follows: One flag post at each intersection of the touch-in-goal lines and the goal lines (4 flags total) One flag post at each intersection of the touch-in-goal lines and the dead-ball lines (4 flags total) One flag post positioned 2 metres outside of both of the sidelines, in line with both of the 22-metre lines (4 flags total) One flag post positioned 2 metres outside of both of the sidelines, in line with the halfway line (2 flags total) Goalposts Rugby goalposts are H-shaped and are situated in the middle of the goal lines at each end of the field. They consist of two vertical poles (known as "uprights"), generally made of steel or other metal but sometimes wood or plastic, 5.6 metres apart, connected by a horizontal "crossbar" 3 metres above the ground. The minimum height for the uprights is 3.4 metres, with taller posts generally seen. The bottom parts of each upright are generally wrapped in purpose-made padding to protect players from injury when coming into contact with the posts, also creating another opportunity for sponsors. If an attacking player grounds the ball onto the base of the upright or post padding, a try will be awarded, as the base of the upright is considered in-goal. Match structure At the beginning of the game, the captains and the referee toss a coin to decide which team will kick off first. Play then starts with a dropkick, with the players chasing the ball into the opposition's territory, and the other side trying to retrieve the ball and advance it. For the dropkick, the ball must make contact with the ground before being kicked. If the ball does not reach the opponents' line 10 metres away, the opposing team has two choices: to have the ball kicked off again, or to have a scrum at the centre of the halfway line. If the player with the ball is tackled, frequently a ruck will result. Games are divided into 40-minute halves, with an intermission of not more than 15 minutes in the middle. The sides exchange ends of the field after the half-time break. Stoppages for injury or to allow the referee to take disciplinary action do not count as part of the playing time, so that the elapsed time is usually longer than 80 minutes. The referee is responsible for keeping time, even when, as in many professional tournaments, he is assisted by an official time-keeper. If time expires while the ball is in play, the game continues until the ball is "dead", and only then will the referee blow the whistle to signal half-time or full-time; but if the referee awards a penalty or free-kick, the game continues. In the knockout stages of rugby competitions, most notably the Rugby World Cup, two extra-time periods of 10 minutes each are played (with an interval of 5 minutes in between) if the game is tied after full-time. If scores are level after 100 minutes, then the rules call for 20 minutes of sudden-death extra time to be played. If the sudden-death extra-time period results in no scoring, a kicking competition is used to determine the winner. 
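The tie-breaking sequence for knockout matches just described can be summarised in a short Python sketch; the stage labels and the function are invented for illustration and are not part of the Laws:

```python
# Illustrative sketch of the knockout tie-breaking sequence described
# above; stage names and the function itself are invented for the example.

STAGES = [
    "full-time (two 40-minute halves)",
    "extra time (two 10-minute periods)",
    "sudden-death extra time (20 minutes)",
    "kicking competition",
]

def next_stage(completed_stage: str, scores_level: bool) -> str:
    """Return what follows the completed stage when the scores remain level."""
    if not scores_level:
        return "match decided"
    i = STAGES.index(completed_stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else "match decided by kicks"

print(next_stage(STAGES[0], scores_level=True))   # extra time (two 10-minute periods)
print(next_stage(STAGES[1], scores_level=False))  # match decided
```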
However, no match in the history of the Rugby World Cup has ever gone past 100 minutes into a sudden-death extra-time period. Passing and kicking Forward passing (throwing the ball ahead to another player) is not allowed; the ball can be passed laterally or backwards. The ball tends to be moved forward in three ways: by kicking, by a player running with it, or within a scrum or maul. Only the player with the ball may be tackled or rucked. A "knock-on" is committed when a player knocks the ball forward, and play is restarted with a scrum. Any player may kick the ball forward in an attempt to gain territory. When a player anywhere in the playing area kicks indirectly into touch, so that the ball first bounces in the field of play, the throw-in is taken where the ball went into touch. If the player kicks directly into touch (i.e. without bouncing in-field first) from within their own 22, the lineout is taken by the opposition where the ball went into touch, but if the ball is kicked into touch directly by a player outside the 22, the lineout is taken level with where the kick was taken. Breakdowns The aim of the defending side is to stop the player with the ball, either by bringing them to ground (a tackle, which is frequently followed by a ruck) or by contesting for possession with the ball-carrier on their feet (a maul). Such a circumstance is called a breakdown, and each is governed by a specific law. Tackling A player may tackle an opposing player who has the ball by holding them while bringing them to ground. Tacklers cannot tackle above the shoulder (the neck and head are out of bounds), and the tackler has to attempt to wrap their arms around the player being tackled to complete the tackle. It is illegal to push, shoulder-charge, or trip a player using feet or legs, but hands may be used (this being referred to as a tap-tackle or ankle-tap). Tacklers may not tackle an opponent who has jumped to catch a ball until the player has landed. Rucking and mauling Mauls occur after a player with the ball has come into contact with an opponent but the handler remains on his feet; once any combination of at least three players have bound themselves, a maul has been set. A ruck is similar to the maul, but in this case the ball has gone to ground, with at least three attacking players binding themselves on the ground in an attempt to secure the ball. Set pieces Lineout When the ball leaves the side of the field, a line-out is awarded against the team which last touched the ball. Forward players from each team line up a metre apart, perpendicular to the touchline and between 5 and 15 metres from the touchline. The ball is thrown from the touchline down the centre of the lines of forwards by a player (usually the hooker) from the team that did not play the ball into touch. The exception to this is when the ball went out from a penalty, in which case the side that gained the penalty throws the ball in. Both sides compete for the ball and players may lift their teammates. A jumping player cannot be tackled until they stand, and only shoulder-to-shoulder contact is allowed; deliberate infringement of this law is dangerous play, and results in a penalty kick. Scrum A scrum is a way of restarting the game safely and fairly after a minor infringement. It is awarded when the ball has been knocked or passed forward, if a player takes the ball over their own try line and puts the ball down, when a player is accidentally offside, or when the ball is trapped in a ruck or maul with no realistic chance of being retrieved. 
A team may also opt for a scrum if awarded a penalty. A scrum is formed by the eight forwards from each team crouching down and binding together in three rows, before interlocking with the opposing team. For each team, the front row consists of two props (loosehead and tighthead) either side of the hooker. The two props are typically amongst the strongest players on the team. The second row consists of two locks and the two flankers. Behind the second row is the number 8. This formation is known as the 3–4–1 formation. Once a scrum is formed, the scrum-half from the team awarded the feed rolls the ball into the gap between the two front rows, known as the tunnel. The two hookers then compete for possession by hooking the ball backwards with their feet, while each pack tries to push the opposing pack backwards to help gain possession. The side that wins possession can either keep the ball under their feet while driving the opposition back, in order to gain ground, or transfer the ball to the back of the scrum, where it can be picked up by the number 8 or by the scrum-half. Officials and offences There are three match officials: a referee and two assistant referees. The referee is commonly addressed as "Sir". The assistant referees, formerly known as touch judges, had the primary function of indicating when the ball had gone into "touch"; their role has been expanded and they are now expected to assist the referee in a number of areas, such as watching for foul play and checking offside lines. In addition, for matches in high-level competitions, there is often a television match official (TMO; popularly called the "video referee") to assist with certain decisions, linked up to the referee by radio. The referees have a system of hand signals to indicate their decisions. Common offences include tackling above the shoulders; collapsing a scrum, ruck or maul; not releasing the ball when on the ground; and being offside. The non-offending team has a number of options when awarded a penalty: a "tap" kick, when the ball is kicked a very short distance from hand, allowing the kicker to regather the ball and run with it; a punt, when the ball is kicked a long distance from hand, for field position; a place-kick, when the kicker will attempt to score a goal; or a scrum. Players may be sent off (signalled by a red card) or temporarily suspended ("sin-binned") for ten minutes (yellow card) for foul play or repeated infringements, and may not be replaced. Occasionally, infringements are not caught by the referee during the match; these may be "cited" by the citing commissioner after the match, with punishments (usually suspension for a number of weeks) imposed on the infringing player. Replacements and substitutions During the match, players may be replaced (for injury) or substituted (for tactical reasons). A player who has been replaced may not rejoin play unless he was temporarily replaced to have bleeding controlled; a player who has been substituted may return temporarily, to replace a player who has a blood injury or has suffered a concussion, or permanently, if he is replacing a front-row forward. In international matches, eight replacements are allowed; in domestic or cross-border tournaments, at the discretion of the responsible national union(s), the number of replacements may be nominated to a maximum of eight, of whom three must be sufficiently trained and experienced to provide cover for the three front-row positions. 
Prior to 2016, all substitutions, no matter the cause, counted against the limit during a match. In 2016, World Rugby changed the law so that substitutions made to replace a player deemed unable to continue due to foul play by the opposition would no longer count against the match limit. This change was introduced in January of that year in the Southern Hemisphere and in June in the Northern Hemisphere. Equipment The most basic items of equipment for a game of rugby union are the ball itself, a rugby shirt (also known as a "jersey"), rugby shorts, socks, and boots. The rugby ball is oval in shape (technically a prolate spheroid), and is made up of four panels. The ball was historically made of leather, but in the modern era most games use a ball made from a synthetic material. World Rugby lays out specific dimensions for the ball: 280–300 mm in length, 740–770 mm in circumference end to end, and 580–620 mm in circumference around the width. Rugby boots have soles with studs to allow grip on the turf of the pitch. The studs may be either metal or plastic, but must not have any sharp edges or ridges. Protective equipment is optional and strictly regulated. The most common items are mouthguards, which are worn by almost all players and are compulsory in some rugby-playing nations. Other protective items that are permitted include headgear; thin (not more than 10 mm thick), non-rigid shoulder pads; and shin guards, which are worn underneath the socks. Bandages or tape can be worn to support or protect injuries; some players wear tape around the head to protect the ears in scrums and rucks. Female players may also wear chest pads. Although not worn for protection, some types of fingerless mitts are allowed to aid grip. It is the responsibility of the match officials to check players' clothing and equipment before a game to ensure that it conforms to the laws of the game. Governing bodies The international governing body of rugby union (and associated games such as sevens) is World Rugby (WR). The WR headquarters are in Dublin, Ireland. WR, founded in 1886, governs the sport worldwide and publishes the game's laws and rankings. As of February 2014, WR (then known as the IRB, for International Rugby Board) recorded 119 unions in its membership: 101 full members and 18 associate member countries. According to WR, rugby union is played by men and women in over 100 countries. WR controls the Rugby World Cup, the Women's Rugby World Cup, Rugby World Cup Sevens, HSBC Sevens Series, HSBC Women's Sevens Series, World Under 20 Championship, World Under 20 Trophy, Nations Cup and the Pacific Nations Cup. WR holds votes to decide where each of these events is to be held, except in the case of the Sevens World Series for men and women, for which WR contracts with several national unions to hold individual events. 
Six regional associations, which are members of WR, form the next level of administration; these are: Rugby Africa, formerly the Confederation of African Rugby (CAR); Asia Rugby, formerly the Asian Rugby Football Union (ARFU); Rugby Americas North, formerly the North America Caribbean Rugby Association (NACRA); Rugby Europe, previously the Fédération Internationale de Rugby Amateur – Association Européenne de Rugby (FIRA-AER); Oceania Rugby, formerly the Federation of Oceania Rugby Unions (FORU); and Sudamérica Rugby, formerly the Confederación Sudamericana de Rugby (South American Rugby Confederation, or CONSUR). SANZAAR (South Africa, New Zealand, Australia and Argentina Rugby) is a joint venture of the South African Rugby Union, New Zealand Rugby, Rugby Australia and the Argentine Rugby Union (UAR) that operates Super Rugby and The Rugby Championship (formerly the Tri Nations before the entry of Argentina). Although the UAR initially had no representation on the former SANZAR board, it was granted input into the organisation's issues, especially with regard to The Rugby Championship, and became a full SANZAAR member in 2016 (when the country entered Super Rugby). National unions oversee rugby union within individual countries and are affiliated to WR. Since 2016, the WR Council has had 40 seats. A total of 11 unions (the eight foundation unions of England, Scotland, Ireland, Wales, Australia, New Zealand, South Africa and France, plus Argentina, Italy and Japan) have two seats each. In addition, the six regional associations have two seats each. Four more unions (Canada, Georgia, Romania and the USA) have one seat each. Finally, the chairman and vice-chairman, who usually come from one of the eight foundation unions (although the current vice-chairman, Agustín Pichot, is with the non-foundation Argentine union), have one vote each. Global reach The earliest countries to adopt rugby union were England, the country of inception, and the other three Home Nations: Scotland, Ireland and Wales. The spread of rugby union as a global sport has its roots in the exporting of the game by British expatriates, military personnel, and overseas university students. The first rugby club in France was formed by British residents in Le Havre in 1872, while the next year Argentina recorded its first game: 'Banks' v 'City' in Buenos Aires. Seven countries have adopted rugby union as their de facto national sport; they are Fiji, Georgia, Madagascar, New Zealand, Samoa, Tonga and Wales. Oceania A rugby club was formed in Sydney, New South Wales, Australia in 1864, while the sport is said to have been introduced to New Zealand in 1870 by Charles Monro, who had played rugby while a student at Christ's College, Finchley. Several island nations have embraced the sport of rugby. Rugby was first played in Fiji circa 1884 by European and Fijian soldiers of the Native Constabulary at Ba on Viti Levu island. Fiji sent their first overseas team to Samoa in 1924, and Samoa set up its own union in the same year. Along with Tonga, other countries to have national rugby teams in Oceania include the Cook Islands, Niue, Papua New Guinea and the Solomon Islands. North America and Caribbean In North America, Canada's first club was formed in Montreal in 1868. The city of Montreal also played its part in the introduction of the sport in the United States, when students of McGill University played against a team from Harvard University in 1874. 
The two variants of gridiron football, Canadian football and, to a lesser extent, American football, were once considered forms of rugby football but are seldom now referred to as such. In fact, the governing body of Canadian football, Football Canada, was known as the Canadian Rugby Union (CRU) as late as 1967, more than fifty years after the sport parted ways with the established rules of rugby union. The Grey Cup, the trophy awarded to the victorious team playing in the namesake championship of the professional Canadian Football League (CFL), was originally awarded to the champion of the CRU. The two strongest leagues in the CRU, the Interprovincial Rugby Football Union in Eastern Canada and the Western Interprovincial Football Union in Western Canada, evolved into the present-day CFL. Although the exact date of arrival of rugby union in Trinidad and Tobago is unknown, their first club, Northern RFC, was formed in 1923, and a national team was playing by 1927; when a tour to British Guiana was cancelled in 1933, the team switched its venue to Barbados, introducing rugby to that island. Other Atlantic countries to play rugby union include Jamaica and Bermuda. Rugby union has been among the fastest-growing sports in the USA, at college level and in general. Major League Rugby is the professional rugby union competition in the US and Canada. Europe The growth of rugby union in Europe outside the Six Nations countries, in terms of playing numbers, attendances, and viewership, has been sporadic. Historically, the British and Irish home teams played the Southern Hemisphere teams of Australia, New Zealand, and South Africa, as well as France. The rest of Europe were left to play amongst themselves. During a period when it had been isolated by the British and Irish unions, France, lacking international competition, became the only European team from the top tier to regularly play the other European countries, mainly Belgium, the Netherlands, Germany, Spain, Romania, Poland, Italy and Czechoslovakia. In 1934, instigated by the French Rugby Federation, FIRA (Fédération Internationale de Rugby Amateur) was formed to organise rugby union outside the authority of the IRFB. The founding members were , , , , , and . Other European rugby-playing nations of note include Russia, whose first officially recorded match was an encounter between Dynamo Moscow and the Moscow Institute of Physical Education in 1933. Rugby union in Portugal also took hold between the First and Second World Wars, with a Portuguese national XV set up in 1922 and an official championship started in 1927. In 1999, FIRA agreed to place itself under the auspices of the IRB, transforming itself into a strictly European organising body. Accordingly, it changed its name to FIRA–AER (Fédération Internationale de Rugby Amateur – Association Européenne de Rugby). It adopted its current name of Rugby Europe in 2014. South America Although Argentina is the best-known rugby-playing nation in South America, having founded the Argentine Rugby Union in 1899, several other countries on the continent have a long history. Rugby had been played in Brazil since the end of the 19th century, but the game was played regularly only from 1926, when São Paulo beat Santos in an inter-city match. It took Uruguay several aborted attempts to adapt to rugby, led mainly by the efforts of the Montevideo Cricket Club; these efforts succeeded in 1951 with the formation of a national league and four clubs. 
Other South American countries that formed a rugby union include Chile (1948) and Paraguay (1968). Súper Liga Americana de Rugby is the professional rugby union competition in South America. Asia Many Asian countries have a tradition of playing rugby dating from the British Empire. India began playing rugby in the early 1870s, with the Calcutta Football Club forming in 1873. However, with the departure of a local British army regiment, interest in rugby diminished in the area. In 1878, the Calcutta Football Club was disbanded, and rugby in India faltered. Sri Lanka claims to have founded its union in 1878, and although little official information from the period is available, the team won the All-India Cup in Madras in 1920. The first recorded match in Malaysia was in 1892, but the first confirmed evidence of organised rugby there is the HMS Malaya Cup, first presented in 1922 and still awarded to the winners of the Malay sevens. Rugby union was introduced to Japan in 1899 by two Cambridge students, Ginnosuke Tanaka and Edward Bramwell Clarke. The Japan RFU was founded in 1926, and the country's place in rugby history was cemented when Japan hosted the 2019 World Cup. It was the first country outside the Commonwealth, Ireland and France to host the event, and was viewed by the IRB
and 22-metre line. In this area, a defensive player who cleanly catches a ball kicked by the other team, without the ball having already touched the ground after the kick, is entitled to claim a free kick, or "mark". Additional broken or dashed lines (of 5 metre dash lengths, according to the Laws) are drawn in each half or on each side of, the field, each with specific purposes under the Laws: "10-metre" lines: Dashed lines 10 metres either side of, and parallel to, the halfway line, designating the minimum distance a receiving team must retreat when receiving a kick-off, and the minimum distance a kick-off must travel to be legal. Equivalent to the 40-metre lines in rugby league but generally marked differently. "5-metre" lines: Dashed lines 5 metres into the field of play, parallel to each goal line. Scrums can be packed no nearer to each goal line than this line, and referees will often penalise scrum and ruck infringements in this area more harshly as defending sides will often try to stifle the attacking side's breakdown play. "Tram tracks/tramlines": Unnamed in the Laws and sometimes also referred to, confusingly, as the "5-metre" and "15-metre" lines, these two pairs of dashed lines are drawn parallel to each sideline, 5 metres and 15 metres, respectively, into the field of play from the nearer sideline, terminating at each of their respective ends' 5-metre line (parallel and adjacent to the goal line). The area between these lines are where players must stand when contesting a lineout throw. Additionally, the area between the two perpendicular sets of "5-metre" lines (i.e. 5 metres from each sideline and 5 metres from each goal line) is designated the "scrum zone". Where an offence occurs outside this area and the non-infringing side wishes to pack a scrum, the mark of the scrum will be moved into the zone by the referee. Generally, points where the dashed lines intersect other lines will be marked with a "T" or cross shape, although the extensions of dashed lines are generally not drawn within 5 metres of the goal lines or sidelines, to allow a clear demarcation of the field of play's boundaries. The Laws require the playing area to be rectangular in shape, however variations may be permitted with the approval of relevant unions. A notable example is Chatswood Oval in Sydney, Australia, an elliptically shaped cricket ground which is the home of Gordon rugby club, that has curved dead-ball lines to maximise the available in-goal space. Where multiple sports share a field (e.g. a rugby league and a rugby union club sharing one field), lines may be overlaid on top of each other, sometimes in different colours. However, particularly for television, rugby union line markings are generally painted white. Some exceptions include the Wallabies (Australia's national team) who often have yellow markings. Local clubs may use black, yellow, or other colours on grass, with other surfaces possibly requiring different marking techniques. Unlike association football, where on-field advertising is strictly forbidden in the laws, World Rugby allows sponsors logos to be painted on the playing surface. This is another way in which clubs can make money in the professional era and is also often used by host nations, professional leagues and tournaments as additional revenue streams, particularly when games are broadcast. 
In recent years, augmented reality technology has been used to replace painting to protect the surface or save costs on painting fields, producing a similar effect for broadcast albeit sometimes with poorer results. In-goal areas The in-goal areas sit behind the goal lines, equivalent to American football's "end zones". The in-goal areas must be between 6 metres (7 yards) and 22 metres (25 yards) deep and cover the full width of the field. A ball grounded in this area by an attacking player will generally result in a try being awarded, unless there has been a previous infringement or the player has gone out-of-bounds whilst in possession of the ball. Perimeter area The perimeter area is considered "out-of-bounds" for the ball and the players, normally resulting in the non-infringing team receiving possession of the ball at a restart. The perimeter area can be divided into two areas: "Touch": The perimeter area beyond the sidelines of the playing area, but between the goal lines. "Touch-in-goal": The perimeter areas behind each goal line outside of the playing area. Some may refer to a ball which crosses the dead-ball lines as "dead", rather than touch-in-goal. For the purposes of determining if a ball is "out-of-bounds" (i.e. has left the playing area), the perimeter area extends indefinitely away from the playing area. When a ball or player goes into touch, a lineout throw is generally awarded to the opposition at the spot on the sideline where they left the field. Exceptions include a kick out "on the full" (i.e. the ball did not land in the field-of-play before going into touch) in which case the lineout would still take place on the sideline but back in line with where the ball was kicked, or when a team takes a free kick from a penalty where they would retain the right to throw-in. The perimeter area should be clear and free of obstructions and heavy, solid objects which could pose a danger to players for at least 5 metres from the playing area, according to the Laws. Players often leave the playing area whether accidentally or due to being forced off of the field, sometimes sliding or needing to slow down from a sprint. Many venues at elite levels leave larger spaces around the field to accommodate fitter and faster (or heavier) players. Fixed cameras on tripods and advertising hoardings are often the main culprits for injuring players in the perimeter area. Flag posts Also required in the perimeter area are a set of 14 flag posts, each with a minimum height of 1.2 metres, marking the intersections of certain lines or other nominated distances. These are generally a plastic pole on a spring loaded or otherwise soft base, sometimes with a flag on top, covered in foam padding. Others may be moulded plastic or disposable cardboard. At lower levels, these flags may not be used, but are still specified in the Laws. Flags are placed as follows: One flag post at each intersection of the touch-in-goal lines and the goal-lines (4 flags total) One flag post at each intersection of the touch-in-goal lines and the dead-ball lines (4 flags total) One flag post positioned 2 metres outside of both of the sidelines, in line with both of the 22-metre lines (4 flags total) One flag post positioned 2 metres outside of both of the sidelines, in line with the halfway line (2 flags total) Goalposts Rugby goalposts are H-shaped and are situated in the middle of the goal lines at each end of the field. 
They consist of two vertical poles (known as "uprights"), generally made of steel or other metal but sometimes wood or a plastic, apart, connected by a horizontal "crossbar" above the ground. The minimum height for posts' uprights is , with taller posts generally seen. The bottom parts of each upright are generally wrapped in purpose-made padding to protect players from injury when coming into contact with the posts and creating another opportunity for sponsors. If an attacking player grounds the ball onto the base of the upright or post padding, a try will be awarded as the base of the upright is considered in-goal. Match structure At the beginning of the game, the captains and the referee toss a coin to decide which team will kick off first. Play then starts with a dropkick, with the players chasing the ball into the opposition's territory, and the other side trying to retrieve the ball and advance it. The dropkick must make contact with the ground before kicked. If the ball does not reach the opponent's line 10 meters away, the opposing team has two choices: to have the ball kicked off again, or to have a scrum at the centre of the half-way line. If the player with the ball is tackled, frequently a ruck will result. Games are divided into 40-minute halves, with an intermission of not more than 15 minutes in the middle. The sides exchange ends of the field after the half-time break. Stoppages for injury or to allow the referee to take disciplinary action do not count as part of the playing time, so that the elapsed time is usually longer than 80 minutes. The referee is responsible for keeping time, even when—as in many professional tournaments—he is assisted by an official time-keeper. If time expires while the ball is in play, the game continues until the ball is "dead", and only then will the referee blow the whistle to signal half-time or full-time; but if the referee awards a penalty or free-kick, the game continues. In the knockout stages of rugby competitions, most notably the Rugby World Cup, two extra time periods of 10 minutes periods are played (with an interval of 5 minutes in between) if the game is tied after full-time. If scores are level after 100 minutes then the rules call for 20 minutes of sudden-death extra time to be played. If the sudden-death extra time period results in no scoring a kicking competition is used to determine the winner. However, no match in the history of the Rugby World Cup has ever gone past 100 minutes into a sudden-death extra time period. Passing and kicking Forward passing (throwing the ball ahead to another player) is not allowed; the ball can be passed laterally or backwards. The ball tends to be moved forward in three ways—by kicking, by a player running with it or within a scrum or maul. Only the player with the ball may be tackled or rucked. A "knock-on" is committed when a player knocks the ball forward, and play is restarted with a scrum. Any player may kick the ball forward in an attempt to gain territory. When a player anywhere in the playing area kicks indirectly into touch so that the ball first bounces in the field of play, the throw-in is taken where the ball went into touch. If the player kicks directly into touch (i.e. without bouncing in-field first) from within one's own line, the lineout is taken by the opposition where the ball went into touch, but if the ball is kicked into touch directly by a player outside the line, the lineout is taken level to where the kick was taken. 
Breakdowns The aim of the defending side is to stop the player with the ball, either by bringing them to ground (a tackle, which is frequently followed by a ruck) or by contesting for possession with the ball-carrier on their feet (a maul). Such a circumstance is called a breakdown and each is governed by a specific law. Tackling A player may tackle an opposing player who has the ball by holding them while bringing them to ground. Tacklers cannot tackle above the shoulder (the neck and head are out of bounds), and the tackler has to attempt to wrap their arms around the player being tackled to complete the tackle. It is illegal to push, shoulder-charge, or to trip a player using feet or legs, but hands may be used (this being referred to as a tap-tackle or ankle-tap). Tacklers may not tackle an opponent who has jumped to catch a ball until the player has landed. Rucking and Mauling Mauls occur after a player with the ball has come into contact with an opponent but the handler remains on his feet; once any combination of at least three players have bound themselves a maul has been set. A ruck is similar to the maul, but in this case the ball has gone to ground with at least three attacking players binding themselves on the ground in an attempt to secure the ball. Set pieces Lineout When the ball leaves the side of the field, a line-out is awarded against the team which last touched the ball. Forward players from each team line up a metre apart, perpendicular to the touchline and between from the touchline. The ball is thrown from the touchline down the centre of the lines of forwards by a player (usually the hooker) from the team that did not play the ball into touch. The exception to this is when the ball went out from a penalty, in which case the side who gained the penalty throws the ball in. Both sides compete for the ball and players may lift their teammates. A jumping player cannot be tackled until they stand and only shoulder-to-shoulder contact is allowed; deliberate infringement of this law is dangerous play, and results in a penalty kick. Scrum A scrum is a way of restarting the game safely and fairly after a minor infringement. It is awarded when the ball has been knocked or passed forward, if a player takes the ball over their own try line and puts the ball down, when a player is accidentally offside or when the ball is trapped in a ruck or maul with no realistic chance of being retrieved. A team may also opt for a scrum if awarded a penalty. A scrum is formed by the eight forwards from each team crouching down and binding together in three rows, before interlocking with the opposing team. For each team, the front row consists of two props (loosehead and tighthead) either side of the hooker. The two props are typically amongst the strongest players on the team. The second row consists of two locks and the two flankers. Behind the second row is the number 8. This formation is known as the 3–4–1 formation. Once a scrum is formed the scrum-half from the team awarded the feed rolls the ball into the gap between the two front-rows known as the tunnel. The two hookers then compete for possession by hooking the ball backwards with their feet, while each pack tries to push the opposing pack backwards to help gain possession. The side that wins possession can either keep the ball under their feet while driving the opposition back, in order to gain ground, or transfer the ball to the back of the scrum where it can be picked up by the number 8 or by the scrum-half. 
Officials and offences There are three match officials: a referee, and two assistant referees. The referees are commonly addressed as "Sir". The latter, formerly known as touch judges, had the primary function of indicating when the ball had gone into "touch"; their role has been expanded and they are now expected to assist the referee in a number of areas, such as watching for foul play and checking offside lines. In addition, for matches in high level competitions, there is often a television match official (TMO; popularly called the "video referee"), to assist with certain decisions, linked up to the referee by radio. The referees have a system of hand signals to indicate their decisions. Common offences include tackling above the shoulders, collapsing a scrum, ruck or maul, not releasing the ball when on the ground, or being offside. The non-offending team has a number of options when awarded a penalty: a "tap" kick, when the ball is kicked a very short distance from hand, allowing the kicker to regather the ball and run with it; a punt, when the ball is kicked a long distance from hand, for field position; a place-kick, when the kicker will attempt to score a goal; or a scrum. Players may be sent off (signalled by a red card) or temporarily suspended ("sin-binned") for ten minutes (yellow card) for foul play or repeated infringements, and may not be replaced. Occasionally, infringements are not caught by the referee during the match and these may be "cited" by the citing commissioner after the match and have punishments (usually suspension for a number of weeks) imposed on the infringing player. Replacements and substitutions During the match, players may be replaced (for injury) or substituted (for tactical reasons). A player who has been replaced may not rejoin play unless he was temporarily replaced to have bleeding controlled; a player who has been substituted may return temporarily, to replace a player who has a blood injury or has suffered a concussion, or permanently, if he is replacing a front-row forward. In international matches, eight replacements are allowed; in domestic or cross-border tournaments, at the discretion of the responsible national union(s), the number of replacements may be nominated to a maximum of eight, of whom three must be sufficiently trained and experienced to provide cover for the three front row positions. Prior to 2016, all substitutions, no matter the cause, counted against the limit during a match. In 2016, World Rugby changed the law so that substitutions made to replace a player deemed unable to continue due to foul play by the opposition would no longer count against the match limit. This change was introduced in January of that year in the Southern Hemisphere and June in the Northern Hemisphere. Equipment The most basic items of equipment for a game of rugby union are the ball itself, a rugby shirt (also known as a "jersey"), rugby shorts, socks, and boots. The rugby ball is oval in shape (technically a prolate spheroid), and is made up of four panels. The ball was historically made of leather, but in the modern era most games use a ball made from a synthetic material. World Rugby lays out specific dimensions for the ball, in length, in circumference of length and in circumference of width. Rugby boots have soles with studs to allow grip on the turf of the pitch. The studs may be either metal or plastic but must not have any sharp edges or ridges. Protective equipment is optional and strictly regulated. 
The most common items are mouthguards, which are worn by almost all players, and are compulsory in some rugby-playing nations. Other protective items that are permitted include head gear; thin (not more than 10 mm thick), non-rigid shoulder pads and shin guards; which are worn underneath socks. Bandages or tape can be worn to support or protect injuries; some players wear tape around the head to protect the ears in scrums and rucks. Female players may also wear chest pads. Although not worn for protection, some types of fingerless mitts are allowed to aid grip. It is the responsibility of the match officials to check players' clothing and equipment before a game to ensure that it conforms to the laws of the game. Governing bodies The international governing body of rugby union (and associated games such as sevens) is World Rugby (WR). The WR headquarters are in Dublin, Ireland. WR, founded in 1886, governs the sport worldwide and publishes the game's laws and rankings. As of February 2014, WR (then known as the IRB, for International Rugby Board) recorded 119 unions in its membership, 101 full members and 18 associate member countries. According to WR, rugby union is played by men and women in over 100 countries. WR controls the Rugby World Cup, the Women's Rugby World Cup, Rugby World Cup Sevens, HSBC Sevens Series, HSBC Women's Sevens Series, World Under 20 Championship, World Under 20 Trophy, Nations Cup and the Pacific Nations Cup. WR holds votes to decide where each of these events are to be held, except in the case of the Sevens World Series for men and women, for which WR contracts with several national unions to hold individual events. Six regional associations, which are members of WR, form the next level of administration; these are: Rugby Africa, formerly Confederation of African Rugby (CAR) Asia Rugby, formerly Asian Rugby Football Union (ARFU) Rugby Americas North, formerly North America Caribbean Rugby Association (NACRA) Rugby Europe, previously Fédération Internationale de Rugby Amateur – Association Européenne de Rugby (FIRA-AER) Oceania Rugby, formerly Federation of Oceania Rugby Unions (FORU) Sudamérica Rugby, formerly Confederación Sudamericana de Rugby (South American Rugby Confederation, or CONSUR) SANZAAR (South Africa, New Zealand, Australia and Argentina Rugby) is a joint venture of the South African Rugby Union, New Zealand Rugby, Rugby Australia and the Argentine Rugby Union (UAR) that operates Super Rugby and The Rugby Championship (formerly the Tri Nations before the entry of Argentina). Although UAR initially had no representation on the former SANZAR board, it was granted input into the organisation's issues, especially with regard to The Rugby Championship, and became a full SANZAAR member in 2016 (when the country entered Super Rugby). National unions oversee rugby union within individual countries and are affiliated to WR. Since 2016, the WR Council has 40 seats. A total of 11 unions—the eight foundation unions of England, Scotland, Ireland, Wales, Australia, New Zealand, South Africa and France, plus Argentina, and —have two seats each. In addition, the six regional associations have two seats each. Four more unions—, , and the USA—have one seat each. Finally, the chairman and Vice Chairman, who usually come from one of the eight foundation unions (although the current Vice Chairman, Agustín Pichot, is with the non-foundation Argentine union) have one vote each. 
Global reach The earliest countries to adopt rugby union were England, the country of inception, and the other three Home Nations, Scotland, Ireland and Wales. The spread of rugby union as a global sport has its roots in the exporting of the game by British expatriates, military personnel, and overseas university students. The first rugby club in France was formed by British residents in Le Havre in 1872, while the next year Argentina recorded its first game: 'Banks' v 'City' in Buenos Aires. Seven countries have adopted rugby union as their de facto national sport; they are Fiji, Georgia, Madagascar, New Zealand, Samoa, Tonga and Wales. Oceania A rugby club was formed in Sydney, New South Wales, Australia in 1864; while the sport was said to have been introduced to New Zealand by Charles Monro in 1870, who played rugby while a student at Christ's College, Finchley. Several island nations have embraced the sport of rugby. Rugby was first played in Fiji circa 1884 by European and Fijian soldiers of the Native Constabulary at Ba on Viti Levu island. Fiji then sent their first overseas team to Samoa in 1924, who in turn set up their own union in 1924. Along with Tonga, other countries to have national rugby teams in Oceania include the Cook Islands, Niue, Papua New Guinea and Solomon Islands. North America and Caribbean In North America a club formed in Montreal in 1868, Canada's first club. The city of Montreal also played its part in the introduction of the sport in the United States, when students of McGill University played against a team from Harvard University in 1874. The two variants of gridiron football — Canadian football and, to a lesser extent, American football — were once considered forms of rugby football but are seldom now referred to as such. In fact, the governing body of Canadian football, Football Canada, was known as the Canadian Rugby Union (CRU) as late as 1967, more than fifty years after the sport parted ways with the established rules of rugby union. The Grey Cup, the trophy awarded to the victorious team playing in the namesake championship of the professional Canadian Football League (CFL), was originally awarded to the champion of the CRU. The two strongest leagues in the CRU, the Interprovincial Rugby Football Union in Eastern Canada and the Western Interprovincial Football Union in Western Canada, evolved into the present day CFL. Although the exact date of arrival of rugby union in Trinidad and Tobago is unknown, their first club Northern RFC was formed in 1923, a national team was playing by 1927 and due to a cancelled tour to British Guiana in 1933, switched their venue to Barbados; introducing rugby to the island. Other Atlantic countries to play rugby union include Jamaica and Bermuda. Rugby union is the fastest growing college sport and sport in general in the USA. Major League Rugby is the professional Rugby union competition in the US and Canada. Europe The growth of rugby union in Europe outside the 6 Nations countries in terms of playing numbers, attendances, and viewership has been sporadic. Historically, British and Irish home teams played the Southern Hemisphere teams of Australia, New Zealand, and South Africa, as well as France. The rest of Europe were left to play amongst themselves. 
During a period when it had been isolated by the British and Irish unions, France, lacking international competition, became the only European team from the top tier to regularly play the other European countries, mainly Belgium, the Netherlands, Germany, Spain, Romania, Poland, Italy and Czechoslovakia. In 1934, at the instigation of the French Rugby Federation, FIRA (Fédération Internationale de Rugby Amateur) was formed to organise rugby union outside the authority of the IRFB; its founding members were France and several other continental European nations. Other European rugby-playing nations of note include Russia, whose first officially recorded match was an encounter between Dynamo Moscow and the Moscow Institute of Physical Education in 1933. Rugby union in Portugal also took hold between the First and Second World Wars, with a Portuguese national XV set up in 1922 and an official championship started in 1927. In 1999, FIRA agreed to place itself under the auspices of the IRB, transforming itself into a strictly European organising body. Accordingly, it changed its name to FIRA–AER (Fédération Internationale de Rugby Amateur – Association Européenne de Rugby). It adopted its current name of Rugby Europe in 2014.

South America

Although Argentina is the best-known rugby-playing nation in South America, having founded the Argentine Rugby Union in 1899, several other countries on the continent have a long history. Rugby had been played in Brazil since the end of the 19th century, but the game was played regularly only from 1926, when São Paulo beat Santos in an inter-city match. It took Uruguay several aborted attempts to adopt rugby, led mainly by the efforts of the Montevideo Cricket Club; these efforts succeeded in 1951 with the formation of a national league and four clubs. Other South American countries that formed a rugby union include Chile (1948) and Paraguay (1968). Súper Liga Americana de Rugby is the professional rugby union competition in South America.

Asia

Many Asian countries have a tradition of playing rugby dating from the days of the British Empire.
At the 2007 tournament, Argentina finished first in the "pool of death" and third overall. The attention from Argentina's performance led to Argentina participating in SANZAAR and the professionalization of rugby in Argentina. The 2011 tournament was awarded to New Zealand in November 2005, ahead of bids from Japan and South Africa. The All Blacks reclaimed their place atop the rugby world with a narrow 8–7 win over France in the 2011 final. The opening weekend of the 2015 tournament, hosted by England, generated the biggest upset in Rugby World Cup history when Japan, who had not won a single World Cup match since 1991, defeated heavily favoured South Africa. New Zealand once again won the final, this time against Australia. In doing so, they became the first team in World Cup history to win three titles, as well as the first to successfully defend a title. Japan's hosting of the 2019 World Cup marked the first time the tournament had been held outside the traditional rugby strongholds; Japan won all four of their pool matches to top their group and qualify for the quarter-finals for the first time. The tournament saw South Africa claim their third trophy, matching New Zealand for the most Rugby World Cup titles; South Africa defeated England 32–12 in the final. Starting in 2021, gender designations will be removed from the titles of the men's and women's World Cups. Accordingly, all future World Cups for men and women will officially bear the "Rugby World Cup" name. The first tournament to be affected by the new policy will be the next women's tournament, to be held in New Zealand in 2021, which will officially be titled "Rugby World Cup 2021".

Trophy

Winners of the Rugby World Cup are presented with the Webb Ellis Cup, named after William Webb Ellis. The trophy is also referred to simply as the Rugby World Cup. The trophy was chosen in 1987 for use in the competition, having been created in 1906 by Garrard's Crown Jewellers. The trophy is restored after each tournament by fellow Royal Warrant holder Thomas Lyte. The words 'The International Rugby Football Board' and 'The Webb Ellis Cup' are engraved on the face of the cup. It stands thirty-eight centimetres high, is made of gilded silver, and is supported by two cast scroll handles, one with the head of a satyr and the other with the head of a nymph. In Australia the trophy is colloquially known as "Bill" — a reference to William Webb Ellis.

Selection of hosts

Tournaments are organised by Rugby World Cup Ltd (RWCL), which is itself owned by World Rugby. The selection of the host is decided by a vote of World Rugby Council members. The voting procedure is managed by a team of independent auditors, and the voting is kept secret. The host nation is generally selected five or six years before the competition. The tournament has been hosted by multiple nations; for example, the 1987 tournament was co-hosted by Australia and New Zealand. World Rugby requires that the hosts have a venue with a capacity of at least 60,000 spectators for the final. Host nations sometimes construct or upgrade stadia in preparation for the World Cup, such as Millennium Stadium, purpose-built for the 1999 tournament, and Eden Park, upgraded for 2011. The first country outside the traditional rugby nations of SANZAAR or the Six Nations to be awarded hosting rights was 2019 host Japan. France will host the 2023 tournament.
Tournament growth

Media coverage

Organizers of the Rugby World Cup, as well as the Global Sports Impact, state that the Rugby World Cup is the third-largest sporting event in the world, behind only the FIFA World Cup and the Olympics, although other sources question whether this is accurate. Reports emanating from World Rugby and its business partners have frequently touted the tournament's media growth, with cumulative worldwide television audiences of 300 million for the inaugural 1987 tournament, 1.75 billion in 1991, 2.67 billion in 1995, 3 billion in 1999, 3.5 billion in 2003, and 4 billion in 2007. The 4 billion figure was widely dismissed, since the entire global television audience is estimated to be only about 4.2 billion. Independent reviews have also called into question the methodology of those growth estimates, pointing to factual inconsistencies. The event's supposed drawing power outside a handful of rugby strongholds was also downplayed significantly, with an estimated 97 percent of the 33 million average audience for the 2007 final coming from Australasia, South Africa, the British Isles and France. Other sports have been accused of exaggerating their television reach over the years; such claims are not exclusive to the Rugby World Cup. While the event's global popularity remains a matter of dispute, high interest in traditional rugby nations is well documented. The 2003 final, between Australia and England, became the most watched rugby union match in the history of Australian television.

Attendance

† Typhoon Hagibis caused three group-stage matches to be cancelled permanently. As a result, only 45 of the scheduled 48 matches were played at the 2019 Rugby World Cup.

Revenue

Notes: The host union keeps the revenue from gate receipts. World Rugby, through RWCL, receives revenue from sources including broadcasting rights, sponsorship and tournament fees.

Results

Tournaments

Performance of nations

Twenty-five nations have participated at the Rugby World Cup (excluding qualifying tournaments). The only nations to host and win a tournament are New Zealand (1987 and 2011) and South Africa (1995). The performance of other host nations includes England (1991 final hosts) and Australia (2003 hosts) both finishing runners-up, while France (2007 hosts) finished fourth, and Wales (1999 hosts) and Japan (2019 hosts) reached the quarter-finals. Wales became the first host nation to be eliminated at the pool stage, in 1991, while England became the first solo host nation to be eliminated at the pool stage, in 2015. Of the twenty-five nations that have participated in at least one tournament, eleven have never missed a tournament.

Team records

(a) South Africa was excluded from the first two tournaments due to a sporting boycott during the apartheid era.

Records and statistics

The record for most points overall is held by English player Jonny Wilkinson, who scored 277 during his World Cup career. New Zealand All Black Grant Fox holds the record for most points in one competition, with 126 in 1987; Jason Leonard of England holds the record for most World Cup matches: 22 between 1991 and 2003. All Black Simon Culhane holds the record for most points in a match by one player, 45, as well as the record for most conversions in a match, 20. All Black Marc Ellis holds the record for most tries in a match, six, which he scored against Japan in 1995.
Twelve teams qualify automatically based on their performance in the previous World Cup — the top three teams in each of the four pools of the previous tournament qualify for the next tournament as seeded teams. The qualification system for the remaining eight places is region-based: counting both automatic qualifiers and regional qualifiers, a total of eight berths are allocated to Europe, five to Oceania, three to the Americas, two to Africa and one to Asia. The last place is determined by an intercontinental play-off.

Tournament

The tournament involves twenty nations competing over six weeks. There are two stages: a pool stage, followed by a knockout stage. Nations are divided into four pools, A through to D, of five nations each. The teams are seeded based on the World Rankings. The four highest-ranked teams are drawn into pools A to D. The next four highest-ranked teams are then drawn into pools A to D, followed by the next four. The remaining positions in each pool are filled by the qualifiers. Nations play four pool games, playing each of their pool members once. A bonus points system is used during pool play (a short illustrative sketch of this arithmetic appears below). If two or more teams are level on points, a system of criteria determines the higher ranked.

Eight teams — the winner and runner-up from each of the four pools — enter the knockout stage. The knockout stage consists of quarter-finals and semi-finals, and then the final. The winner of each pool is placed against a runner-up of a different pool in a quarter-final. The winner of each quarter-final goes on to the semi-finals, and the respective winners proceed to the final. The losers of the semi-finals contest third place, in a match called the 'Bronze Final'. If a match in the knockout stage ends in a draw, the winner is determined through extra time. If that fails, the match goes into sudden death, and the next team to score any points is the winner.

History

Beginnings

Prior to the Rugby World Cup, there was no truly global rugby union competition, but there were a number of other tournaments. One of the oldest is the annual Six Nations Championship, which started in 1883 as the Home Nations Championship, a tournament between England, Ireland, Scotland and Wales. It expanded to the Five Nations in 1910, when France joined the tournament. France did not participate from 1931 to 1939, during which period it reverted to a Home Nations championship. In 2000, Italy joined the competition, which became the Six Nations.

Rugby union was also played at the Summer Olympic Games, first appearing at the 1900 Paris games and subsequently at London in 1908, Antwerp in 1920, and Paris again in 1924. France won the first gold medal, then Australasia, with the last two being won by the United States. However, rugby union ceased to be on the Olympic programme after 1924.

The idea of a Rugby World Cup had been suggested on numerous occasions going back to the 1950s, but it met with opposition from most unions in the IRFB. The idea resurfaced several times in the early 1980s, with the Australian Rugby Union (ARU; now known as Rugby Australia) in 1983 and the New Zealand Rugby Union (NZRU; now known as New Zealand Rugby) in 1984 independently proposing the establishment of a world cup. A proposal was again put to the IRFB in 1985 and this time passed 10–6. The delegates from Australia, France, New Zealand and South Africa all voted for the proposal, and the delegates from Ireland and Scotland against; the English and Welsh delegates were split, with one from each country for and one against.
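Returning to the pool stage described above: as a concrete illustration of the bonus-points arithmetic, here is a minimal sketch in C of how one team's competition points from a single pool match might be tallied. The values used (4 for a win, 2 for a draw, a bonus point for scoring four or more tries, and a bonus point for losing by seven or fewer) follow the standard rugby union bonus-points scheme; the text above does not spell the values out, so they, like the function name, should be read as an illustrative assumption rather than as the official tournament regulations.

/* Illustrative tally of one team's competition points from a single
 * pool match, assuming the standard rugby union bonus-points values. */
int pool_match_points(int points_for, int points_against, int tries_for)
{
    int pts = 0;

    if (points_for > points_against)
        pts += 4;                              /* win */
    else if (points_for == points_against)
        pts += 2;                              /* draw */
    else if (points_against - points_for <= 7)
        pts += 1;                              /* narrow-loss bonus */

    if (tries_for >= 4)
        pts += 1;                              /* try bonus */

    return pts;
}

A team's pool total is then the sum of such match points over its four pool games, and it is to these totals that the tie-breaking criteria are applied.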
The inaugural tournament, jointly hosted by Australia and New Zealand, was held in May and June 1987, with sixteen nations taking part. The inaugural World Cup did not involve any qualifying process; instead, seven of the sixteen places were automatically filled by eligible International Rugby Football Board (IRFB, now World Rugby) member nations, and the rest were filled by invitation. New Zealand became the first-ever champions, defeating France 29–9 in the final.

The subsequent 1991 tournament was hosted by England, with matches played throughout Britain, Ireland and France. This tournament saw the introduction of qualifying: eight places were allocated to the quarter-finalists from 1987, and the remaining eight were decided by a qualifying tournament involving thirty-five nations. Australia won the second tournament, defeating England 12–6 in the final.

In 1992, eight years after their last official series, South Africa hosted New Zealand in a one-off test match. The resumption of international rugby in South Africa came after the dismantling of the apartheid system, and was undertaken only with the permission of the African National Congress. With their return to test rugby, South Africa were selected to host the 1995 Rugby World Cup. After upsetting Australia in the opening match, South Africa continued to advance through the tournament until they met New Zealand in the final. After a tense final that went into extra time, South Africa emerged 15–12 winners, with then-President Nelson Mandela, wearing a Springbok jersey, presenting the trophy to South Africa's captain, Francois Pienaar.

Professional era

The 1999 tournament was hosted by Wales, with matches also being held throughout the rest of the United Kingdom, Ireland and France. The tournament included a repechage system alongside specific regional qualifying places, and the number of participating nations was increased from sixteen to twenty, where it has remained to date. Australia claimed their second title, defeating France in the final.

The combination of the sport turning professional after 1995 and the increase in teams from sixteen to twenty led to a number of remarkably lopsided results in both the 1999 and 2003 tournaments, with two matches in each tournament seeing a team score over 100 points; Australia's 142–0 win over Namibia in 2003 stands as the most lopsided score in Rugby World Cup history.

In 2003 and 2007, the qualifying format allowed eight of the twenty available positions to be filled automatically by the eight quarter-finalists of the previous tournament. The remaining twelve positions were filled by continental qualifying: ten teams qualified directly through continental competitions, and the last two places were allocated through a cross-continental repechage.

The 2003 event was hosted by Australia, although it was originally intended to be held jointly with New Zealand. England emerged as champions, defeating Australia in extra time; England's win broke the southern hemisphere's dominance in the event. Such was the celebration of England's victory that an estimated 750,000 people gathered in central London to greet the team, making the day the largest sporting celebration of its kind ever in the United Kingdom. The 2007 competition was hosted by France, with matches also being held in Wales and Scotland.
South Africa claimed their second title by defeating defending champions England 15–6. The biggest story of the tournament, however, was Argentina, who racked up wins against some of the top European teams — France, Ireland and Scotland.
In mathematical logic, the Peano axioms (or Peano postulates or Dedekind–Peano axioms) are axioms for the natural numbers presented in the 19th century by the German mathematician Richard Dedekind and by the Italian mathematician Giuseppe Peano. The Peano axioms define the natural numbers by means of a recursive successor function, with addition and multiplication then defined as recursive functions.

Example: Proof procedure

Another interesting example is the set of all "provable" propositions in an axiomatic system, defined in terms of a proof procedure which is inductively (or recursively) defined as follows:

If a proposition is an axiom, it is a provable proposition.
If a proposition can be derived from true reachable propositions by means of inference rules, it is a provable proposition.
The set of provable propositions is the smallest set of propositions satisfying these conditions.

Finite subdivision rules

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard "middle thirds" technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.

Functional recursion

A function may be recursively defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this case F(0) = 0 and F(1) = 1. A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot be expressed without recursion.

Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction — a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : N → X (where N denotes the set of natural numbers including zero) such that F(0) = a and F(n + 1) = f(F(n)) for any natural number n. For example, taking X = N, a = 0 and f(x) = x + 1 yields the identity function F(n) = n.

Proof of uniqueness

Take two functions F and G such that F(0) = a, G(0) = a, F(n + 1) = f(F(n)) and G(n + 1) = f(G(n)), where a is an element of X. It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base case: F(0) = a = G(0), so the equality holds for n = 0.
Inductive step: Suppose F(k) = G(k) for some k. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1). Hence F(k) = G(k) implies F(k + 1) = G(k + 1).

By induction, F(n) = G(n) for all natural numbers n.

In computer science

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming.
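To make the contrast concrete, here is a minimal sketch in C using the Fibonacci sequence defined above; the function names are illustrative assumptions, and the snippet is meant only to exhibit the two styles, not code from the original text.

/* Top-down recursion, mirroring F(n) = F(n - 1) + F(n - 2) directly.
 * The same subproblems are recomputed many times, so the running
 * time grows exponentially in n. */
unsigned long fib_recursive(unsigned int n)
{
    if (n < 2)
        return n;                 /* base cases: F(0) = 0, F(1) = 1 */
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Bottom-up dynamic programming: solve the smallest instances first
 * and build upward, computing each F(k) exactly once. */
unsigned long fib_bottom_up(unsigned int n)
{
    unsigned long prev = 0, curr = 1;     /* F(0) and F(1) */
    if (n == 0)
        return 0;
    for (unsigned int k = 2; k <= n; k++) {
        unsigned long next = prev + curr; /* F(k) = F(k-1) + F(k-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}

The recursive version embodies the top-down, divide-and-conquer reading of the definition, while the loop embodies the bottom-up, dynamic-programming reading described next.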
Dynamic programming serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n)
{
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

The function calls itself recursively on a smaller version of the input (n − 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations which define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition (e.g., a closed-form expression).

Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually the simplicity of instructions. The main disadvantage is that the memory usage of recursive algorithms may grow very quickly, rendering them impractical for larger instances.

In biology

Shapes that seem to have been created by recursive processes sometimes appear in plants and animals, such as in branching structures in which one large part branches out into two or more similar smaller parts. One example is Romanesco broccoli.

In art

The Russian Doll or Matryoshka doll is a physical artistic example of the recursive concept.
recursion." An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is." Recursive acronyms are other examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "WINE Is Not an Emulator" GNU stands for "GNU's not Unix", and SPARQL denotes the "SPARQL Protocol and RDF Query Language". In mathematics Recursively defined sets Example: the natural numbers The canonical example of a recursively defined set is given by the natural numbers: 0 is in if n is in , then n + 1 is in The set of natural numbers is the smallest set satisfying the previous two properties. In mathematical logic, the Peano axioms (or Peano postulates or Dedekind–Peano axioms), are axioms for the natural numbers presented in the 19th century by the German mathematician Richard Dedekind and by the Italian mathematician Giuseppe Peano. The Peano Axioms define the natural numbers referring to a recursive successor function and addition and multiplication as recursive functions. Example: Proof procedure Another interesting example is the set of all "provable" propositions in an axiomatic system that are defined in terms of a proof procedure which is inductively (or recursively) defined as follows: If a proposition is an axiom, it is a provable proposition. If a proposition can be derived from true reachable propositions by means of inference rules, it is a provable proposition. The set of provable propositions is the smallest set of propositions satisfying these conditions. Finite subdivision rules Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard `middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision. Functional recursion A function may be recursively defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must be reducible to non-recursively defined values: in this case F(0) = 0 and F(1) = 1. A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, cannot be expressed without recursion. Proofs involving recursive definitions Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction — a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science. Recursive optimization Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step). The recursion theorem In set theory, this is a theorem guaranteeing that recursively defined functions exist. 
In later life, Byrd completely renounced racism and segregation, and he spoke in opposition to the Iraq War. Renowned for his knowledge of Senate precedent and parliamentary procedure, Byrd wrote a four-volume history of the Senate in later life. Near the end of his life, Byrd was in declining health and was hospitalized several times. He died in office on June 28, 2010, at the age of 92, and was buried at Columbia Gardens Cemetery in Arlington, Virginia.

Background

Robert Byrd was born on November 20, 1917, as Cornelius Calvin Sale Jr. in North Wilkesboro, North Carolina, to Cornelius Calvin Sale and his wife Ada Mae (Kirby). When he was ten months old, his mother died on Armistice Day during the 1918 flu pandemic. Byrd was the youngest of four children; in accordance with his mother's wishes, his father dispersed the children among relatives. Calvin Jr. was adopted by his biological father's sister and her husband, Vlurma and Titus Byrd, who changed his name to Robert Carlyle Byrd and raised him in the coal mining region of southern West Virginia, primarily in the coal town of Stotesbury, West Virginia. Robert Byrd's biological father Calvin Sale went on to have four more children with his second wife, Ola (Pruitt) Sale. Byrd was educated in the public schools of Stotesbury. Byrd played the violin in the Mark Twain School orchestra and the bass drum in the Mark Twain High School marching band. He was the valedictorian of his 1934 graduating class at Stotesbury's Mark Twain High School.

Marriage and children

On May 29, 1936, Byrd married Erma Ora James (June 12, 1917 – March 25, 2006), who was born to a coal mining family in Floyd County, Virginia. Her family moved to Raleigh County, West Virginia, where she met Byrd when they attended the same high school. Robert Byrd had two daughters (Mona Byrd Fatemi and Marjorie Byrd Moore), six grandchildren, and seven great-grandchildren.

Ku Klux Klan

In the early 1940s, Byrd recruited 150 of his friends and associates to create a new chapter of the Ku Klux Klan in Sophia, West Virginia. As a young boy, Byrd had witnessed his adoptive father walk in a Klan parade in Matoaka, West Virginia. While growing up, Byrd had heard that "the Klan defended the American way of life against race mixers and communists". He then wrote to Joel L. Baskin, Grand Dragon of the Realm of Virginia, West Virginia, Maryland, and Delaware, who responded that he would come and organize a chapter when Byrd had recruited 150 people. It was Baskin who told Byrd, "You have a talent for leadership, Bob ... The country needs young men like you in the leadership of the nation." Byrd later recalled, "Suddenly lights flashed in my mind! Someone important had recognized my abilities! I was only 23 or 24 years old, and the thought of a political career had never really hit me. But strike me that night, it did." Byrd became a recruiter and leader of his chapter. When it came time to elect the top officer (Exalted Cyclops) in the local Klan unit, Byrd won unanimously. In December 1944, Byrd wrote to segregationist Mississippi Senator Theodore G. Bilbo, vowing that he would never fight in the armed forces "with a Negro by my side." In 1946, Byrd wrote a letter to Samuel Green, the Ku Klux Klan's Grand Wizard, stating, "The Klan is needed today as never before, and I am anxious to see its rebirth here in West Virginia and in every state in the nation." The same year, he was encouraged to run for the West Virginia House of Delegates by the Klan's grand dragon; Byrd won, and took his seat in January 1947.
However, during his campaign for the United States House of Representatives in 1952, he announced that "after about a year, I became disinterested, quit paying my dues, and dropped my membership in the organization", and that in the nine years that had followed he had never been interested in the Klan. He said he had joined the Klan because he felt it offered excitement and was anti-communist, but he also suggested his participation "reflected the fears and prejudices" of the time. Byrd later called joining the KKK "the greatest mistake I ever made." In 1997, he told an interviewer he would encourage young people to become involved in politics but also warned, "Be sure you avoid the Ku Klux Klan. Don't get that albatross around your neck. Once you've made that mistake, you inhibit your operations in the political arena." In his last autobiography, Byrd explained that he was a KKK member because he "was sorely afflicted with tunnel vision—a jejune and immature outlook—seeing only what I wanted to see because I thought the Klan could provide an outlet for my talents and ambitions." Byrd also said in 2005, "I know now I was wrong. Intolerance had no place in America. I apologized a thousand times ... and I don't mind apologizing over and over again. I can't erase what happened."

Early career

Byrd worked as a gas station attendant, a grocery store clerk, a shipyard welder during World War II, and a butcher before he won a seat in the West Virginia House of Delegates in 1946, representing Raleigh County from 1947 to 1950. Byrd became a local celebrity after a radio station in Beckley began broadcasting his "fiery fundamentalist lessons." In 1950, he was elected to the West Virginia Senate, where he served from December 1950 to December 1952. In 1951, Byrd was among the official witnesses of the execution of Harry Burdette and Fred Painter, the first use of the electric chair in West Virginia. In 1965 the state abolished capital punishment, the last execution having occurred in 1959.

Continued education

Early in his career Byrd attended Beckley College, Concord College, Morris Harvey College, Marshall College, and George Washington University Law School, and joined the Tau Kappa Epsilon fraternity. Byrd began night classes at American University Washington College of Law in 1953, while a member of the United States House of Representatives. He earned his JD cum laude a decade later, by which time he was a U.S. Senator. President John F. Kennedy spoke at the commencement ceremony on June 10, 1963, and presented the graduates their diplomas, including Byrd's. Byrd completed law school in an era when undergraduate degrees were not a requirement; he later decided to complete his Bachelor of Arts degree in political science, and in 1994 he graduated summa cum laude from Marshall University.

Congressional service

In 1952, Byrd was elected to the United States House of Representatives for West Virginia's 6th congressional district, succeeding E. H. Hedrick, who retired from the House to make an unsuccessful run for the Democratic nomination for governor. Byrd was re-elected twice from this district, anchored in Charleston and also including his home in Sophia, serving from January 3, 1953, to January 3, 1959. Byrd defeated Republican incumbent W. Chapman Revercomb for the United States Senate in 1958; Revercomb's record of support for civil rights had become an issue, playing in Byrd's favor. Byrd was re-elected to the Senate eight times.
He was West Virginia's junior senator for his first four terms; his colleague from 1959 to 1985 was Jennings Randolph, who had been elected on the same day as Byrd, in a special election to fill the seat of the late Senator Matthew Neely. While Byrd faced some vigorous Republican opposition in his career, his last serious electoral opposition came in 1982, when he was challenged by freshman Congressman Cleve Benedict. Despite his tremendous popularity in the state, Byrd ran unopposed only once, in 1976. On three other occasions—in 1970, 1994 and 2000—he won all 55 of West Virginia's counties. In his re-election bid in 2000, he won all but seven precincts. Congresswoman Shelley Moore Capito, the daughter of one of Byrd's longtime foes, former governor Arch Moore Jr., briefly considered a challenge to Byrd in 2006 but decided against it. Capito's district covered much of the territory Byrd had represented in the U.S. House.

In the 1960 Democratic presidential primaries, Byrd, a close Senate ally of Lyndon B. Johnson, endorsed and campaigned for Hubert Humphrey over front-runner John F. Kennedy in the state's crucial primary. However, Kennedy won the state's primary and eventually the general election.

Public service records

Byrd was elected to a record ninth consecutive full Senate term on November 7, 2006. He became the longest-serving senator in American history on June 12, 2006, surpassing Strom Thurmond of South Carolina with 17,327 days of service. On November 18, 2009, Byrd became the longest-serving member in congressional history, with 56 years, 320 days of combined service in the House and Senate, passing Carl Hayden of Arizona. Previously, Byrd had held the record for the longest unbroken tenure in the Senate (Thurmond resigned during his first term and was re-elected seven months later). He is the only senator ever to serve more than 50 years. Including his tenure as a state legislator from 1947 to 1953, Byrd's service on the political front exceeded 60 continuous years. Byrd, who never lost an election, cast his 18,000th vote on June 21, 2007, the most of any senator in history. John Dingell broke Byrd's record as longest-serving member of Congress on June 7, 2013. Upon the death of former Florida Senator George Smathers on January 20, 2007, Byrd became the last living United States senator from the 1950s. Having taken part in the admission of Alaska and Hawaii to the union, Byrd was the last surviving senator to have voted on a bill granting statehood to a U.S. territory. At the time of Byrd's death, 14 sitting or former members of the Senate, as well as then-President Barack Obama, had not yet been born when Byrd's tenure in the Senate began.

Committee assignments

These are the committee assignments for Senator Byrd's ninth and final term:

Committee on Appropriations
  Subcommittee on Defense
  Subcommittee on Energy and Water Development
  Subcommittee on Homeland Security (chairman)
  Subcommittee on Interior, Environment, and Related Agencies
  Subcommittee on Military Construction and Veterans Affairs
  Subcommittee on Transportation, Housing and Urban Development, and Related Agencies
Committee on Armed Services
  Subcommittee on Emerging Threats and Capabilities
  Subcommittee on Readiness and Management Support
  Subcommittee on Strategic Forces
Committee on the Budget
Committee on Rules and Administration

Filibuster of the Civil Rights Act of 1964

Byrd was a member of the wing of the Democratic Party that opposed federally mandated desegregation and civil rights.
Despite his early career in the KKK, Byrd was linked to such senators as John C. Stennis, J. William Fulbright and George Smathers, who based their segregationist positions on their view of states' rights, in contrast to senators like James Eastland, who held a reputation as a committed racist. Byrd joined with other Democratic senators to filibuster the Civil Rights Act of 1964, personally filibustering the bill for 14 hours, a move he later said he regretted. Despite an 83-day filibuster in the Senate, both parties voted overwhelmingly in favor of the Act (in the Senate, Democrats 46–21 and Republicans 27–6), and President Johnson signed the bill into law. Byrd cast no vote on the Voting Rights Act of 1965, and he voted against the confirmation of Thurgood Marshall to the U.S. Supreme Court. He did not sign the 1956 Southern Manifesto, and he voted for the Civil Rights Acts of 1957, 1960, and 1968, as well as the 24th Amendment to the U.S. Constitution. In 2005, Byrd told The Washington Post that his membership in the Baptist church led to a change in his views. In the opinion of one reviewer, Byrd, like other Southern and border-state Democrats, came to realize that he would have to temper "his blatantly segregationist views" and move to the Democratic Party mainstream if he wanted to play a role nationally.

Vietnam

In February 1968, Byrd questioned General Earle Wheeler during the latter's testimony to the Senate Armed Services Committee. During a White House meeting between President Johnson and congressional Democratic leaders on February 6, Byrd stated his concerns about the ongoing Vietnam War, citing the U.S.'s lack of intelligence and preparation, its underestimation of the morale and vitality of the Viet Cong, and its overestimation of the support Americans would receive from South Vietnam. President Johnson rejected Byrd's observations, reportedly replying, "Anyone can kick a barn down. It takes a good carpenter to build one."

1968 presidential election

During the 1968 Democratic Party presidential primaries, Byrd supported the incumbent president, Johnson. Of the challenger Robert F. Kennedy, Byrd said, "Bobby-come-lately has made a mistake. I won't even listen to him. There are many who liked his brother—as Bobby will find out—but who don't like him." Byrd praised Chicago Mayor Richard J. Daley's police response to protest activity at that year's Democratic National Convention, stating that the violence that resulted was the fault of the protesters, while the police had only tried to restore order. Vice President Hubert Humphrey won the presidential nomination, and Byrd campaigned for him that fall.

Leadership roles

Byrd served in the Senate Democratic leadership, succeeding George Smathers as secretary of the Senate Democratic Conference from 1967 to 1971. In 1971 he unseated Ted Kennedy to become Majority Whip, the second-highest-ranking Democrat, a post he held until 1977. Smathers recalled: "Ted was off playing. While Ted was away at Christmas, down in the islands, floating around having a good time with some of his friends, male and female, here was Bob up here calling on the phone. 'I want to do this, and would you help me?' He had it all committed so that when Teddy got back to town, Teddy didn't know what hit him, but it was already all over. That was Lyndon Johnson's style. Bob Byrd learned that from watching Lyndon Johnson." Byrd himself had told Smathers that "I have never in my life played a game of cards. I have never in my life had a golf club in my hand. I have never in life hit a tennis ball.
I have—believe it or not—never thrown a line over to catch a fish. I don't do any of those things. I have only had to work all my life. And every time you told me about swimming, I don't know how to swim."

In 1976, Byrd was the "favorite son" presidential candidate in West Virginia's primary. His easy victory gave him control of the delegation to the Democratic National Convention. As Majority Whip, Byrd had the inside track for Majority Leader, and he spent more of his time running for that post than for re-election to the Senate, as he was virtually unopposed for his fourth term. By the time the vote for Majority Leader came, his lead was so secure that his lone rival, Minnesota's Hubert Humphrey, withdrew before the balloting took place. From 1977 to 1989 Byrd was the leader of the Senate Democrats, serving as Majority Leader from 1977 to 1981 and 1987 to 1989, and as Minority Leader from 1981 to 1987.

Appropriations Committee

Byrd was known for steering federal dollars to West Virginia, one of the country's poorest states; he was called the "King of Pork" by Citizens Against Government Waste. After becoming chair of the Appropriations Committee in 1989, Byrd set a goal of securing a total of $1 billion for public works in the state. He passed that mark in 1991, and funds for highways, dams, educational institutions, and federal agency offices flowed unabated over the course of his membership. More than 30 existing or pending federal projects bear his name. He commented on his reputation for attaining funds for projects in West Virginia in August 2006, when he called himself "Big Daddy" at the dedication of the Robert C. Byrd Biotechnology Science Center. Examples of this ability to claim funds and projects for his state include the Federal Bureau of Investigation's repository for computerized fingerprint records as well as several United States Coast Guard computing and office facilities.

Parliamentary expertise

Byrd was also known for his use of parliamentary procedure. He frustrated Republicans with his encyclopedic knowledge of the inner workings of the Senate, particularly prior to the Reagan Revolution. From 1977 to 1979 he was described as "performing a procedural tap dance around the minority, outmaneuvering Republicans with his mastery of the Senate's arcane rules." In 1988, as majority leader, Byrd moved a call of the Senate, which was adopted by the majority present, in order to have the Sergeant-at-Arms arrest members not in attendance. One member, Robert Packwood (R-Oregon), was escorted back to the chamber by the Sergeant-at-Arms in order to obtain a quorum.

President pro tempore

As the longest-serving Democratic senator, Byrd served as President pro tempore four times when his party was in the majority: from 1989 until the Republicans won control of the Senate in 1995; for 17 days in early 2001, when the Senate was evenly split between the parties and outgoing Vice President Al Gore broke the tie in favor of the Democrats; from June 2001, when the Democrats regained the majority after Senator Jim Jeffords of Vermont left the Republican Party to become an independent; and again from 2007 until his death in 2010, as a result of the 2006 Senate elections. In this capacity, Byrd was third in the line of presidential succession at the time of his death, behind Vice President Joe Biden and House Speaker Nancy Pelosi.
Scholarships and TAH History Grants

In 1969, Byrd launched a Scholastic Recognition Award, and he also began to present a savings bond to valedictorians from high schools—public and private—in West Virginia. In 1985 Congress approved the nation's only merit-based scholarship program funded through the U.S. Department of Education, a program which Congress later named in Byrd's honor. The Robert C. Byrd Honors Scholarship Program initially comprised a one-year, $1,500 award to students with "outstanding academic achievement" who had been accepted at a college or university. In 1993, the program began providing four-year scholarships. In 2002 Byrd secured unanimous approval for a major national initiative to strengthen the teaching of "traditional American history" in K-12 public schools. Under it, the Department of Education competitively awards on the order of $50 million a year to school districts, in grants of about $500,000 or more; the money goes to teacher training programs geared to improving the knowledge of history teachers. The Continuing Appropriations Act, 2011 eliminated funding for the Robert C. Byrd Honors Scholarship Program.

Senate historian

Television cameras were first introduced to the House of Representatives on March 19, 1979, by C-SPAN. Dissatisfied that Americans saw Congress only as the House of Representatives, Byrd and others pushed to televise Senate proceedings to prevent the Senate from becoming the "invisible branch" of government, succeeding in June 1986. To help introduce the public to the inner workings of the legislative process, Byrd delivered a series of one hundred speeches based on his examination of the Roman Republic and the intent of the Framers. Byrd published a four-volume series on Senate history, The Senate: 1789–1989: Addresses on the History of the Senate. The first volume won the Henry Adams Prize of the Society for History in the Federal Government as "an outstanding contribution to research in the history of the Federal Government." He also published The Senate of the Roman Republic: Addresses on the History of Roman Constitutionalism. In 2004, Byrd received the American Historical Association's first Theodore Roosevelt-Woodrow Wilson Award for Civil Service; in 2007, he received the Friend of History Award from the Organization of American Historians. Both awards honor individuals outside the academy who have made a significant contribution to the writing and/or presentation of history. In 2014, the Byrd Center for Legislative Studies began archiving Senator Byrd's electronic correspondence and floor speeches in order to preserve these documents and make them available to the wider community.

Final-term Senate highlights

On July 19, 2007, Byrd gave a 25-minute speech in the Senate against dog fighting, in response to the indictment of football player Michael Vick. For 2007, Byrd was rated the 14th-most powerful senator, as well as the 12th-most powerful Democratic senator. On May 19, 2008, Byrd endorsed then-Senator Barack Obama for president. One week after the West Virginia Democratic primary, in which Hillary Clinton defeated Obama by 67 to 25 percent, Byrd said, "Barack Obama is a noble-hearted patriot and humble Christian, and he has my full faith and support." When asked in October 2008 about the possibility that the issue of race would influence West Virginia voters, as Obama is African American, Byrd replied, "Those days are gone. Gone!" Obama lost West Virginia (by 13%) but won the election.
On January 26, 2009, Byrd was one of three Democrats to vote against the confirmation of Timothy Geithner as United States Secretary of the Treasury (along with Russ Feingold of Wisconsin and Tom Harkin of Iowa). On February 26, 2009, Byrd was one of two Democrats to vote against the District of Columbia House Voting Rights Act of 2009, which, had it become law, would have added a voting seat in the United States House of Representatives for the District of Columbia and added a seat for Utah; he explained that he supported the intent of the legislation but regarded it as an attempt to solve with legislation an issue that required resolution by constitutional amendment. (Democrat Max Baucus of Montana also cast a "nay" vote.) Although his health was poor, Byrd was present for every crucial vote during the December 2009 senatorial healthcare debate; his vote was necessary so that Democrats could obtain cloture to break a Republican filibuster. At the final vote on December 24, 2009, Byrd referenced recently deceased Senator Ted Kennedy, a devoted proponent of the legislation, when casting his vote: "Mr. President, this is for my friend Ted Kennedy! Aye!"

Political views

Race

Byrd initially compiled a mixed record on the subjects of race relations and desegregation. While he initially voted against civil rights legislation, in 1959 he hired one of the Capitol's first Black congressional aides, and he also took steps to integrate the United States Capitol Police for the first time since Reconstruction. Beginning in the 1970s, Byrd explicitly renounced his earlier support of racial segregation. Byrd said that he regretted filibustering and voting against the Civil Rights Act of 1964 and would change his vote if he had the opportunity. Byrd also said that his views changed dramatically after his teenage grandson was killed in a 1982 traffic accident, which put him in a deep emotional valley. "The death of my grandson caused me to stop and think," said Byrd, adding that he came to realize that African Americans love their children as much as he loved his. During debate in 1983 over the passage of the law creating the Martin Luther King Jr. Day holiday, Byrd grasped the symbolism of the day and its significance to his legacy, telling members of his staff, "I'm the only one in the Senate who must vote for this bill".

Of the seven U.S. senators to vote on the confirmations of both Thurgood Marshall and Clarence Thomas to the United States Supreme Court (the others being Daniel Inouye of Hawaii, Ted Kennedy of Massachusetts, Quentin Burdick of North Dakota, Mark Hatfield of Oregon, and Fritz Hollings and Strom Thurmond of South Carolina), Byrd was the only one to vote against confirming both of the only two African-American nominees to the Court in its history. In Marshall's case, Byrd asked FBI Director J. Edgar Hoover to look into the possibility that Marshall had either connections to communists or a communist past. With respect to Thomas, Byrd stated that he was offended by Thomas's use of the phrase "high-tech lynching for uppity blacks" in his defense and that he was "offended by the injection of racism" into the hearing. He called Thomas's comments a "diversionary tactic" and said, "I thought we were past that stage". Regarding Anita Hill's sexual harassment charges against Thomas, Byrd supported Hill. Byrd joined 45 other Democrats in voting against confirming Thomas to the Supreme Court.
On March 29, 1968, Byrd criticized a Memphis, Tennessee, protest: "It was a shameful and totally uncalled for outburst of lawlessness undoubtedly encouraged to some considerable degree, at least, by his [Dr. King's] words and actions, and his presence. There is no reason for us to believe that the same destructive rioting and violence cannot, or that it will not, happen here if King attempts his so-called Poor People's March, for what he plans in Washington appears to be something on a far greater scale than what he had indicated he planned to do in Memphis."

In a March 2, 2001, interview with Tony Snow, Byrd said of race relations: "There are white niggers. I've seen a lot of white niggers in my time, if you want to use that word." Byrd's use of the term "white nigger" created immediate controversy; when asked about it, Byrd's office issued a written apology for the characterization. For the 2003–2004 session, the National Association for the Advancement of Colored People (NAACP) rated Byrd's voting record as 100% in line with the NAACP's position on the thirty-three Senate bills it evaluated; sixteen other senators received that rating. In June 2005, Byrd proposed an additional $10,000,000 in federal funding for the Martin Luther King Jr. Memorial in Washington, D.C., remarking that, "With the passage of time, we have come to learn that his Dream was the American Dream, and few ever expressed it more eloquently." Upon news of his death, the NAACP released a statement praising Byrd, saying that he "became a champion for civil rights and liberties" and "came to consistently support the NAACP civil rights agenda".

Clinton impeachment

Byrd said from the outset that the impeachment proceedings against Clinton should be taken seriously, and he harshly criticized any attempt to make light of the allegations; nevertheless, he made the motion to dismiss the charges and effectively end the matter. Even though he voted against both articles of impeachment, he was the sole Democrat to vote to censure Clinton.

LGBT rights

Byrd strongly opposed Clinton's 1993 efforts to allow homosexuals to serve in the military, and he supported efforts to limit gay marriage. In 1996, before the passage of the Defense of Marriage Act, he said, "The drive for same-sex marriage is, in effect, an effort to make a sneak attack on society by encoding this aberrant behavior in legal form before society itself has decided it should be legal. [...] Let us defend the oldest institution, the institution of marriage between male and female as set forth in the Holy Bible." Despite his previous position, he later stated his opposition to the Federal Marriage Amendment, arguing that it was unnecessary because the states already had the power to ban gay marriages. However, when the amendment came to the Senate floor, he was one of the two Democratic senators who voted in favor of cloture.

Abortion

On March 11, 1982, Byrd voted against a measure sponsored by Senator Orrin Hatch that sought to reverse Roe v. Wade and allow Congress and the individual states to adopt laws banning abortions; its passage marked the first time a congressional committee had supported an anti-abortion amendment. In 1995, Byrd voted against a ban on intact dilation and extraction, a late-term abortion procedure typically referred to by its opponents as "partial-birth abortion". In 2003, however, he voted for the Partial-Birth Abortion Ban Act, which prohibits intact dilation and extraction. Byrd also voted against the 2004 Unborn Victims of Violence Act, which recognizes a "child in utero" as a legal victim if he or she is injured or killed during the commission of a crime of violence.
Richard Nixon era

In April 1970, the Senate Judiciary Committee approved a plan to replace the Electoral College with the direct election of presidents. Byrd initially opposed direct election on the key vote, and he was one of two senators to switch votes in favor of the proposal during later votes. In April 1970, as the Senate Judiciary Committee delayed a vote on Supreme Court nominee Harry Blackmun, Byrd, noting the delays faced by the previous two Supreme Court nominees, stated that "no nomination should be voted on within 24 hours after the hearing"; he was one of the 17 committee members who went on record assuring that Blackmun's nomination would be reported favorably to the full Senate. In October 1970, Byrd sponsored an amendment protecting members of Congress and members-elect who had not yet assumed office; Byrd cited 88 political assassinations in the United States and said state law was not adequate to handle the increase in political violence. In February 1971, after Fred R. Harris and Charles Mathias requested that the Senate Rules Committee change the rules to permit the selection of committee chairmen on a basis other than seniority, Byrd indicated through his line of questioning that he saw considerable value in the seniority system. In April 1971, after Representative Hale Boggs stated that he had been wiretapped by the Federal Bureau of Investigation and called on FBI Director J. Edgar Hoover to resign, Byrd opined that Boggs's imagination was involved and called on him to reveal any "good, substantial, bona fide evidence". In April 1971, Byrd met with President Nixon, Hugh Scott, and Robert P. Griffin for a briefing, after which Byrd, Scott, and Griffin asserted they had been told by Nixon of his intent to withdraw American forces from Indochina by a specific date; White House Press Secretary Ronald L. Ziegler disputed their claims, stating that the three had not been told anything by Nixon that he had not mentioned in his speech the same day as the meeting. In April 1971, Jacob Javits, Fred R. Harris, and Charles H. Percy circulated letters to their fellow senators in an attempt to gain cosponsors for a resolution to appoint the Senate's first girl pages. Byrd maintained that the Senate was ill-equipped for girl pages, and he was among those who cited the long hours of work, the carrying of sometimes heavy documents, and the high crime rate in the Capitol area as reasons against the change. In September 1971, when Representative Richard H. Poff was under consideration by President Nixon for a Supreme Court nomination, Byrd warned Poff that his nomination could be met with opposition by liberal senators and could face a filibuster; within hours, Poff announced that he was declining the nomination. In April 1972, Senate Majority Leader Mansfield announced that he had authorized Byrd to present an amendment to the Senate setting a fixed deadline for total troop withdrawal that the Nixon administration would be obligated to meet, and that the measure would serve as an amendment to the State Department–United States Information Agency authorization bill. In April 1972, the Senate Judiciary Committee approved the nomination of Richard G. Kleindienst as United States Attorney General, Byrd being one of four Democrats to support the nomination.
On June 7, Byrd announced that he would vote against Kleindienst, saying in a news release that this was the first Nixon nomination he had not voted to confirm and that testimony at hearings investigating Kleindienst's involvement with the International Telephone and Telegraph Corporation displayed "a show of arrogance and deception and insensitivity to the people's right to know." In a May 1972 luncheon speech, Byrd criticized American newspapers for "an increasing tendency toward shoddy technical production" and observed that there was "a greater schism between the Nixon Administration and the media, at least publicly, than at any previous time in our history." In May 1972, Byrd introduced a proposal supported by the Nixon administration that would make cutting off all funding for American hostilities in Indochina conditional upon agreement on an internationally supervised cease-fire. Byrd and Nixon's supporters argued that the modification would bring the amendment more in line with the proposal President Nixon had made the previous week to withdraw all American forces from Vietnam, and it was approved in the Senate by a vote of 47 to 43. In September 1972, Edward Brooke attempted to reintroduce his war-ending amendment, which had been defeated earlier in the week, as an addendum to a clean drinking water bill, only to discover that Byrd had arranged a unanimous-consent agreement prohibiting amendments that were not relevant to the subject. Brooke charged that Byrd's agreement impaired his senatorial prerogative to introduce amendments. During the 1972 general election campaign, Democratic nominee George McGovern advocated partial amnesty for draft dodgers. Byrd responded to the position in a November speech the day before the election, without mentioning McGovern by name, saying, "How could we keep faith with the thousands of Americans we sent to Vietnam by giving a mere tap on the wrist to those who fled to Canada and Sweden?" Byrd said the welfare proposals were part of a "pernicious doctrine that the Federal Government owes a living to people who don't want to work" and chastised individuals who made personal trips to Hanoi rather than official missions as "the Ramsey Clarks in our society who attempt to deal unilaterally with the enemy." In January 1973, the Senate passed legislation containing an amendment Byrd offered requiring President Nixon to give Congress, by February 5, an accounting of all appropriated funds that he had impounded. Byrd stated that President Nixon had been required to submit reports to Congress and that he had not done so since June, leaving Congress in the dark on the matter. In February 1973, the Senate approved legislation requiring confirmation of the director and deputy director of the Office of Management and Budget in the White House, in what was seen as "another battleground for the dispute between Congress and the White House over cuts in social spending programs in the current Federal budget and in the Nixon Administration's spending request for the fiscal year 1974, which begins next July 1". The legislation contained an amendment sponsored by Byrd limiting the budget officials to a maximum term of four years before they would face another confirmation proceeding. Byrd introduced another amendment requiring all Cabinet officers to undergo reconfirmation by the Senate in the event that they were retained from one administration to the next. 
In March 1973, Byrd led Senate efforts to reject a proposal that would have made most critical committee meetings open to the public, arguing that to tamper with "the rules of the Senate is to tamper with the Senate itself" and arguing against changing "procedures which, over the long past, have contributed to stability and efficiency in the operation of the Senate." The Senate voted down the proposal 47 to 38 on March 7. On May 2, 1973, the anniversary of FBI Director J. Edgar Hoover's death, Byrd called on President Nixon to appoint a permanent successor for Hoover. In June 1973, Byrd sponsored a bill that would establish the first Tuesday in October as the date for all federal elections and mandate that states hold primary elections for federal offices between the first Tuesday in June and the first Tuesday in July. The Senate Rules Committee approved the measure on June 13, and it was sent to the Senate floor for consideration. In June 1973, along with Lloyd Bentsen, Mike Mansfield, John Tower, and Jennings Randolph, Byrd was one of five senators to switch their vote on the foreign military aid authorization bill to assure its passage after previously voting against it. In October 1973, President Nixon vetoed the request of the United States Information Agency for $208 million for fiscal year 1974 on the grounds that it contained a provision forcing the agency to provide any document or information demanded of it. The following month, Byrd introduced a bill identical to the one vetoed by Nixon, differing only in omitting the information provision as well as a ban on appropriating or spending more money than the annual budget called for; the Senate approved the legislation on November 13. In November 1973, after the Senate rejected an amendment to the National Energy Emergency Act intended to direct President Nixon to put gasoline rationing into effect on January 15, Byrd indicated that the final vote would not come for several days. In June 1974, the Senate confirmed John C. Sawhill as Federal Energy Administrator only to rescind the confirmation hours later, a direct result of James Abourezk's wish to speak out and vote against the nomination over the Nixon administration's refusal to roll back crude oil prices. Abourezk confirmed that he had asked Byrd for notice of when he could take the Senate floor to deliver his remarks. Byrd had been absent when the members present approved the nomination as part of an effort to clear the chamber's executive calendar, and the confirmation was rescinded. Nixon resignation In May 1974, the House Judiciary Committee opened impeachment hearings against President Nixon after the release of 1,200 pages of transcripts of White House conversations between him and his aides, as the administration became engulfed in the scandal that would come to be known as Watergate. That month, Byrd delivered a speech on the Senate floor opposing Nixon's potential resignation, saying it would serve only to convince the President's supporters that his enemies had driven him out of office: "The question of guilt or innocence would never be fully resolved. The country would remain polarized — more so than it is today. And confidence in government would remain unrestored." Most of the members of the Senate in attendance for the address were conservatives from both parties who shared opposition to removing Nixon from office. Byrd was among multiple conservative senators who stated that they would not ask Nixon to resign. Later that month, Republican Attorney General Elliot L. 
Richardson termed Nixon "a law and order President who says subpoenas must be answered by everyone except himself," a comment echoed by Byrd, who additionally charged President Nixon with reneging on his public pledge that the independence of the special prosecutor to pursue the Watergate investigation would not be limited without the prior approval of a majority of congressional leaders. On July 29, Byrd met with Senate Majority Leader Mike Mansfield, Minority Leader Hugh Scott, and Republican whip Robert P. Griffin in the first formal meeting of Senate leaders on the matter of President Nixon's impeachment. Byrd opposed granting Nixon immunity. The New York Times noted that, as Republican National Committee chairman George H. W. Bush issued a formal statement indicating that there was no chance of salvaging the Nixon administration, Byrd was advocating that President Nixon face some punishment for the administration's illegal activities and asserting that former vice president Spiro Agnew should have been imprisoned. The Senate leadership met throughout August 7 to discuss Nixon's fate.
Jimmy Carter era Byrd confirmed that he had spoken with United States Secretary of the Treasury G. William Miller about what Byrd called "excellent" chances that the Senate would complete work on a federal loan guarantee bill for Chrysler. In August 1980, Byrd stated that Congress was unlikely to pass a tax cut before the November elections despite the Senate being in the mood to pass one. Turkey In July 1978, Byrd introduced and endorsed a proposal by George McGovern for an amendment to repeal the 42-month-old embargo on American military assistance for Turkey that also linked any future aid for that country to progress on a negotiated settlement of the Cyprus problem. The Senate approved the amendment in a vote of 57 to 42 as part of a $2.9 billion international security assistance bill. Byrd stated that every government in the NATO alliance except Greece favored repeal of the embargo. In May 1979, Byrd stated that giving Turkey a grant should not be construed as retaliation against Greece and that aid for Turkey would improve Turkey's security in addition to that of Greece, NATO, and American allies in the Middle East. Byrd said he was encouraged by the report that the Greek and Turkish Cypriot communities had agreed to resume negotiations on the island's future, as well as by reports that progress was also being made on the reintegration of Greece into NATO. Byrd added that American military installations in Turkey were "of major importance in the monitoring of Soviet strategic activities" and would have "obvious significance" in the goal of verifying compliance by the Soviet Union with the strategic arms treaty. The Senate approved the grant for Turkey, in accordance with Byrd's wishes but against those of both President Carter and the Senate Foreign Relations Committee. Foreign policy On February 2, 1978, Byrd and Minority Leader Baker invited all other senators to join them in sponsoring two amendments to the Panama Canal neutrality treaty, the two party leaders sending copies of the amendments the Foreign Relations Committee had recommended the previous week. In January 1979, Byrd met with Chinese Deputy Prime Minister Deng Xiaoping and received assurances from Deng that China hoped to unite Taiwan with the mainland by peaceful means and would fully respect "the present realities" on the island. Byrd afterward stated that his concerns on the Taiwan question had been allayed. In June, Byrd opined that a decision by President Carter not to proceed with the new missile system would kill the strategic arms limitation treaty in the Senate. Byrd held meetings with Soviet leaders on July 3 and 4. Following their conclusion, Byrd said he was still undecided on supporting the arms pact and that there had been talks on "the need on both sides for avoidance of inflammatory rhetoric which can only be counterproductive." On September 23, Byrd stated that it was possible the Senate could complete action on the strategic arms limitation treaty that year but that a delay until the following year could result in its defeat, adding that senators might have to remain in session during Christmas to ensure the treaty was voted on before 1979's end. Byrd noted that he was opposed to the treaty being "held hostage to the Cuban situation", as American interests could be harmed in the event the treaty was defeated solely due to Soviet troops being in Cuba. 
In November, Byrd admitted that he had complained to President Carter about the Senate leadership receiving only occasional briefings on the Iranian hostage crisis and said that Carter had agreed to daily consultations for Minority Leader Howard Baker, Foreign Relations Committee chairman Frank Church, and the committee's ranking Republican, Jacob Javits. Byrd added that he did not disagree with the Carter administration's move to admit Mohammad Reza Pahlavi for hospitalization and that the same action would extend to "Ayatollah Khomeini himself if he were needing medical treatment and had a terminal illness." On December 3, Byrd told reporters that the Iranian hostage crisis was making the Senate inhospitable to a debate on the strategic arms treaty, noting that a discussion could still occur before the Senate adjourned on December 21 but that he did not believe he would call up the treaty even if given the chance. Days later, speaking to reporters, Byrd announced that there was no chance the Senate would take up debate on the strategic arms treaty that year, adding that he would see no harm in having the discussion on the treaty begin in January of the following year. 1980 Presidential election In July 1979, Senators Henry M. Jackson and George McGovern made comments expressing doubt that President Carter was assured of being the Democratic nominee in the 1980 presidential election. When asked about their comments by a reporter, Byrd referred to Jackson and McGovern as "two very strong voices and not at all to be considered men who have little background in politics" but stated it was too early to participate in "writing the political obituary of the President at this point." Byrd added that the powers of the presidency made it possible for Carter to stage a comeback and cited the events of November and December as likely to be telling of his prospects of achieving higher popularity. On May 10, 1980, Byrd called for President Carter to debate Senator Ted Kennedy, whom he complimented as having done a service for the US by raising key issues in his presidential campaign. On August 2, Byrd advocated for an open Democratic National Convention in which the delegates would not be bound to a single candidate. The endorsement was seen as a break from President Carter. In September, Byrd said that Republican presidential nominee Ronald Reagan had made comments on the war between Iran and Iraq that were a disservice to the United States and that he was engaging in "reckless political posturing" in foreign policy. George H. W. Bush era In early 1990, Byrd proposed an amendment granting special aid to coal miners who would lose their jobs in the event that Congress passed clean air legislation. Byrd was initially confident that he had the votes needed to secure its passage, but the amendment was defeated after Democrat Joe Biden, who said the measure's passage would mean an assured veto by President Bush, voted against it. Speaking to reporters after its defeat, Byrd stated that he was content with the result: "I made the supreme effort. I did everything I could and, therefore, I don't feel badly about it." Within weeks of the vote on Byrd's amendment, the Senate passed clean air legislation intended to reduce acid rain, urban smog, and airborne toxic chemicals while meeting President Bush's request for a measure that was less costly than the initial plan but still accomplished the same tasks of combating air pollution. 
Byrd was one of eleven senators to vote against the bill and said he "cannot vote for legislation that can bring economic ruin to communities throughout the Appalachian region and the Midwest." In August 1990, after the Senate passed its first major campaign finance reform bill since the Watergate era, one that would bar political action committees from federal campaigns, channel public money into congressional campaigns, and give candidates vouchers for television advertising, Byrd stated that he believed the bill would "end the money chase." Byrd authored an amendment concerning the National Endowment for the Arts that would bar the endowment from funding projects considered obscene, such as depictions of sadomasochism, homoeroticism, the sexual exploitation of children, or individuals engaged in sex acts, while also requiring grant recipients to sign a pledge swearing their compliance with the restrictions. The measure the Senate approved in October 1990 was instead a bipartisan one that loosened government restrictions on art project funding and left the courts to judge what art could be considered obscene. President Bush nominated Clarence Thomas for the Supreme Court. In October 1991, Byrd stated his belief in the credibility of Anita Hill: "I believe what she said. I did not see on that face the knotted brow of satanic revenge. I did not see a face that was contorted with hate. I did not hear a voice that was tremulous with passion. I saw the face of a woman, one of 13 in a family of Southern blacks who grew up on the farm and who belonged to the church." Byrd questioned how members of the Senate could be convinced that Thomas would serve as an objective judge when he had refused to watch Hill's testimony against him. In February 1992, the Senate turned down a Republican attempt, sponsored by John McCain and Dan Coats, to grant President Bush line-item veto authority that would have allowed him to kill projects he opposed; afterward, Byrd delivered an eight-hour address defending congressional power over spending. Byrd had written the speech two years earlier, and he had by this point steered $1.5 billion to his state. In 1992, an effort was made to pass a constitutional amendment to ensure a balanced federal budget. Byrd called the amendment "a smokescreen that will allow lawmakers to claim action against the deficit while still postponing hard budgetary decisions" and spoke to reporters about his opposition to its passage: "Once members are really informed as to the mischief this amendment could do, and the damage it could do to the country and to the Constitution, I just have faith that enough members will take a courageous stand against the amendment." The sponsor of the amendment, Paul Simon, admitted that Byrd's prediction was not off and that other senators listen "when the chairman of the Senate Appropriations Committee talks". In a June 1992 debate, Byrd argued that the United States should stop accepting immigrants who did not speak English, a comment made in response to a Bush administration plan that would enable former Soviet states to receive American assistance and allow immigrants from a variety of countries to receive welfare benefits. Byrd soon afterward apologized for the comment, saying it was due to his frustration over the federal government's inability to afford several essential services. 
Bill Clinton era In February 1994, the Senate passed a $10 billion spending bill mostly allocated to victims of the Los Angeles, California, earthquake and to military operations abroad. Bob Dole, John Kerry, John McCain, and Russ Feingold joined forces to try to persuade the Senate to cut back the deficit spending. Byrd raised a procedural point to derail an attempt by Dole to approve $50 billion in spending cuts over the following five years. McCain proposed killing highway demonstration projects with a $203 million price tag, leading Byrd to produce letters McCain had sent to the Appropriations Committee in 1991 seeking highway grants for his home state of Arizona. Byrd said that McCain "is very considerate of the taxpayers when it comes to financing projects in other states, but he supports such projects in his own state." In May 2000, Byrd and John Warner sponsored a provision threatening to withdraw American troops from Kosovo; the legislation, if enacted, would have cut off funds for troops in Kosovo after July 1, 2001, without congressional consent. The language would also have withheld 25 percent of the bill's money for Kosovo unless President Clinton certified by July 15 that European countries were living up to their promises to provide reconstruction money for the province. Byrd argued that lawmakers had never debated nor approved whether American troops should be stationed in Kosovo. The Senate Appropriations Committee approved the legislation in a 23-to-3 vote that was said to reflect "widespread concern among lawmakers about an open-ended deployment of American soldiers". In November 2000, Congress passed an amendment sponsored by Byrd diverting tariff revenues from the Treasury Department and instead allocating them to the complaining industries, the amounts involved ranging between $40 million and $200 million a year. The following month, Japan and the European Union led a group of countries in filing a joint complaint against the law with the World Trade Organization. George W. Bush era Byrd praised the nomination of John G. Roberts to fill the vacancy on the Supreme Court created by the death of Chief Justice William Rehnquist. Likewise, Byrd was one of four Democrats who supported the confirmation of Samuel Alito to replace retiring Associate Justice Sandra Day O'Connor. Like most Democrats, Byrd opposed Bush's tax cuts and his proposals to change the Social Security program. Byrd opposed the 2002 Homeland Security Act, which created the Department of Homeland Security, stating that the bill ceded too much authority to the executive branch. On May 2, 2002, Byrd charged the White House with engaging in "sophomoric political antics", citing the fact that Homeland Security Advisor Tom Ridge had briefed senators on how safe he felt the U.S. was at a location other than the Senate. He also led the opposition to Bush's bid to win back the power to negotiate trade deals that Congress cannot amend, but lost overwhelmingly. In the 108th Congress, however, Byrd won his party's top seat on the new Homeland Security Appropriations Subcommittee. In July 2004, Byrd released the New York Times best-selling book Losing America: Confronting a Reckless and Arrogant Presidency, which criticized the Bush presidency and the war in Iraq. Iraq War Byrd led a filibuster against the resolution granting President George W. 
Bush broad power to wage a "preemptive" war against Iraq, but he could not get even a majority of his own party to vote against cloture. Byrd was one of the Senate's most outspoken critics of the 2003 invasion of Iraq. Byrd anticipated the difficulty of fighting an insurgency in Iraq, voicing his concerns in remarks on March 13, 2003. On March 19, 2003, when Bush ordered the invasion after receiving congressional approval, Byrd spoke against the decision on the Senate floor. Byrd also criticized Bush on the Senate floor for his speech declaring the "end of major combat operations" in Iraq, which Bush made on the USS Abraham Lincoln. On October 17, 2003, Byrd delivered a speech expressing his concerns about the future of the nation and his unequivocal antipathy to Bush's policies. Referencing the Hans Christian Andersen children's tale The Emperor's New Clothes, Byrd said of the president: "the emperor has no clothes." Byrd further lamented the "sheep-like" behavior of the "cowed Members of this Senate" and called on them to oppose the continuation of a "war based on falsehoods." In April 2004, Byrd raised the possibility that the Bush administration had violated the law through its failure to inform leadership in Congress midway through 2002 about its use of emergency anti-terror dollars to begin preparations for an invasion of Iraq. Byrd stated that he had never been told of a shift in money, a charge reported in Bob Woodward's book Plan of Attack, and said that its validation would mean "the administration failed to abide by the law to consult with and fully inform Congress." Byrd also accused the Bush administration of stifling dissent. Of the more than 18,000 votes he cast as a senator, Byrd said he was proudest of his vote against the Iraq war resolution. Byrd also voted to tie a timetable for troop withdrawal to war funding. Gang of 14 On May 23, 2005, Byrd was one of 14 senators (who became known as the "Gang of 14") to forge a compromise on the judicial filibuster, thus securing up-or-down votes for many judicial nominees and ending the threat of the so-called nuclear option, which would have eliminated the judicial filibuster entirely. Under the agreement, the senators retained the power to filibuster a judicial nominee only in an "extraordinary circumstance." It ensured that the appellate court nominees Janice Rogers Brown, Priscilla Owen, and William Pryor would receive votes by the full Senate. Other votes In 1977, Byrd was one of five Democrats to vote against the nomination of F. Ray Marshall as United States Secretary of Labor. Marshall was opposed by conservatives in both parties because of his pro-labor positions, including support for repealing right-to-work laws. Marshall was confirmed and served until the end of Carter's term in 1981. In February 1981, as the Senate voted on giving final approval to the $50 billion increase in the debt limit, Democrats initially opposed the measure as part of an effort to elicit the highest possible number of Republican votes in its favor. Byrd then gave a signal to Democrats, and caucus members switched their votes to support the increase. President Reagan was injured during an assassination attempt in March 1981. Following the shooting, Byrd opined that the aftermath of the attempt had proven there were "holes that need to be plugged" in the Constitution's handling of the presidential line of succession in the event of a president's disability, and stated his intent to introduce legislation calling for a mandatory life sentence for anyone attempting to assassinate a president, vice president, or member of Congress. 
In March 1981, during a Capitol Hill interview, Byrd stated that the Reagan administration was promoting an economic package based on assumptions about the national economy whose difficulties might take a year for the public to see, which could then lead to a political backlash. Byrd contended that President Reagan would win congressional approval of $35 to $40 billion of the $48 billion in proposed budget cuts while having more difficulty passing his tax-cut package, citing Democratic opposition and misgivings among some Republicans as the reasons Congress would block the plan, and adding that he would be surprised if a one-year cut in rates lasted more than a year. Byrd opined that it was time for "some tax reform" that would see loopholes for the rich closed to bring in revenues, and expressed belief in the likelihood of the administration dismantling existing energy programs: "Energy programs are not as catchy now as budget cuts. But if the gas lines begin to form again, or the overseas oil gets cut off, we will have lost the time, the momentum, the money. Basically, they have a wholesale dismantlement of the energy programs we spent several years creating around here." In March 1981, during a news conference, Byrd stated that the Reagan administration had not established a coherent foreign policy. He credited conflicting statements from administration officials with having contributed to confusion in Western European capitals. Byrd also said, "We've seen these statements, and backing and filling, and the secretary of state has been kept pretty busy explaining and denying assertions and pronouncements by others, which indeed indicate that the administration has not yet got its foreign policy act together." In May 1981, Byrd announced his support for the Reagan administration's proposed budget for the fiscal year 1982 during a weekly news conference, citing his view that the "people want the President to be given a chance with his budget." Byrd added that he did not believe a balanced budget would be achieved by 1984, calling the budget "a balanced budget on paper only, made up of juggled figures produced out of thin air", and charged the administration with relying on unrealistic assumptions; his comments were seen as an indication that little opposition to the Reagan budget would come from the Democrats. In November 1981, as Senate leaders rejected the request of Senator Harrison A. Williams Jr. to introduce new evidence during the Senate's consideration of whether to expel him for his involvement in the Abscam case, Byrd and Majority Leader Baker informed Williams that he could have a lawyer present, but one who would have to remain silent. On December 2, 1981, Byrd voted in favor of an amendment to President Reagan's MX missile proposal that would divert $334 million from the silo-basing system and earmark it for further research into other methods of basing the giant missiles. The vote was seen as a rebuff of the Reagan administration. In February 1982, Byrd wrote a letter to President Reagan urging him to "withdraw the Administration's proposed fiscal 1983 budget, and resubmit a budget that provides for much lower deficits and makes use of more realistic assumptions", recalling a similar appeal he had made to President Carter in 1980 amid soaring inflation rates, after which Carter had consulted with Democrats in Congress. 
Byrd stated that he was in favor of "a document we in Congress can work with, one based on realistic assumptions, one which shows a much clearer trend toward a balanced budget." Byrd had cautious praise for a proposal by Democrat Fritz Hollings that called for a freeze on all benefit programs with the exception of food stamps, Medicare, and Medicaid, in addition to a freeze on military spending and the elimination of a pay increase for federal employees. In March 1982, Byrd announced he would introduce an amendment to the War Powers Act that would bar the president from sending combat troops to El Salvador without the approval of Congress. Byrd described the proposal as allowing the president to act independently only in the event that Americans needed to evacuate El Salvador or the United States was attacked. "It is my view that if Americans are to be asked to shed their blood in the jungles of El Salvador, all Americans should first have an opportunity to debate and carefully evaluate that action," he said. By March 1982, along with Alan Cranston, Byrd was one of two senators supporting both the measure sponsored by Henry M. Jackson and John W. Warner calling upon the United States and the Soviet Union to freeze their nuclear arsenals at "equal and sharply reduced levels" and the bill sponsored by Ted Kennedy and Mark Hatfield calling upon the two countries first to negotiate a freeze on nuclear forces at existing levels before pursuing atomic arms reductions. In January 1983, after President Reagan said during his 1983 State of the Union Address that he hoped the same bipartisan support that had produced the Social Security recommendations would guide Congress on other issues during the year, Byrd and House Majority Leader Jim Wright assailed as unfair a six-month delay in the cost-of-living increases for Social Security recipients at a time when the wealthy were being allowed to reap the benefits of the general income tax cut for a third year. Byrd stated that he did not "want a six-month delay in Social Security while leaving in place the third year of the tax cut for upper-income people" and said that Reagan's speech had been "rhetorically good, but substantively lacking in measures that would deal now with the crises that millions of people are experiencing." At the beginning of February 1983, House Democrats committed themselves "to an emergency economic assistance program that would create public service jobs, provide shelter and soup kitchens for the destitute and avert foreclosures of homes and farms." Concurrently, Byrd pledged to work with the House Democrats in developing jobs legislation, proposing that $5 to $10 billion be spent and introducing legislation intended to form a national investment corporation that would assist in underwriting faltering basic industries and starting new ones in areas of high unemployment. In March 1984, Byrd voted against a proposed constitutional amendment authorizing periods of silent prayer in public schools, and in favor of President Reagan's unsuccessful proposal for a constitutional amendment permitting organized school prayer in public schools. In June 1984, Byrd was one of five Democrats to vote against the Lawton Chiles proposal to cease MX production for a year while a study searched for a smaller, single-warhead missile. The 48-to-48 tie was broken by then-Vice President George H. W. Bush. 
In September 1986, Byrd endorsed the death penalty for some drug pushers as part of anti-drug legislation that would order President Reagan to end drug trafficking within 45 days by using the military to intercept smugglers, impose the death penalty on those pushers who intentionally cause a death as part of their operations, and provide funding for prevention, drug abuse treatment, and anti-drug law enforcement, at an estimated cost of $3 to $4 billion over three years. Byrd admitted that calling for the death penalty seemed harsh, but cautioned that children in some cases had their entire lives destroyed by drug use and that Congress had been soft for too long without seeing a change in results. In December 1986, Byrd announced that the Senate would convene a Watergate-type select committee to investigate the Iran-Contra affair the following year and that he had reached an agreement with Bob Dole for the committee to have six Democrats and five Republicans. Byrd and Dole disagreed on whether it was necessary to call Congress into a special session that month for the purpose of getting the investigative process moving. Naming members during December enabled participants to move ahead informally, selecting staff and preparing before the 100th United States Congress began. In September 1988, in response to charges by Vice President Bush's presidential campaign that Democratic nominee Michael Dukakis was weak on defense, Byrd delivered a Senate speech in which he said that the Reagan administration "is living in a glass house when it throws a stone at the Democratic Party for its so-called Disneyland defense policies" and that U.S. land-based missiles had grown in vulnerability due to the administration being "unable to produce an acceptable solution to make our missiles survivable." Byrd added, "Indeed, the Fantasyland exhibits of this White House's Defense Disneyland are loaded with the rejected systems that have been developed and discarded. If anything deserves the names 'Goofy' and 'Daffy' and 'Mickey Mouse,' it is those basing proposals." In October 1990, Byrd and James A. McClure served as floor managers for the appropriation bill for the National Endowment for the Arts, accepting an amendment by Jesse Helms prohibiting NEA support of works denigrating the objects or beliefs of religions. In November 1993, when the Senate voted to seek federal court enforcement of a subpoena for the diaries of Bob Packwood, Byrd warned that Americans might become convinced that the Senate was delaying action to protect one of its own members. Byrd also called for Packwood to resign: "None of us is without flaws. But when those flaws damage the institution of the Senate, it is time to have the grace to go!" Packwood resigned in 1995. In October 1999, Byrd was the only senator to vote "present" on the Comprehensive Nuclear-Test-Ban Treaty. The treaty was designed to ban underground nuclear testing and was the first major international security pact to be defeated in the Senate since the Treaty of Versailles. Byrd opposed the Flag Desecration Amendment, saying that, while he wanted to protect the American flag, he believed that amending the Constitution "is not the most expeditious way to protect this revered symbol of our Republic." As an alternative, Byrd cosponsored the Flag Protection Act of 2005 (S. 
1370), a bill under which anyone who destroys or desecrates the flag in an attempt to incite violence or cause a breach of the peace, or who steals, damages, or destroys a flag on federal property, whether owned by the federal government or a private group or individual, could be imprisoned, fined, or both. The bill did not pass. In 2009, Byrd was one of three Democrats to oppose the confirmation of Secretary of the Treasury Timothy Geithner.
In some reptiles, the scales are underlain by a bony base (osteoderms), forming armor. In lepidosaurians, such as lizards and snakes, the whole skin is covered in overlapping epidermal scales. Such scales were once thought to be typical of the class Reptilia as a whole, but are now known to occur only in lepidosaurians. The scales found in turtles and crocodiles are of dermal, rather than epidermal, origin and are properly termed scutes. In turtles, the body is hidden inside a hard shell composed of fused scutes. Lacking a thick dermis, reptilian leather is not as strong as mammalian leather. It is used, particularly crocodile skin, for decorative purposes in leather-wares such as shoes, belts, and handbags. Shedding Reptiles shed their skin through a process called ecdysis, which occurs continuously throughout their lifetime. Younger reptiles tend to shed once every 5–6 weeks, while adults shed 3–4 times a year; younger reptiles shed more because of their rapid growth rate. Once they reach full size, the frequency of shedding decreases drastically. The process of ecdysis involves forming a new layer of skin under the old one. Proteolytic enzymes and lymphatic fluid are secreted between the old and new layers of skin, lifting the old skin from the new one and allowing shedding to occur. Snakes shed from the head to the tail, while lizards shed in a "patchy pattern". Dysecdysis, a common skin disorder in snakes and lizards, occurs when ecdysis, or shedding, fails. Shedding can fail for numerous reasons, including inadequate humidity and temperature, nutritional deficiencies, dehydration, and traumatic injuries. Nutritional deficiencies decrease the proteolytic enzymes, while dehydration reduces the lymphatic fluid needed to separate the skin layers. Traumatic injuries, on the other hand, form scars that will not allow new scales to form and so disrupt the process of ecdysis. Excretion Excretion is performed mainly by two small kidneys. In diapsids, uric acid is the main nitrogenous waste product; turtles, like mammals, excrete mainly urea. Unlike the kidneys of mammals and birds, reptile kidneys are unable to produce liquid urine more concentrated than their body fluid. This is because they lack a specialized structure called a loop of Henle, which is present in the nephrons of birds and mammals. Because of this, many reptiles use the colon to aid in the reabsorption of water. Some are also able to take up water stored in the bladder. Excess salts are also excreted by nasal and lingual salt glands in some reptiles. In all reptiles, the urinogenital ducts and the anus both empty into an organ called a cloaca. In some reptiles, but not all, a midventral wall in the cloaca may open into a urinary bladder. A bladder is present in all turtles and tortoises as well as most lizards, but is lacking in monitor lizards and legless lizards. It is absent in snakes, alligators, and crocodiles. Many turtles, tortoises, and lizards have proportionally very large bladders. Charles Darwin noted that the Galapagos tortoise had a bladder which could store up to 20% of its body weight. Such adaptations are the result of environments, such as remote islands and deserts, where water is very scarce. Other desert-dwelling reptiles have large bladders that can store a long-term reservoir of water for up to several months and aid in osmoregulation. Turtles have two or more accessory urinary bladders, located lateral to the neck of the urinary bladder and dorsal to the pubis, occupying a significant portion of their body cavity. 
Their bladder is also usually bilobed, with a left and right section. The right section is located under the liver, which prevents large stones from remaining on that side, while the left section is more likely to contain calculi. Digestion Most reptiles are insectivorous or carnivorous and have simple and comparatively short digestive tracts, meat being fairly simple to break down and digest. Digestion is slower than in mammals, reflecting their lower resting metabolism and their inability to divide and masticate their food. Their poikilotherm metabolism has very low energy requirements, allowing large reptiles like crocodiles and large constrictors to live off a single large meal for months, digesting it slowly. While modern reptiles are predominantly carnivorous, during the early history of reptiles several groups produced some herbivorous megafauna: in the Paleozoic, the pareiasaurs; and in the Mesozoic, several lines of dinosaurs. Today, turtles are the only predominantly herbivorous reptile group, but several lines of agamas and iguanas have evolved to live wholly or partly on plants. Herbivorous reptiles face the same problems of mastication as herbivorous mammals but, lacking the complex teeth of mammals, many species swallow rocks and pebbles (so-called gastroliths) to aid in digestion: the rocks are washed around in the stomach, helping to grind up plant matter. Fossil gastroliths have been found associated with both ornithopods and sauropods, though whether they actually functioned as a gastric mill in the latter is disputed. Saltwater crocodiles also use gastroliths as ballast, stabilizing them in the water or helping them to dive. A dual function as both stabilizing ballast and digestion aid has been suggested for gastroliths found in plesiosaurs. Nerves The reptilian nervous system contains the same basic parts as the amphibian brain, but the reptile cerebrum and cerebellum are slightly larger. Most typical sense organs are well developed, with certain exceptions, most notably the snake's lack of external ears (middle and inner ears are present). There are twelve pairs of cranial nerves. Due to their short cochlea, reptiles use electrical tuning to expand their range of audible frequencies. Intelligence Reptiles are generally considered less intelligent than mammals and birds. The size of their brain relative to their body is much less than that of mammals, the encephalization quotient being about one tenth of that of mammals, though larger reptiles can show more complex brain development. Larger lizards, like the monitors, are known to exhibit complex behavior, including cooperation and cognitive abilities allowing them to optimize their foraging and territoriality over time. Crocodiles have relatively larger brains and show a fairly complex social structure. The Komodo dragon is even known to engage in play, as are turtles, which are also considered to be social creatures that sometimes switch between monogamy and promiscuity in their sexual behavior. One study found that wood turtles were better than white rats at learning to navigate mazes. Another study found that giant tortoises are capable of learning through operant conditioning and visual discrimination, and that they retain learned behaviors in long-term memory. Sea turtles have been regarded as having simple brains, but their flippers are used for a variety of foraging tasks (holding, bracing, corralling) in common with marine mammals. Vision Most reptiles are diurnal animals. 
Their vision is typically adapted to daylight conditions, with color vision and more advanced visual depth perception than in amphibians and most mammals. Reptiles usually have excellent vision, allowing them to detect shapes and motions at long distances. They often have only a few rod cells, however, and so have poor vision in low-light conditions. At the same time, they have cells called "double cones", which give them sharp color vision and enable them to see ultraviolet wavelengths. In some species, such as blind snakes, vision is reduced. Many lepidosaurs have a photosensory organ on the top of their heads called the parietal eye, also known as the third eye, pineal eye, or pineal gland. This "eye" does not work the same way as a normal eye, as it has only a rudimentary retina and lens and thus cannot form images. It is, however, sensitive to changes in light and dark and can detect movement. Some snakes have extra sets of visual organs (in the loosest sense of the word) in the form of pits sensitive to infrared radiation (heat). Such heat-sensitive pits are particularly well developed in the pit vipers, but are also found in boas and pythons. These pits allow the snakes to sense the body heat of birds and mammals, enabling pit vipers to hunt rodents in the dark. Most reptiles, including birds, possess a nictitating membrane, a translucent third eyelid which is drawn over the eye from the inner corner. Notably, it protects a crocodilian's eyeball surface while allowing a degree of vision underwater. However, many squamates, geckos and snakes in particular, lack eyelids, which are replaced by a transparent scale. This is called the brille, spectacle, or eyecap. The brille is usually not visible, except for when the snake molts, and it protects the eyes from dust and dirt. Reproduction Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm. Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. In a crocodile egg, for example, the eggshell protects the embryo and keeps it from drying out, but it is flexible enough to allow gas exchange. The chorion aids in gas exchange between the inside and outside of the egg, allowing carbon dioxide to exit the egg and oxygen to enter it. The albumin further protects the embryo and serves as a reservoir for water and protein. The allantois is a sac that collects the metabolic waste produced by the embryo. The amniotic sac contains amniotic fluid, which protects and cushions the embryo, while the amnion aids in osmoregulation and serves as a saltwater reservoir. The yolk sac surrounding the yolk contains protein- and fat-rich nutrients that are absorbed by the embryo via vessels, allowing the embryo to grow and metabolize. The air space provides the embryo with oxygen while it is hatching, ensuring that it will not suffocate. 
There are no larval stages of development. Viviparity and ovoviviparity have evolved in many extinct clades of reptiles and in squamates. In the latter group, many species, including all boas and most vipers, utilize this mode of reproduction. The degree of viviparity varies: some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous, because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and Plesiosauria. Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. This form of asexual reproduction, called parthenogenesis, occurs in several species of gecko, and is particularly widespread in the teiids (especially Aspidoscelis) and lacertids (Lacerta). In captivity, Komodo dragons (Varanidae) have reproduced by parthenogenesis. Parthenogenetic species are suspected to occur among chameleons, agamids, xantusiids, and typhlopids. Some reptiles exhibit temperature-dependent sex determination (TDSD), in which the incubation temperature determines whether a particular egg hatches as male or female. TDSD is most common in turtles and crocodiles, but also occurs in lizards and tuatara. To date, there has been no confirmation of whether TDSD occurs in snakes. Defense mechanisms Many small reptiles, such as snakes and lizards that live on the ground or in the water, are vulnerable to being preyed on by all kinds of carnivorous animals. Thus, avoidance is the most common form of defense in reptiles. At the first sign of danger, most snakes and lizards crawl away into the undergrowth, and turtles and crocodiles will plunge into water and sink out of sight. Camouflage and warning Reptiles tend to avoid confrontation through camouflage. Two major groups of reptile predators are birds and other reptiles, both of which have well-developed color vision. Thus the skins of many reptiles have cryptic coloration of plain or mottled gray, green, and brown, allowing them to blend into the background of their natural environment. Aided by the reptiles' capacity for remaining motionless for long periods, the camouflage of many snakes is so effective that people or domestic animals are most typically bitten because they accidentally step on them. When camouflage fails to protect them, blue-tongued skinks will try to ward off attackers by displaying their blue tongues, and the frill-necked lizard will display its brightly colored frill. These same displays are used in territorial disputes and during courtship. If danger arises so suddenly that flight is useless, crocodiles, turtles, some lizards, and some snakes hiss loudly when confronted by an enemy. Rattlesnakes rapidly vibrate the tip of the tail, which is composed of a series of nested, hollow beads, to ward off approaching danger. 
In contrast to the normal drab coloration of most reptiles, the lizards of the genus Heloderma (the Gila monster and the beaded lizard) and many of the coral snakes have high-contrast warning coloration, warning potential predators that they are venomous. A number of non-venomous North American snake species have colorful markings similar to those of the coral snake, an oft-cited example of Batesian mimicry. Alternative defense in snakes Camouflage does not always fool a predator. When caught out, snake species adopt different defensive tactics and use a complicated set of behaviors when attacked. Some first elevate their head and spread out the skin of their neck in an effort to look large and threatening. Failure of this strategy may lead to other measures practiced particularly by cobras, vipers, and closely related species, which use venom to attack. The venom is modified saliva, delivered through fangs from a venom gland. Some non-venomous snakes, such as American hognose snakes or the European grass snake, play dead when in danger; some, including the grass snake, exude a foul-smelling liquid to deter attackers. Defense in crocodilians When a crocodilian is concerned about its safety, it will gape to expose its teeth and yellow tongue. If this does not work, the crocodilian gets a little more agitated and typically begins to make hissing sounds. After this, the crocodilian will start to change its posture dramatically to make itself look more intimidating. The body is inflated to increase apparent size. If absolutely necessary, it may decide to attack an enemy. Some species try to bite immediately. Some will use their heads as sledgehammers and literally smash an opponent, and some will rush or swim toward the threat from a distance, even chasing the opponent onto land or galloping after it. The main weapon in all crocodiles is the bite, which can generate very high bite force. Many species also possess canine-like teeth, used primarily for seizing prey but also in fighting and display. Shedding and regenerating tails Geckos, skinks, and other lizards that are captured by the tail will shed part of the tail structure through a process called autotomy and thus be able to flee. The detached tail will continue to wiggle, creating a deceptive sense of continued struggle and distracting the predator's attention from the fleeing prey animal. 
base (osteoderms), forming armor. In lepidosaurians, such as lizards and snakes, the whole skin is covered in overlapping epidermal scales. Such scales were once thought to be typical of the class Reptilia as a whole, but are now known to occur only in lepidosaurians. The scales found in turtles and crocodiles are of dermal, rather than epidermal, origin and are properly termed scutes. In turtles, the body is hidden inside a hard shell composed of fused scutes. Lacking a thick dermis, reptilian leather is not as strong as mammalian leather. It is used in leather-wares for decorative purposes for shoes, belts and handbags, particularly crocodile skin. Shedding Reptiles shed their skin through a process called ecdysis which occurs continuously throughout their lifetime. In particular, younger reptiles tend to shed once every 5–6 weeks while adults shed 3–4 times a year. Younger reptiles shed more because of their rapid growth rate. Once full size, the frequency of shedding drastically decreases. The process of ecdysis involves forming a new layer of skin under the old one. Proteolytic enzymes and lymphatic fluid is secreted between the old and new layers of skin. Consequently, this lifts the old skin from the new one allowing shedding to occur. Snakes will shed from the head to the tail while lizards shed in a "patchy pattern". Dysecdysis, a common skin disease in snakes and lizards, will occur when ecdysis, or shedding, fails. There are numerous reasons why shedding fails and can be related to inadequate humidity and temperature, nutritional deficiencies, dehydration and traumatic injuries. Nutritional deficiencies decrease proteolytic enzymes while dehydration reduces lymphatic fluids to separate the skin layers. Traumatic injuries on the other hand, form scars that will not allow new scales to form and disrupt the process of ecdysis. Excretion Excretion is performed mainly by two small kidneys. In diapsids, uric acid is the main nitrogenous waste product; turtles, like mammals, excrete mainly urea. Unlike the kidneys of mammals and birds, reptile kidneys are unable to produce liquid urine more concentrated than their body fluid. This is because they lack a specialized structure called a loop of Henle, which is present in the nephrons of birds and mammals. Because of this, many reptiles use the colon to aid in the reabsorption of water. Some are also able to take up water stored in the bladder. Excess salts are also excreted by nasal and lingual salt glands in some reptiles. In all reptiles the urinogenital ducts and the anus both empty into an organ called a cloaca. In some reptiles, a midventral wall in the cloaca may open into a urinary bladder, but not all. It is present in all turtles and tortoises as well as most lizards, but is lacking in the monitor lizard, the legless lizards. It is absent in the snakes, alligators, and crocodiles. Many turtles, tortoises, and lizards have proportionally very large bladders. Charles Darwin noted that the Galapagos tortoise had a bladder which could store up to 20% of its body weight. Such adaptations are the result of environments such as remote islands and deserts where water is very scarce. Other desert-dwelling reptiles have large bladders that can store a long-term reservoir of water for up to several months and aid in osmoregulation. Turtles have two or more accessory urinary bladders, located lateral to the neck of the urinary bladder and dorsal to the pubis, occupying a significant portion of their body cavity. 
Their bladder is also usually bilobed with a left and right section. The right section is located under the liver, which prevents large stones from remaining in that side while the left section is more likely to have calculi. Digestion Most reptiles are insectivorous or carnivorous and have simple and comparatively short digestive tracts due to meat being fairly simple to break down and digest. Digestion is slower than in mammals, reflecting their lower resting metabolism and their inability to divide and masticate their food. Their poikilotherm metabolism has very low energy requirements, allowing large reptiles like crocodiles and large constrictors to live from a single large meal for months, digesting it slowly. While modern reptiles are predominantly carnivorous, during the early history of reptiles several groups produced some herbivorous megafauna: in the Paleozoic, the pareiasaurs; and in the Mesozoic several lines of dinosaurs. Today, turtles are the only predominantly herbivorous reptile group, but several lines of agamas and iguanas have evolved to live wholly or partly on plants. Herbivorous reptiles face the same problems of mastication as herbivorous mammals but, lacking the complex teeth of mammals, many species swallow rocks and pebbles (so called gastroliths) to aid in digestion: The rocks are washed around in the stomach, helping to grind up plant matter. Fossil gastroliths have been found associated with both ornithopods and sauropods, though whether they actually functioned as a gastric mill in the latter is disputed. Salt water crocodiles also use gastroliths as ballast, stabilizing them in the water or helping them to dive. A dual function as both stabilizing ballast and digestion aid has been suggested for gastroliths found in plesiosaurs. Nerves The reptilian nervous system contains the same basic part of the amphibian brain, but the reptile cerebrum and cerebellum are slightly larger. Most typical sense organs are well developed with certain exceptions, most notably the snake's lack of external ears (middle and inner ears are present). There are twelve pairs of cranial nerves. Due to their short cochlea, reptiles use electrical tuning to expand their range of audible frequencies. Intelligence Reptiles are generally considered less intelligent than mammals and birds. The size of their brain relative to their body is much less than that of mammals, the encephalization quotient being about one tenth of that of mammals, though larger reptiles can show more complex brain development. Larger lizards, like the monitors, are known to exhibit complex behavior, including cooperation and cognitive abilities allowing them to optimize their foraging and territoriality over time. Crocodiles have relatively larger brains and show a fairly complex social structure. The Komodo dragon is even known to engage in play, as are turtles, which are also considered to be social creatures, and sometimes switch between monogamy and promiscuity in their sexual behavior. One study found that wood turtles were better than white rats at learning to navigate mazes. Another study found that giant tortoises are capable of learning through operant conditioning, visual discrimination and retained learned behaviors with long-term memory. Sea turtles have been regarded as having simple brains, but their flippers are used for a variety of foraging tasks (holding, bracing, corralling) in common with marine mammals. Vision Most reptiles are diurnal animals. 
Their vision is typically adapted to daylight conditions, with color vision and more advanced visual depth perception than in amphibians and most mammals. Reptiles usually have excellent vision, allowing them to detect shapes and motions at long distances. They often have only a few rod cells, however, and have poor vision in low-light conditions. At the same time they have cells called "double cones" which give them sharp color vision and enable them to see ultraviolet wavelengths. In some species, such as blind snakes, vision is reduced. Many lepidosaurs have a photosensory organ on the top of their heads called the parietal eye, also known as the third eye, pineal eye or pineal gland. This "eye" does not work the same way as a normal eye does, as it has only a rudimentary retina and lens and thus cannot form images. It is, however, sensitive to changes in light and dark and can detect movement. Some snakes have extra sets of visual organs (in the loosest sense of the word) in the form of pits sensitive to infrared radiation (heat). Such heat-sensitive pits are particularly well developed in the pit vipers, but are also found in boas and pythons. These pits allow the snakes to sense the body heat of birds and mammals, enabling pit vipers to hunt rodents in the dark. Most reptiles, including birds, possess a nictitating membrane, a translucent third eyelid which is drawn over the eye from the inner corner. Notably, it protects a crocodilian's eyeball surface while allowing a degree of vision underwater. However, many squamates, geckos and snakes in particular, lack eyelids, which are replaced by a transparent scale. This is called the brille, spectacle, or eyecap. The brille is usually not visible, except when the snake molts, and it protects the eyes from dust and dirt. Reproduction Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm. Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. In a crocodile egg, for example, the eggshell protects the embryo and keeps it from drying out, but is flexible enough to allow gas exchange. The chorion aids in gas exchange between the inside and outside of the egg, allowing carbon dioxide to exit the egg and oxygen to enter it. The albumin further protects the embryo and serves as a reservoir for water and protein. The allantois is a sac that collects the metabolic waste produced by the embryo. The amniotic sac contains amniotic fluid, which protects and cushions the embryo, while the amnion itself aids in osmoregulation and serves as a saltwater reservoir. The yolk sac surrounding the yolk contains protein- and fat-rich nutrients that are absorbed by the embryo via vessels, allowing the embryo to grow and metabolize. The air space provides the embryo with oxygen while it is hatching, ensuring that it will not suffocate.
There are no larval stages of development. Viviparity and ovoviviparity have evolved in many extinct clades of reptiles and in squamates. In the latter group, many species, including all boas and most vipers, utilize this mode of reproduction. The degree of viviparity varies; some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is found in the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous, because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and Plesiosauria. Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. This form of asexual reproduction, called parthenogenesis, occurs in several species of gecko, and is particularly widespread in the teiids (especially Aspidoscelis) and lacertids (Lacerta). In captivity, Komodo dragons (Varanidae) have reproduced by parthenogenesis. Parthenogenetic species are suspected to occur among chameleons, agamids, xantusiids, and typhlopids. Some reptiles exhibit temperature-dependent sex determination (TDSD), in which the incubation temperature determines whether a particular egg hatches as male or female. TDSD is most common in turtles and crocodiles, but also occurs in lizards and tuatara. To date, there has been no confirmation of whether TDSD occurs in snakes. Defense mechanisms Many small reptiles, such as snakes and lizards that live on the ground or in the water, are vulnerable to being preyed on by all kinds of carnivorous animals. Avoidance is thus the most common form of defense in reptiles. At the first sign of danger, most snakes and lizards crawl away into the undergrowth, and turtles and crocodiles will plunge into water and sink out of sight. Camouflage and warning Reptiles tend to avoid confrontation through camouflage. Two major groups of reptile predators are birds and other reptiles, both of which have well-developed color vision. Thus the skins of many reptiles have cryptic coloration of plain or mottled gray, green, and brown, allowing them to blend into the background of their natural environment. Aided by the reptiles' capacity for remaining motionless for long periods, the camouflage of many snakes is so effective that people or domestic animals are most typically bitten because they accidentally step on them. When camouflage fails to protect them, blue-tongued skinks will try to ward off attackers by displaying their blue tongues, and the frill-necked lizard will display its brightly colored frill. These same displays are used in territorial disputes and during courtship. If danger arises so suddenly that flight is useless, crocodiles, turtles, some lizards, and some snakes hiss loudly when confronted by an enemy. Rattlesnakes rapidly vibrate the tip of the tail, which is composed of a series of nested, hollow beads, to ward off approaching danger.
In contrast to the normal drab coloration of most reptiles, the lizards of the genus Heloderma (the Gila monster and the beaded lizard) and many of the coral snakes have high-contrast warning coloration, warning potential predators that they are venomous. A number of non-venomous North American snake species have colorful markings similar to those of the coral snake, an oft-cited example of Batesian mimicry. Alternative defense in snakes Camouflage does not always fool a predator. When caught out, snakes adopt different defensive tactics and use a complicated set of behaviors when attacked. Some first elevate their head and spread out the skin of their neck in an effort to look large and threatening. Failure of this strategy may lead to other measures practiced particularly by cobras, vipers, and closely related species, which use venom to attack. The venom is modified saliva, delivered through fangs from a venom gland. Some non-venomous snakes, such as American hognose snakes or the European grass snake, play dead when in danger; some, including the grass snake, exude a foul-smelling liquid to deter attackers. Defense in crocodilians When a crocodilian is concerned about its safety, it will gape to expose the teeth and yellow tongue. If this does not work, the crocodilian gets a little more agitated and typically begins to make hissing sounds. After this, the crocodilian will start to change its posture dramatically to make itself look more intimidating. The body is inflated to increase apparent size. If absolutely necessary, it may decide to attack an enemy. Some species try to bite immediately. Some will use their heads as sledgehammers and literally smash an opponent, while some will rush or swim toward the threat from a distance, even chasing the opponent onto land or galloping after it. The main weapon in all crocodiles is the bite, which can generate very high bite force. Many species also possess canine-like teeth. These are used primarily for seizing prey, but are also used in fighting and display. Shedding and regenerating tails Geckos, skinks, and other lizards that are captured by the tail will shed part of the tail structure through a process called autotomy and thus be able to flee. The detached tail will continue to wiggle, creating a deceptive sense of continued struggle and distracting the predator's attention from the fleeing prey animal. The detached tails of leopard geckos can wiggle for up to 20 minutes. In many species the tail is of a separate and dramatically more intense color than the rest of the body, so as to encourage potential predators to strike for the tail first. In the shingleback skink and some species of geckos, the tail is short and broad and resembles the head, so that predators may attack it rather than the more vulnerable front part. Reptiles that are capable of shedding their tails can partially regenerate them over a period of weeks. The new section will, however, contain cartilage rather than bone, and will never grow to the same length as the original tail. It is often also distinctly discolored compared to the rest of the body and may lack some of the external sculpting features seen in the original tail. Relations with humans In cultures and religions Dinosaurs have been widely depicted in culture since the English palaeontologist Richard Owen coined the name dinosaur in 1842. As early as 1854, the Crystal Palace Dinosaurs were on display to the public in south London.
One dinosaur appeared in literature even earlier: Charles Dickens placed a Megalosaurus in the first chapter of his novel Bleak House in 1852. The dinosaurs featured in books, films, television programs, artwork, and other media have been used for both education and entertainment. The depictions range from the realistic, as in the television documentaries of the 1990s and the first decade of the 21st century, to the fantastic, as in the monster movies of the 1950s and 1960s. The snake or serpent has played a powerful symbolic role in different cultures. In Egyptian history, the Nile cobra adorned the crown of the pharaoh. It was worshipped as one of the gods and was also used for sinister purposes: murder of an adversary and ritual suicide (Cleopatra). In Greek mythology snakes are associated with deadly antagonists, as a chthonic symbol, roughly translated as earthbound. The nine-headed Lernaean Hydra that Hercules defeated and the three Gorgon sisters are children of Gaia, the earth. Medusa was one of the three Gorgon sisters whom Perseus defeated. Medusa is described as a hideous mortal, with snakes instead of hair and the power to turn men to stone with her gaze. After killing her, Perseus gave her head to Athena, who fixed it to her shield, the Aegis. The Titans are depicted in art with their legs replaced by bodies of snakes for the same reason: they are children of Gaia, so they are bound to the earth. In Hinduism, snakes are worshipped as gods, with many women pouring milk on snake pits. The cobra is seen on the neck of Shiva, while Vishnu is often depicted sleeping on a seven-headed snake or within the coils of a serpent. There are temples in India solely for cobras, sometimes called Nagraj (King of Snakes), and it is believed that snakes are symbols of fertility. In the annual Hindu festival of Nag Panchami, snakes are venerated and prayed to. In religious terms, the snake and jaguar are arguably the most important animals in ancient Mesoamerica. "In states of ecstasy, lords dance a serpent dance; great descending snakes adorn and support buildings from Chichen Itza to Tenochtitlan, and the Nahuatl word coatl meaning serpent or twin, forms part of primary deities such as Mixcoatl, Quetzalcoatl, and Coatlicue." In Christianity and Judaism, a serpent appears in Genesis to tempt Adam and Eve with the forbidden fruit from the Tree of Knowledge of Good and Evil. The turtle has a prominent position as a symbol of steadfastness and tranquility in religion, mythology, and folklore from around the world. A tortoise's longevity is suggested by its long lifespan and its shell, which was thought to protect it from any foe. In the cosmological myths of several cultures a World Turtle carries the world upon its back or supports the heavens. Medicine Deaths from snakebites are uncommon in many parts of the world, but are still counted in tens of thousands per year in India. Snakebite can be treated with antivenom made from the venom of the snake. To produce antivenom, a mixture of the venoms of different species of snake is injected into the body of a horse in ever-increasing dosages until the horse is immunized. Blood is then extracted; the serum is separated, purified and freeze-dried. The cytotoxic effect of snake venom is being researched as a potential treatment for cancers. Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesised for use in the anti-diabetes drug exenatide (Byetta).
Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug. Geckos have also been used as medicine, especially in China. Turtles have been used in Chinese traditional medicine for thousands of years, with every part of the turtle believed to have medical benefits, although there is a lack of scientific evidence correlating the claimed medical benefits with turtle consumption. Growing demand for turtle meat has placed pressure on vulnerable wild populations of turtles. Commercial farming Crocodiles are protected in many parts of the world, and are farmed commercially. Their hides are tanned and used to make leather goods such as shoes and handbags; crocodile meat is also considered a delicacy. The most commonly farmed species are the saltwater and Nile crocodiles. Farming has resulted in an increase in the saltwater crocodile population in Australia, as eggs are usually harvested from the wild, so landowners have an incentive to conserve their habitat. Crocodile leather is made into wallets, briefcases, purses, handbags, belts, hats, and shoes. Crocodile oil has been used for various purposes. Snakes are also farmed, primarily in East and Southeast Asia, and their production has become more intensive in the last decade. Snake farming has been troubling for conservation in the past, as it can lead to overexploitation of wild snakes and their natural prey to supply the farms. However, farming snakes can limit the hunting of wild snakes, while reducing the slaughter of higher-order vertebrates like cows. The energy efficiency of snakes is higher than expected for carnivores, due to their ectothermy and low metabolism. Waste protein from the poultry and pig industries is used as feed in snake farms. Snake farms produce meat, snake skin, and antivenom. Turtle farming is another known but controversial practice. Turtles have been farmed for a variety of reasons, ranging from food to traditional medicine, the pet trade, and scientific conservation. Demand for turtle meat and medicinal products is one of the main threats to turtle conservation in Asia. Though commercial breeding would seem to insulate wild populations, it can stoke the demand for them and increase wild captures. Even the potentially appealing concept of raising turtles at a farm to release into the wild is questioned by some veterinarians who have had some experience with farm operations. They caution that this may introduce infectious diseases that occur on the farm, but are not (yet) present in the wild, into wild populations. Reptiles in captivity In the Western world, some snakes (especially docile species such as the ball python and corn snake) are kept as pets. Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and
The plan promotes zero-emissions vehicles and investments in the infrastructure to support them. In 2014, Rhode Island received grants of $2,711,685 from the Environmental Protection Agency to clean up Brownfield sites in eight locations. The grants provided communities with funding to assess, clean up, and redevelop contaminated properties, boost local economies, and leverage jobs while protecting public health and the environment. In 2013, the "Lots of Hope" program was established in the City of Providence to focus on increasing the city's green space and local food production, improving urban neighborhoods, promoting healthy lifestyles and improving environmental sustainability. Supported by a $100,000 grant, the program will partner with the City of Providence, the Southside Community Land Trust and the Rhode Island Foundation to convert city-owned vacant lots into productive urban farms. In 2012, Rhode Island passed bill S2277/H7412, "An act relating to Health and Safety – Environmental Cleanup Objectives for Schools", informally known as the School Siting Bill. Sponsored by Senator Juan Pichardo and Representative Scott Slater, and signed into law by the governor, it made Rhode Island the first US state to prohibit school construction on Brownfield sites where toxic vapors can potentially affect indoor air quality. It also creates a public participation process whenever a city or town considers building a school on any other kind of contaminated site. Demographics The United States Census Bureau estimated Rhode Island's population was 1,059,361 on July 1, 2019, a 0.65% increase since the 2010 United States census. At the 2020 U.S. census, its population was 1,097,379. The center of population of Rhode Island is in Providence County, in the city of Cranston. A corridor of population can be seen from the Providence area, stretching northwest following the Blackstone River to Woonsocket, where 19th-century mills drove industry and development. According to the 2010 census, 81.4% of the population was White (76.4% non-Hispanic white), 5.7% was Black or African American, 0.6% American Indian and Alaska Native, 2.9% Asian, 0.1% Native Hawaiian and other Pacific Islander, and 3.3% from two or more races. 12.4% of the total population was of Hispanic or Latino origin (they may be of any race). Of the people residing in Rhode Island, 58.7% were born in Rhode Island, 26.6% were born in a different state, 2.0% were born in Puerto Rico, U.S. island areas, or abroad to American parent(s), and 12.6% were foreign born. According to the U.S. Census Bureau, Rhode Island had an estimated population of 1,056,298, an increase of 1,125, or 0.10%, from the prior year and an increase of 3,731, or 0.35%, since the year 2010. This includes a natural increase since the last census of 15,220 people (66,973 births minus 51,753 deaths) and an increase due to net migration of 14,001 people into the state. Immigration from outside the United States resulted in a net increase of 18,965 people, and migration within the country produced a net decrease of 4,964 people. Hispanics in the state make up 12.8% of the population, predominantly of Dominican, Puerto Rican, and Guatemalan origin. According to the 2000 U.S. census, 84% of the population aged 5 and older spoke only American English, while 8.07% spoke Spanish at home, 3.80% Portuguese, 1.96% French, 1.39% Italian, and 0.78% spoke other languages at home.
The state's most populous ethnic group, non-Hispanic white, has declined from 96.1% in 1970 to 76.5% in 2011. In 2011, 40.3% of Rhode Island's children under the age of one belonged to racial or ethnic minority groups, meaning they had at least one parent who was not non-Hispanic white. 6.1% of Rhode Island's population was reported as under age 5, 23.6% under 18, and 14.5% as 65 or older. Females made up approximately 52% of the population. According to the 2010–2015 American Community Survey, the largest ancestry groups were Irish (18.3%), Italian (18.0%), English (10.5%), French (10.4%), and Portuguese (9.3%). Rhode Island has a higher percentage of Americans of Portuguese ancestry, including Portuguese Americans and Cape Verdean Americans, than any other state in the nation. Additionally, the state has the highest percentage of Liberian immigrants, with more than 15,000 residing in the state. Italian Americans make up a plurality in central and southern Providence County, and French-Canadian Americans form a large part of northern Providence County. Irish Americans have a strong presence in Newport and Kent counties. Americans of English ancestry still have a presence in the state as well, especially in Washington County, and are often referred to as "Swamp Yankees". African immigrants, including Cape Verdean Americans, Liberian Americans, Nigerian Americans and Ghanaian Americans, form significant and growing communities in Rhode Island. Although Rhode Island has the smallest land area of all 50 states, it has the second-highest population density of any state in the Union, second only to that of New Jersey. Birth data Since 2016, data for births of White Hispanic origin are not collected separately but are included in one Hispanic group; persons of Hispanic origin may be of any race. Religion A Pew survey of Rhode Island residents' religious self-identification showed the following distribution of affiliations: Catholic 42%, Protestant 30%, Jewish 1%, Jehovah's Witnesses 2%, Buddhism 1%, Mormonism 1%, Hinduism 1%, and non-religious 20%. The largest denominations are the Catholic Church with 456,598 adherents, the Episcopal Church with 19,377, the American Baptist Churches USA with 15,220, and the United Methodist Church with 6,901 adherents. Rhode Island has the highest proportion of Catholic residents of any state, mainly due to large Irish, Italian, and French-Canadian immigration in the past; recently, significant Portuguese and various Hispanic communities have also been established in the state. Though it has the highest overall Catholic percentage of any state, none of Rhode Island's individual counties ranks among the 10 most Catholic in the United States, as Catholics are evenly spread throughout the state. Rhode Island's Jewish community, centered in the Providence area, emerged during a wave of Jewish immigration, predominantly from Eastern European shtetls, between 1880 and 1920. The presence of the Touro Synagogue in Newport, the oldest existing synagogue in the United States, emphasizes that these second-wave immigrants did not create Rhode Island's first Jewish community; a comparatively smaller wave of Spanish and Portuguese Jews had immigrated to Newport during the colonial era. Economy The Rhode Island economy had a colonial base in fishing. The Blackstone River Valley was a major contributor to the American Industrial Revolution. It was in Pawtucket that Samuel Slater set up Slater Mill in 1793, using the waterpower of the Blackstone River to power his cotton mill.
For a while, Rhode Island was one of the leaders in textiles. However, with the Great Depression, most textile factories relocated to southern U.S. states. The textile industry still constitutes a part of the Rhode Island economy but does not have the same power. Other important industries in Rhode Island's past included toolmaking, costume jewelry, and silverware. An interesting by-product of Rhode Island's industrial history is the number of abandoned factories, many of which are now condominiums, museums, offices, and low-income and elderly housing. Today, much of Rhode Island's economy is based on services, particularly healthcare and education, and still manufacturing to some extent. The state's nautical history continues in the 21st century in the form of nuclear submarine construction. Per the 2013 American Community Survey, Rhode Island has the highest-paid elementary school teachers in the country, with an average salary of $75,028 (adjusted for inflation). The headquarters of Citizens Financial Group, the 14th-largest bank in the United States, is in Providence. The Fortune 500 companies CVS Caremark and Textron are based in Woonsocket and Providence, respectively. FM Global, GTECH Corporation, Hasbro, American Power Conversion, Nortek, and Amica Mutual Insurance are all Fortune 1000 companies based in Rhode Island. Rhode Island's 2000 total gross state product was $46.18 billion (adjusted for inflation), placing it 45th in the nation. Its 2000 per capita personal income was $41,484 (adjusted for inflation), 16th in the nation. Rhode Island has the lowest level of energy consumption per capita of any state. Additionally, Rhode Island is rated as the fifth most energy-efficient state in the country. In December 2012, the state's unemployment rate was 10.2%. This gradually fell to 3.5% by November 2019; however, the coronavirus pandemic drove the unemployment rate to a high of 18.1% in April 2020. It has since fallen to 10.5% in September 2020 and was projected to decrease further to 7% in October 2020. Health services are Rhode Island's largest industry. Second is tourism, supporting 39,000 jobs, with tourism-related sales of $4.56 billion (adjusted for inflation) in the year 2000. The third-largest industry is manufacturing. Its industrial outputs are submarine construction, shipbuilding, costume jewelry, fabricated metal products, electrical equipment, machinery, and boatbuilding. Rhode Island's agricultural outputs are nursery stock, vegetables, dairy products, and eggs. Rhode Island's taxes were appreciably higher than those of neighboring states, because Rhode Island's income tax was based on 25% of the payer's federal income tax payment. Former Governor Donald Carcieri claimed the higher tax rate had an inhibitory effect on business growth in the state and called for reductions to increase the competitiveness of the state's business environment. In 2010, the Rhode Island General Assembly passed a new state income tax structure that Governor Carcieri signed into law on June 9, 2010. The income tax overhaul has made Rhode Island competitive with other New England states by lowering its maximum tax rate to 5.99% and reducing the number of tax brackets to three. The state's first income tax was enacted in 1971.
Largest employers Rhode Island's largest employers (excluding employees of municipalities) are: Transportation Bus The Rhode Island Public Transit Authority (RIPTA) operates statewide intra- and intercity bus transport from its hubs at Kennedy Plaza in Providence, Pawtucket, and Newport. RIPTA bus routes serve 38 of Rhode Island's 39 cities and towns (New Shoreham on Block Island is not served). RIPTA operates 58 routes, including daytime trolley service (using trolley-style replica buses) in Providence and Newport. Ferry From 2000 through 2008, RIPTA offered seasonal ferry service linking Providence and Newport (already connected by highway), funded by grant money from the United States Department of Transportation. Though the service was popular with residents and tourists, RIPTA was unable to continue after the federal funding ended, and service was discontinued. The service resumed in 2016 and has been successful. The privately run Block Island Ferry links Block Island with Newport and Narragansett with traditional and fast-ferry service, while the Prudence Island Ferry connects Bristol with Prudence Island. Private ferry services also link several Rhode Island communities with ports in Connecticut, Massachusetts, and New York. Rail The MBTA Commuter Rail's Providence/Stoughton Line links Providence and T. F. Green Airport with Boston's South Station. The line was later extended southward to Wickford Junction, with service beginning April 23, 2012. The state hopes to extend the MBTA line to Kingston and Westerly, as well as to explore the possibility of extending Connecticut's Shore Line East to T. F. Green Airport. Amtrak's Acela Express stops at Providence Station (the only Acela stop in Rhode Island), linking Providence to other cities in the Northeast Corridor. Amtrak's Northeast Regional service makes stops at Providence Station, Kingston, and Westerly. Aviation Rhode Island's primary airport for passenger and cargo transport is T. F. Green Airport in Warwick, though Rhode Islanders who wish to travel internationally on direct flights, and those who seek a greater availability of flights and destinations, often fly through Logan International Airport in Boston. Limited access highways Interstate 95 (I-95) runs southwest to northeast across the state, linking Rhode Island with other states along the East Coast. I-295 functions as a partial beltway encircling Providence to the west. I-195 provides a limited-access highway connection from Providence (and Connecticut and New York via I-95) to Cape Cod. Initially built as the easternmost link in the (now cancelled) extension of I-84 from Hartford, Connecticut, a portion of U.S. Route 6 (US 6) through northern Rhode Island is limited-access and links I-295 with downtown Providence. Several Rhode Island highways extend the state's limited-access highway network. Route 4 is a major north–south freeway linking Providence and Warwick (via I-95) with suburban and beach communities along Narragansett Bay. Route 10 is an urban connector linking downtown Providence with Cranston and Johnston. Route 37 is an important east–west freeway through Cranston and Warwick and links I-95 with I-295. Route 99 links Woonsocket with Providence (via Route 146). Route 146 travels through the Blackstone Valley, linking Providence and I-95 with Worcester, Massachusetts and the Massachusetts Turnpike. Route 403 links Route 4 with Quonset Point.
Several bridges cross Narragansett Bay connecting Aquidneck Island and Conanicut Island to the mainland, most notably the Claiborne Pell Newport Bridge and the Jamestown–Verrazano Bridge. Bicycle paths The East Bay Bike Path stretches from Providence to Bristol along the eastern shore of Narragansett Bay, while the Blackstone River Bikeway will eventually link Providence and Worcester. In 2011, Rhode Island completed work on a marked on-road bicycle path through Pawtucket and Providence, connecting the East Bay Bike Path with the Blackstone River Bikeway, completing a bicycle route through the eastern side of the state. The William C. O'Neill Bike Path (commonly known as the South County Bike Path) is a path through South Kingstown and Narragansett. The Washington Secondary Bike Path stretches from Cranston to Coventry, and the Ten Mile River Greenway path runs through East Providence and Pawtucket. Future In late 2019, the Rhode Island Public Transit Authority released a draft of the Rhode Island Transit Master Plan, documenting and describing a variety of proposed improvements and additions to be made to the state's public transit network by 2040. Several different proposals were offered and were still under consideration as of December 2020, including implementation of a bus rapid transit system, express bus routes, expansion of Amtrak and MBTA services throughout the state, and construction of a new light rail network through downtown Providence. Media Education Primary and secondary schools Colleges and universities Rhode Island has several colleges and universities: Brown University, Bryant University, the Community College of Rhode Island, Johnson & Wales University, the Naval War College, the New England Institute of Technology, Providence College, Rhode Island College, the Rhode Island School of Design, Roger Williams University, Salve Regina University (Newport), and the University of Rhode Island. Culture Local accent Some Rhode Islanders speak with the distinctive, non-rhotic, traditional Rhode Island accent that linguists describe as a cross between New York City and Boston accents (e.g., "water" sounds like "watuh"). Many Rhode Islanders distinguish a strong aw sound (i.e., resist the cot–caught merger of Boston) much like one might hear in New Jersey or New York City, for example in the word coffee. Rhode Islanders sometimes refer to drinking fountains as "bubblers", milkshakes as "cabinets", and overstuffed foot-long sandwiches (of whatever kind) as "grinders". Food and beverages Rhode Island, like the rest of New England, has a tradition of clam chowder. Both the white New England and the red Manhattan varieties are popular, but there is also a unique clear-broth chowder known as Rhode Island clam chowder available in many restaurants. A culinary tradition in Rhode Island is the clam cake (also known as a clam fritter outside of Rhode Island), a deep-fried ball of buttery dough with chopped bits of clam inside. They are sold by the half-dozen or dozen in most seafood restaurants around the state, and the quintessential summer meal in Rhode Island is chowder and clam cakes. The quahog is a large local clam usually used in a chowder. It is also ground and mixed with stuffing or spicy minced sausage, and then baked in its shell to form a stuffie. Calamari (squid) is sliced into rings and fried as an appetizer in most Italian restaurants, typically served Sicilian-style with sliced banana peppers and marinara sauce on the side. (In 2014, calamari became the official state appetizer.)
Clams Casino originated in Rhode Island, invented by Julius Keller, the maître d' in the original Casino next to the seaside Towers in Narragansett. Clams Casino resemble the beloved stuffed quahog but are generally made with the smaller littleneck or cherrystone clam and are unique in their use of bacon as a topping. The official state drink of Rhode Island is coffee milk, a beverage created by mixing milk with coffee syrup. This unique syrup was invented in the state and is sold in almost all Rhode Island supermarkets, as well as in its bordering states. Johnnycakes have been a Rhode Island staple since Colonial times, made with cornmeal and water, then pan-fried much like pancakes. Submarine sandwiches are called grinders throughout Rhode Island, and the Italian grinder, made with cold cuts such as ham, prosciutto, capicola, salami, and Provolone cheese, is especially popular. Linguiça or chouriço is a spicy Portuguese sausage that the state's large Portuguese community often serves with peppers and eats with hearty bread. Rhode Island state symbols In popular culture The Farrelly brothers and Seth MacFarlane depict Rhode Island in popular culture, often making comedic parodies of the state. MacFarlane's television series Family Guy is based in a fictional Rhode Island city named Quahog, and notable local events and celebrities are regularly lampooned. Peter Griffin is seen working at the Pawtucket brewery, and other state locations are mentioned. The 1956 film High Society (starring Bing Crosby, Grace Kelly, and Frank Sinatra) was set in Newport, Rhode Island. The 1974 film adaptation of The Great Gatsby was also filmed in Newport. Jacqueline Bouvier and John F. Kennedy were married at St. Mary's church in Newport. Their reception took place at Hammersmith Farm, the Bouvier summer home in Newport. Cartoonist Don Bousquet, a state icon, has made a career out of Rhode Island culture, drawing Rhode Island-themed gags in The Providence Journal and Yankee magazine. These cartoons have been reprinted in the Quahog series of paperbacks (I Brake for Quahogs, Beware of the Quahog, and The Quahog Walks Among Us). Bousquet has also collaborated with humorist and Providence Journal columnist Mark Patinkin on two books: The Rhode Island Dictionary and The Rhode Island Handbook. The 1998 film Meet Joe Black was filmed at Aldrich Mansion in the Warwick Neck area of Warwick. Body of Proof's first season was filmed entirely in Rhode Island; the show premiered on March 29, 2011. The 2007 Steve Carell and Dane Cook film Dan in Real Life was filmed in various coastal towns in the state. The sunset scene with the entire family on the beach takes place at Napatree Point. Jersey Shore star Pauly D filmed part of his spin-off The Pauly D Project in his hometown of Johnston. The Comedy Central cable television series Another Period is set in Newport during the Gilded Age. Notable firsts in Rhode Island Rhode Island has been the first in a number of initiatives. The Colony of Rhode Island and Providence Plantations enacted the first law prohibiting slavery in America on May 18, 1652. The first act of armed rebellion in America against the British Crown was the boarding and burning of the revenue schooner Gaspee in Narragansett Bay on June 10, 1772. The idea of a Continental Congress was first proposed at a town meeting in Providence on May 17, 1774. Rhode Island elected the first delegates (Stephen Hopkins and Samuel Ward) to the Continental Congress on June 15, 1774.
The Rhode Island General Assembly created the first standing army in the colonies (1,500 men) on April 22, 1775. On June 15, 1775, the first naval engagement of the American Revolution took place between an American sloop commanded by Capt. Abraham Whipple and an armed tender of the British frigate Rose; the tender was chased aground and captured. Later in June, the General Assembly created the American Navy when it commissioned the sloops Katy and , armed with 24 guns and commanded by Abraham Whipple, who was promoted to Commodore. Rhode Island was the first colony to declare independence from Britain on May 4, 1776. Slater Mill in Pawtucket was the first commercially successful cotton-spinning mill with a fully mechanized power system in America and was the birthplace of the Industrial Revolution in the US. The oldest Fourth of July parade in the country is still held annually in Bristol, Rhode Island. The first Baptist church in America was founded in Providence in 1638. Ann Smith Franklin of the Newport Mercury was the first female newspaper editor in America (August 22, 1762). Touro Synagogue was the first synagogue in America, founded in Newport in 1763. Pelham Street in Newport was the first in America to be illuminated by gaslight in 1806. The first strike in the United States in which women participated occurred in Pawtucket in 1824. Watch Hill has the nation's oldest flying horses carousel, which has been in continuous operation since 1850. The motion picture machine was patented in Providence on April 23, 1867. The first lunch wagon in America was introduced in Providence in 1872. The first nine-hole golf course in America was completed in Newport in 1890. The first state health laboratory was established in Providence on September 1, 1894. The Rhode Island State House was the first building with an all-marble dome to be built in the United States (1895–1901). The first automobile race on a track was held in Cranston on September 7, 1896. The first automobile parade was held in Newport on September 7, 1899, on the grounds of Belcourt Castle. Miscellaneous local culture Rhode Island is nicknamed "The Ocean State", and the nautical nature of Rhode Island's geography pervades its culture. Newport Harbor, in particular, holds many pleasure boats. In the lobby of T. F. Green, the state's main airport, is a large life-sized sailboat, and the state's license plates depict an ocean wave or a sailboat. The large number of beaches in Washington County lures many Rhode Islanders south for summer vacation. The state constitution protects shore access, including swimming and the gathering of seaweed. The 1982 Rhode Island Supreme Court decision in State v. Ibbison defined the end of private land as the mean high tide line, which is difficult to determine in day-to-day activities, and this has resulted in beach access conflicts. Underfunding of the Rhode Island Coastal Resources Management Council has resulted in lax enforcement against encroachment on public access and the building of illegal structures. The state was notorious for organized crime activity from the 1950s into the 1990s, when the Patriarca crime family held sway over most of New England from its Providence headquarters. Rhode Islanders developed a unique style of architecture in the 17th century called the stone-ender. Rhode Island is the only state to still celebrate Victory over Japan Day, which is officially named "Victory Day" but is sometimes referred to as "VJ Day". It is celebrated on the second Monday in August.
Nibbles Woodaway, more commonly referred to as "The Big Blue Bug", is a 58-foot-long termite mascot for a Providence extermination business. Since its construction in 1980, it has been featured in several movies and television shows, and has come to be recognized as a cultural landmark by many locals. In more recent times, the Big Blue Bug has been given a mask to remind locals and visitors to mask-up during the COVID-19 pandemic. Sports Professional Rhode Island's only professional minor league team is the Providence Bruins ice hockey team of the American Hockey League, who are a top-level minor league affiliate of the Boston Bruins. They play in the Dunkin' Donuts Center in Providence and won the AHL's Calder Cup during the 1998–99 AHL season. The Pawtucket Red Sox baseball team was a Triple-A International League affiliate of the Boston Red Sox from 1973 to 2020. They played
his religious views, and he settled at the top of Narragansett Bay on land sold or given to him by the Narragansett sachem Canonicus. He named the site Providence, "having a sense of God's merciful providence unto me in my distress", and it became a place of religious freedom where all were welcome. In 1638 (after conferring with Williams), Anne Hutchinson, William Coddington, John Clarke, Philip Sherman, and other religious dissenters settled on Aquidneck Island (also known as Rhode Island), which was purchased from the local tribes, who called it Pocasset. This settlement was called Portsmouth and was governed by the Portsmouth Compact. The island's southern part became the separate settlement of Newport after disagreements among the founders. Samuel Gorton purchased lands at Shawomet in 1642 from the Narragansetts, precipitating a dispute with the Massachusetts Bay Colony. In 1644, Providence, Portsmouth, and Newport united for their common independence as the Colony of Rhode Island and Providence Plantations, governed by an elected council and "president". Gorton received a separate charter for his settlement in 1648, naming it Warwick after his patron. Metacomet was the Wampanoag tribe's war leader, whom the colonists called King Philip. His forces invaded and burned down several of the towns in the area during King Philip's War (1675–1676), including Providence, which was attacked twice. A force of Massachusetts, Connecticut, and Plymouth militia under General Josiah Winslow invaded and destroyed the fortified Narragansett Indian village in the Great Swamp in South Kingstown, Rhode Island, on December 19, 1675. In one of the final actions of the war, an Indian associated with Benjamin Church killed King Philip in Bristol, Rhode Island. The colony was amalgamated into the Dominion of New England in 1686, as King James II attempted to enforce royal authority over the autonomous colonies in British North America, but the colony regained its independence under the Royal Charter after the Glorious Revolution of 1688. Slaves were introduced in Rhode Island at this time, although there is no record of any law legalizing slave-holding. The colony later prospered under the slave trade, distilling rum to sell in Africa as part of a profitable triangular trade in slaves and sugar with the Caribbean. Rhode Island's legislative body passed an act in 1652 abolishing the holding of slaves, making it the first British colony to do so, but this edict was never enforced, and Rhode Island continued to be heavily involved in the slave trade during the post-revolution era. In 1774, the slave population of Rhode Island was 6.3% of the total population (nearly twice the proportion of other New England colonies). Brown University was founded in 1764 as the College in the British Colony of Rhode Island and Providence Plantations. It was one of nine Colonial colleges granted charters before the American Revolution, but was the first college in America to accept students regardless of religious affiliation. Revolutionary to Civil War period: 1770–1860 Rhode Island's tradition of independence and dissent gave it a prominent role in the American Revolution. At approximately 2 a.m. on June 10, 1772, a band of Providence residents attacked the grounded revenue schooner HMS Gaspee, burning it to the waterline in retaliation for its enforcement of unpopular trade regulations within Narragansett Bay. Rhode Island was the first of the thirteen colonies to renounce its allegiance to the British Crown, on May 4, 1776.
It was also the last of the thirteen colonies to ratify the United States Constitution, on May 29, 1790, doing so only under threat of heavy trade tariffs from the other former colonies and after assurances were made that a Bill of Rights would become part of the Constitution. During the Revolution, the British occupied Newport in December 1776. A combined Franco-American force fought to drive them off Aquidneck Island. Portsmouth was the site where the first African-American military unit, the 1st Rhode Island Regiment, fought for the U.S., in the unsuccessful Battle of Rhode Island of August 29, 1778. A month earlier, the appearance of a French fleet off Newport had caused the British to scuttle some of their own ships in an attempt to block the harbor. The British abandoned Newport in October 1779, concentrating their forces in New York City. An expedition of 5,500 French troops under Count Rochambeau arrived in Newport by sea on July 10, 1780. The celebrated march to Yorktown, Virginia, in 1781 ended with the defeat of the British at the Siege of Yorktown and the Battle of the Chesapeake. Rhode Island was also heavily involved in the Industrial Revolution, which began in America in 1787 when Thomas Somers reproduced textile machine plans that he had imported from England. He helped to produce the Beverly Cotton Manufactory, in which Moses Brown of Providence took an interest. Moses Brown teamed up with Samuel Slater and helped to create the second cotton mill in America, a water-powered textile mill. The Industrial Revolution moved large numbers of workers into the cities, creating a permanently landless class who were, therefore, by the law of the time, also voteless. By 1829, 60% of the state's free white males were ineligible to vote. Several unsuccessful attempts were made to address this problem, and a new state constitution was passed in 1843 allowing landless men to vote if they could pay a $1 poll tax. For the first several decades of statehood, Rhode Island was governed in accordance with the 1663 colonial charter. Voting rights were restricted to landowners holding at least $134 in property, disenfranchising well over half of the state's male citizens. The charter apportioned legislative seats equally among the state's towns, over-representing rural areas and under-representing the growing industrial centers. Additionally, the charter barred landless citizens from filing civil suits without endorsement from a landowner. Bills were periodically introduced in the legislature to expand suffrage, but they were invariably defeated. In 1841, activists led by Thomas W. Dorr organized an extralegal convention to draft a state constitution, arguing that the charter government violated the Guarantee Clause in Article Four, Section Four of the United States Constitution. In 1842, the charter government and Dorr's supporters held separate elections, and two rival governments claimed sovereignty over the state. Dorr's supporters led an armed rebellion against the charter government, and Dorr was arrested and imprisoned for treason against the state. Later that year, the legislature drafted a state constitution, removing property requirements for American-born citizens but keeping them in place for immigrants, and retaining urban under-representation in the legislature. In the early 19th century, Rhode Island was subject to a tuberculosis outbreak, which led to public hysteria about vampirism.
Civil War During the American Civil War, Rhode Island was the first Union state to send troops in response to President Lincoln's request for help from the states. Rhode Island furnished 25,236 fighting men, of whom 1,685 died. On the home front, Rhode Island and the other northern states used their industrial capacity to supply the Union Army with the materials it needed to win the war. The United States Naval Academy moved to Rhode Island temporarily during the war. In 1866, Rhode Island abolished racial segregation in the public schools throughout the state. Gilded Age The 50 years following the Civil War were a time of prosperity and affluence that author William G. McLoughlin calls "Rhode Island's halcyon era". Rhode Island was a center of the Gilded Age and provided a home or summer home to many of the country's most prominent industrialists. This was a time of growth in textile mills and manufacturing and brought an influx of immigrants to fill those jobs, bringing population growth and urbanization. In Newport, New York's wealthiest industrialists created a summer haven where they could socialize and build grand mansions. Thousands of French-Canadian, Italian, Irish, and Portuguese immigrants arrived to fill jobs in the textile and manufacturing mills in Providence, Pawtucket, Central Falls, and Woonsocket. World War I During World War I, Rhode Island furnished 28,817 soldiers, of whom 612 died. After the war, the state was hit hard by the Spanish influenza. In the 1920s and 1930s, rural Rhode Island saw a surge in Ku Klux Klan membership, largely in reaction to the large waves of immigrants moving to the state. The Klan is believed to be responsible for burning the Watchman Industrial School in Scituate, which was a school for African-American children. Growth in the modern era: 1929–present Since the Great Depression, the Rhode Island Democratic Party has dominated local politics. Rhode Island has comprehensive health insurance for low-income children and a large social safety net. Many urban areas still have a high rate of children in poverty. Due to an influx of residents from Boston, increasing housing costs have resulted in more homelessness in Rhode Island. The 350th anniversary of the founding of Rhode Island was celebrated with a free concert held on the tarmac of the Quonset State Airport on August 31, 1986. Performers included Chuck Berry, Tommy James, and headliner Bob Hope. In 2003, a nightclub fire in West Warwick claimed 100 lives, injured nearly twice as many, and attracted national attention; it also resulted in criminal sentences. In March 2010, areas of the state received record flooding due to rising rivers from heavy rain. The first period of rainy weather in mid-March caused localized flooding, and two weeks later, more rain caused more widespread flooding in many towns, especially south of Providence. Rain totals on March 29–30, 2010, exceeded 14 inches (35.5 cm) in many locales, resulting in the inundation of area rivers, especially the Pawtuxet River, which runs through central Rhode Island. The overflow of the Pawtuxet River, nearly above flood stage, submerged a sewage treatment plant and closed a five-mile (8 km) stretch of Interstate 95. In addition, it flooded two shopping malls, numerous businesses, and many homes in the towns of Warwick, West Warwick, Cranston, and Westerly. Amtrak service was also suspended between New York and Boston during this period. Following the flood, Rhode Island was in a state of emergency for two days.
The Federal Emergency Management Agency (FEMA) was called in to help flood victims. Geography Rhode Island covers an area of within the New England region of the Northeastern United States and is bordered on the north and east by Massachusetts, on the west by Connecticut, and on the south by Rhode Island Sound and the Atlantic Ocean. It shares a narrow maritime border with New York State between Block Island and Long Island. The state's mean elevation is . It is only wide and long, yet the state has a tidal shoreline on Narragansett Bay and the Atlantic Ocean of . Rhode Island is nicknamed the Ocean State and has a number of oceanfront beaches. It is mostly flat with no real mountains, and the state's highest natural point is Jerimoth Hill, above sea level. The state has two distinct natural regions. Eastern Rhode Island contains the lowlands of the Narragansett Bay, while Western Rhode Island forms part of the New England upland. Rhode Island's forests are part of the Northeastern coastal forests ecoregion. Narragansett Bay is a major feature of the state's topography. There are more than 30 islands within the bay; the largest is Aquidneck Island, which holds the municipalities of Newport, Middletown, and Portsmouth. The second-largest island is Conanicut, and the third is Prudence. Block Island lies about off the southern coast of the mainland and separates Block Island Sound and the Atlantic Ocean proper. A rare type of rock called Cumberlandite is found only in Rhode Island (specifically, in the town of Cumberland) and is the state rock. There were initially two known deposits of the mineral, but it is an ore of iron, and one of the deposits was extensively mined for its ferrous content. Climate Most of Rhode Island has a humid continental climate, with warm summers and cold winters. The state's southern coastal portions are the broad transition zone into subtropical climates, with hot summers and cool winters with a mix of rain and snow. Block Island has an oceanic climate. The highest temperature recorded in Rhode Island was , recorded on August 2, 1975, in Providence. The lowest recorded temperature in Rhode Island was on February 5, 1996, in Greene. Monthly average temperatures range from a high of to a low of . Rhode Island is vulnerable to tropical storms and hurricanes due to its location in New England, catching the brunt of many storms that blow up the eastern seaboard. Hurricanes that have done significant damage in the state include the 1938 New England hurricane, Hurricane Carol (1954), Hurricane Donna (1960), and Hurricane Bob (1991). Cities and towns Rhode Island is divided into five counties but it has no county governments. The entire state is divided into municipalities, which handle all local government affairs. There are 39 cities and towns in Rhode Island. Major population centers today result from historical factors; development took place predominantly along the Blackstone, Seekonk, and Providence Rivers with the advent of the water-powered mill. Providence is the base of a large metropolitan area. 
The state's 19 largest municipalities, ranked by population, are: Providence (190,934), Cranston (82,934), Warwick (82,823), Pawtucket (75,604), East Providence (47,139), Woonsocket (43,240), Cumberland (36,405), Coventry (35,688), North Providence (34,114), South Kingstown (31,931), West Warwick (31,012), Johnston (29,568), North Kingstown (27,732), Newport (25,163), Westerly (23,359), Central Falls (22,583), Lincoln (22,529), Bristol (22,493), and Smithfield (22,118). Some of Rhode Island's cities and towns are further partitioned into villages, in common with many other New England states. Notable villages include Kingston in the town of South Kingstown, which houses the University of Rhode Island; Wickford in the town of North Kingstown, the site of an annual international art festival; and Wakefield, which houses the town hall of South Kingstown. Landmarks The state capitol building is made of white Georgian marble. On top is the world's fourth-largest self-supported marble dome. It houses the Rhode Island Charter granted by King Charles II in 1663, the Brown University charter, and other state treasures. The First Baptist Church of Providence is the oldest Baptist church in the Americas, founded by Roger Williams in 1638. The first fully automated post office in the country is in Providence. There are many historic mansions in the seaside city of Newport, including The Breakers, Marble House, and Belcourt Castle. Also there is the Touro Synagogue, dedicated on December 2, 1763, considered by locals to be the first synagogue within the United States, and still serving. The synagogue showcases the religious freedoms established by Roger Williams, as well as impressive architecture in a mix of the classic colonial and Sephardic style. The Newport Casino is a National Historic Landmark building complex that houses the International Tennis Hall of Fame and features an active grass-court tennis club. Scenic Route 1A (known locally as Ocean Road) is in Narragansett. "The Towers", featuring a large stone arch, is also in Narragansett; it was once the entrance to a famous Narragansett casino that burned down in 1900. The Towers now serve as an event venue and host the local Chamber of Commerce, which operates a tourist information center. The Newport Tower has been hypothesized to be of Viking origin, although most experts believe it was a Colonial-era windmill. Environmental legislation On May 29, 2014, Governor Lincoln D. Chafee announced that Rhode Island was one of eight states to release a collaborative Action Plan to put 3.3 million zero-emission vehicles on its roads by 2025. The plan's purpose is to reduce greenhouse gas and smog-causing emissions.
Some music historians have also pointed to important and innovative developments that built on rock and roll in this period, including multitrack recording, developed by Les Paul; the electronic treatment of sound by such innovators as Joe Meek; the "Wall of Sound" productions of Phil Spector; the continued desegregation of the charts; the rise of surf music and garage rock; and the Twist dance craze. Surf rock in particular, noted for the use of reverb-drenched guitars, became one of the most popular forms of American rock of the 1960s. British rock and roll In the 1950s, Britain was well placed to receive American rock and roll music and culture. It shared a common language, had been exposed to American culture through the stationing of troops in the country, and shared many social developments, including the emergence of distinct youth sub-cultures, which in Britain included the Teddy Boys and the rockers. Trad jazz became popular in the UK, and many of its musicians were influenced by related American styles, including boogie woogie and the blues. The skiffle craze, led by Lonnie Donegan, utilised amateurish versions of American folk songs and encouraged many of the subsequent generation of rock and roll, folk, R&B and beat musicians to start performing. At the same time, British audiences were beginning to encounter American rock and roll, initially through films including Blackboard Jungle (1955) and Rock Around the Clock (1956). Both films featured the Bill Haley & His Comets hit "Rock Around the Clock", which first entered the British charts in early 1955 – four months before it reached the US pop charts – topped the British charts later that year and again in 1956, and helped identify rock and roll with teenage delinquency. The initial response of the British music industry was to attempt to produce copies of American records, recorded with session musicians and often fronted by teen idols. More grassroots British rock and rollers soon began to appear, including Wee Willie Harris and Tommy Steele. During this period American rock and roll remained dominant; however, in 1958 Britain produced its first "authentic" rock and roll song and star when Cliff Richard reached number 2 in the charts with "Move It". At the same time, TV shows such as Six-Five Special and Oh Boy! promoted the careers of British rock and rollers like Marty Wilde and Adam Faith. Cliff Richard and his backing band, the Shadows, were the most successful home-grown rock and roll-based acts of the era. Other leading acts included Billy Fury, Joe Brown, and Johnny Kidd & the Pirates, whose 1960 hit song "Shakin' All Over" became a rock and roll standard. As interest in rock and roll was beginning to subside in America in the late 1950s and early 1960s, it was taken up by groups in major British urban centers like Liverpool, Manchester, Birmingham, and London. About the same time, a British blues scene developed, initially led by purist blues followers such as Alexis Korner and Cyril Davies, who were directly inspired by American musicians such as Robert Johnson, Muddy Waters and Howlin' Wolf. Many groups moved towards the beat music of rock and roll and rhythm and blues from skiffle, like the Quarrymen, who became the Beatles, producing a form of rock and roll revivalism that carried them and many other groups to national success from about 1963 and to international success from 1964, known in America as the British Invasion.
Groups that followed the Beatles included the beat-influenced Freddie and the Dreamers, Wayne Fontana and the Mindbenders, Herman's Hermits and the Dave Clark Five. Early British rhythm and blues groups with more blues influences include the Animals, the Rolling Stones, and the Yardbirds. Cultural impact Rock and roll influenced lifestyles, fashion, attitudes, and language. In addition, rock and roll may have contributed to the civil rights movement because both African-American and white American teens enjoyed the music. Many early rock and roll songs dealt with issues of cars, school, dating, and clothing. The lyrics of rock and roll songs described events and conflicts that most listeners could relate to through personal experience. Topics such as sex that had generally been considered taboo began to appear in rock and roll lyrics. This new music tried to break boundaries and express emotions that people were actually feeling but had not talked about. An awakening began to take place in American youth culture. Race In the crossover of African-American "race music" to a growing white youth audience, the popularization of rock and roll involved both black performers reaching a white audience and white musicians performing African-American music. Rock and roll appeared at a time when racial tensions in the United States were entering a new phase, with the beginnings of the civil rights movement for desegregation, leading to the 1954 U.S. Supreme Court ruling that abolished the policy of "separate but equal", a ruling that would prove extremely difficult to enforce in parts of the United States. The coming together of white youth audiences and black music in rock and roll inevitably provoked strong white racist reactions within the US, with many whites condemning its breaking down of barriers based on color. Many observers saw rock and roll as heralding the way for desegregation, in creating a new form of music that encouraged racial cooperation and shared experience. Many authors have argued that early rock and roll was instrumental in the way both white and black teenagers identified themselves. Teen culture Several rock historians have claimed that rock and roll was one of the first music genres to define an age group. It gave teenagers a sense of belonging, even when they were alone. Rock and roll is often identified with the emergence of teen culture among the first baby boomer generation, who had greater relative affluence and leisure time and adopted rock and roll as part of a distinct subculture. This involved not just music, absorbed via radio, record buying, jukeboxes and TV programs like American Bandstand, but also film, clothes, hair, cars and motorbikes, and distinctive language. The youth culture exemplified by rock and roll was a recurring source of concern for older generations, who worried about juvenile delinquency and social rebellion, particularly because to a large extent rock and roll culture was shared by different racial and social groups. In America, that concern was conveyed even in youth cultural artifacts such as comic books. In "There's No Romance in Rock and Roll" from True Life Romance (1956), a defiant teen dates a rock and roll-loving boy but drops him for one who likes traditional adult music, to her parents' relief. In Britain, where postwar prosperity was more limited, rock and roll culture became attached to the pre-existing Teddy Boy movement, largely working class in origin, and eventually to the rockers.
Rock and roll has been seen as reorienting popular music toward a youth market, as in Dion and the Belmonts' "A Teenager in Love" (1959). Dance styles From its early 1950s beginnings through the early 1960s, rock and roll spawned new dance crazes including the twist. Teenagers found the syncopated backbeat rhythm especially suited to reviving Big Band-era jitterbug dancing. Sock hops, school and church gym dances, and home basement dance parties became the rage, and American teens watched Dick Clark's American Bandstand to keep up on the latest dance and fashion styles. From the mid-1960s on, as "rock and roll" was rebranded as "rock," later dance genres followed, leading to funk, disco, house, techno, and hip hop.
neglects the black guitarists who did the same thing before Berry, such as Goree Carter, Gatemouth Brown, and the originator of the style, T-Bone Walker. Country boogie and Chicago electric blues supplied many of the elements that would be seen as characteristic of rock and roll. Inspired by electric blues, Chuck Berry introduced an aggressive guitar sound to rock and roll, and established the electric guitar as its centerpiece, adapting his rock band instrumentation from the basic blues band instrumentation of a lead guitar, second chord instrument, bass and drums. In 2017, Robert Christgau declared that "Chuck Berry did in fact invent rock 'n' roll", explaining that this artist "came the closest of any single figure to being the one who put all the essential pieces together". Rock and roll arrived at a time of considerable technological change, soon after the development of the electric guitar, amplifier and microphone, and the 45 rpm record. There were also changes in the record industry, with the rise of independent labels like Atlantic, Sun and Chess servicing niche audiences, and a similar rise of radio stations that played their music. It was the realization that relatively affluent white teenagers were listening to this music that led to the development of what was to be defined as rock and roll as a distinct genre. Because the development of rock and roll was an evolutionary process, no single record can be identified as unambiguously "the first" rock and roll record. Contenders for the title of "first rock and roll record" include Sister Rosetta Tharpe's "Strange Things Happening Every Day" (1944), "That's All Right" by Arthur Crudup (1946), "Move It On Over" by Hank Williams (1947), "The Fat Man" by Fats Domino (1949), Goree Carter's "Rock Awhile" (1949), Jimmy Preston's "Rock the Joint" (1949), which was later covered by Bill Haley & His Comets in 1952, and "Rocket 88" by Jackie Brenston and his Delta Cats (Ike Turner and his band The Kings of Rhythm), recorded by Sam Phillips for Sun Records in March 1951. In terms of its wide cultural impact across society in the US and elsewhere, Bill Haley's "Rock Around the Clock", recorded in April 1954 but not a commercial success until the following year, is generally recognized as an important milestone, but it was preceded by many recordings from earlier decades in which elements of rock and roll can be clearly discerned. Other artists with early rock and roll hits included Chuck Berry, Bo Diddley, Little Richard, Jerry Lee Lewis, and Gene Vincent. Chuck Berry's 1955 classic "Maybellene" in particular features a distorted electric guitar solo with warm overtones created by his small valve amplifier. However, the use of distortion was predated by electric blues guitarists such as Joe Hill Louis, Guitar Slim, Willie Johnson of Howlin' Wolf's band, and Pat Hare; the latter two also made use of distorted power chords in the early 1950s. Also in 1955, Bo Diddley introduced the "Bo Diddley beat" and a unique electric guitar style, influenced by African and Afro-Cuban music and in turn influencing many later artists. Rhythm and blues Rock and roll was strongly influenced by R&B, according to many sources, including an article in the Wall Street Journal in 1985 titled "Rock! It's Still Rhythm and Blues". In fact, the author stated that the "two terms were used interchangeably" until about 1957. The other sources quoted in the article said that rock and roll combined R&B with pop and country music.
Fats Domino was one of the biggest stars of rock and roll in the early 1950s, and he was not convinced that this was a new genre. In 1957, he said: "What they call rock 'n' roll now is rhythm and blues. I've been playing it for 15 years in New Orleans". According to Rolling Stone, "this is a valid statement ... all Fifties rockers, black and white, country born and city-bred, were fundamentally influenced by R&B, the black popular music of the late Forties and early Fifties". Further, Little Richard built his ground-breaking sound of the same era with an uptempo blend of boogie-woogie, New Orleans rhythm and blues, and the soul and fervor of gospel music vocalization. Rockabilly "Rockabilly" usually (but not exclusively) refers to the type of rock and roll music which was played and recorded in the mid-1950s primarily by white singers such as Elvis Presley, Carl Perkins, Johnny Cash, and Jerry Lee Lewis, who drew mainly on the country roots of the music. Presley was greatly influenced by African-American musicians such as B.B. King, Arthur Crudup, and Fats Domino, and incorporated elements of their styles into his own music. This blending of black musical influences into his style created controversy during a turbulent time in American history. Many other popular rock and roll singers of the time, such as Fats Domino and Little Richard, came out of the black rhythm and blues tradition, making the music attractive to white audiences, and are not usually classed as "rockabilly". Presley popularized rock and roll on a wider scale than any other single performer, and by 1956 he had emerged as the singing sensation of the nation. Bill Flagg, a Connecticut resident, began referring to his mix of hillbilly and rock 'n' roll music as rockabilly around 1953. In July 1954, Presley recorded the regional hit "That's All Right" at Sam Phillips' Sun Studio in Memphis. Three months earlier, on April 12, 1954, Bill Haley & His Comets recorded "Rock Around the Clock". Although only a minor hit when first released, the song set the rock and roll boom in motion when it was used in the opening sequence of the film Blackboard Jungle a year later. It became one of the biggest hits in history, and frenzied teens flocked to see Haley and the Comets perform it, causing riots in some cities. "Rock Around the Clock" was a breakthrough for both the group and for all of rock and roll music. If everything that came before laid the groundwork, "Rock Around the Clock" introduced the music to a global audience. In 1956, the arrival of rockabilly was underlined by the success of songs like "Folsom Prison Blues" by Johnny Cash, "Blue Suede Shoes" by Perkins and the No. 1 hit "Heartbreak Hotel" by Presley. For a few years it became the most commercially successful form of rock and roll. Later rockabilly acts, particularly performing songwriters like Buddy Holly, would be a major influence on British Invasion acts, especially on the songwriting of the Beatles, and through them on the nature of later rock music. Doo wop Doo-wop was one of the most popular forms of 1950s rhythm and blues, often compared with rock and roll, with an emphasis on multi-part vocal harmonies and meaningless backing lyrics (from which the genre later gained its name), which were usually supported with light instrumentation. Its origins were in African-American vocal groups of the 1930s and 40s, such as the Ink Spots and the Mills Brothers, who had enjoyed considerable commercial success with arrangements based on close harmonies.
They were followed by 1940s R&B vocal acts such as the Orioles, the Ravens and the Clovers, who injected a strong element of traditional gospel and, increasingly, the energy of jump blues. By 1954, as rock and roll was beginning to emerge, a number of similar acts began to cross over from the R&B charts to mainstream success, often with added honking brass and saxophone, with the Crows, the Penguins, the El Dorados and the Turbans all scoring major hits. Despite the subsequent explosion in records from doo-wop acts in the later '50s, many failed to chart or were one-hit wonders. Exceptions included the Platters, with songs including "The Great Pretender" (1955), and the Coasters, with humorous songs like "Yakety Yak" (1958), both of which ranked among the most successful rock and roll acts of the era. Towards the end of the decade there were increasing numbers of white, particularly Italian-American, singers taking up doo-wop, creating
Social constructionism One modern academic theory of religion, social constructionism, says that religion is a modern concept that suggests all spiritual practice and worship follows a model similar to the Abrahamic religions as an orientation system that helps to interpret reality and define human beings. Among the main proponents of this theory of religion are Daniel Dubuisson, Timothy Fitzgerald, Talal Asad, and Jason Ānanda Josephson. The social constructionists argue that religion is a modern concept that developed from Christianity and was then applied inappropriately to non-Western cultures. Cognitive science Cognitive science of religion is the study of religious thought and behavior from the perspective of the cognitive and evolutionary sciences. The field employs methods and theories from a very broad range of disciplines, including cognitive psychology, evolutionary psychology, cognitive anthropology, artificial intelligence, cognitive neuroscience, neurobiology, zoology, and ethology. Scholars in this field seek to explain how human minds acquire, generate, and transmit religious thoughts, practices, and schemas by means of ordinary cognitive capacities. Hallucinations and delusions related to religious content occur in about 60% of people with schizophrenia. While this number varies across cultures, it has led to theories about a number of influential religious phenomena and their possible relation to psychotic disorders. A number of prophetic experiences are consistent with psychotic symptoms, although retrospective diagnoses are practically impossible. Schizophrenic episodes are also experienced by people who do not have belief in gods. Religious content is also common in temporal lobe epilepsy and obsessive-compulsive disorder. Atheistic content is also found to be common with temporal lobe epilepsy. Comparativism Comparative religion is the branch of the study of religions concerned with the systematic comparison of the doctrines and practices of the world's religions. In general, the comparative study of religion yields a deeper understanding of the fundamental philosophical concerns of religion such as ethics, metaphysics, and the nature and form of salvation. Studying such material is meant to give one a richer and more sophisticated understanding of human beliefs and practices regarding the sacred, numinous, spiritual and divine. In the field of comparative religion, a common geographical classification of the main world religions includes Middle Eastern religions (including Zoroastrianism and Iranian religions), Indian religions, East Asian religions, African religions, American religions, Oceanic religions, and classical Hellenistic religions. Classification In the 19th and 20th centuries, the academic practice of comparative religion divided religious belief into philosophically defined categories called world religions. Some academics studying the subject have divided religions into three broad categories: world religions, a term which refers to transcultural, international religions; indigenous religions, which refers to smaller, culture-specific or nation-specific religious groups; and new religious movements, which refers to recently developed religions.
Some recent scholarship has argued that not all types of religion are necessarily separated by mutually exclusive philosophies, and furthermore that the utility of ascribing a practice to a certain philosophy, or even calling a given practice religious, rather than cultural, political, or social in nature, is limited. The current state of psychological study about the nature of religiousness suggests that it is better to refer to religion as a largely invariant phenomenon that should be distinguished from cultural norms (i.e. religions). Morphological classification Some scholars classify religions as either universal religions, which seek worldwide acceptance and actively look for new converts, such as Christianity, Islam, Buddhism and Jainism, or ethnic religions, which are identified with a particular ethnic group and do not seek converts. Others reject the distinction, pointing out that all religious practices, whatever their philosophical origin, are ethnic because they come from a particular culture. Demographical classification The five largest religious groups by world population, estimated to account for 5.8 billion people and 84% of the population, are Christianity, Islam, Buddhism, Hinduism (with the relative numbers for Buddhism and Hinduism dependent on the extent of syncretism) and traditional folk religion. A global poll in 2012 surveyed 57 countries and reported that 59% of the world's population identified as religious, 23% as not religious, 13% as convinced atheists, and also a 9% decrease in identification as religious when compared to the 2005 average from 39 countries. A follow-up poll in 2015 found that 63% of the globe identified as religious, 22% as not religious, and 11% as convinced atheists. On average, women are more religious than men. Some people follow multiple religions or multiple religious principles at the same time, regardless of whether or not the religious principles they follow traditionally allow for syncretism. A 2017 Pew projection suggests that Islam will overtake Christianity as the plurality religion by 2075. Unaffiliated populations are projected to drop, even when taking disaffiliation rates into account, due to differences in birth rates. Specific religions Abrahamic Abrahamic religions are monotheistic religions which believe they descend from Abraham. Judaism Judaism is the oldest Abrahamic religion, originating in the people of ancient Israel and Judea. The Torah is its foundational text, and is part of the larger text known as the Tanakh or Hebrew Bible. It is supplemented by oral tradition, set down in written form in later texts such as the Midrash and the Talmud. Judaism includes a wide corpus of texts, practices, theological positions, and forms of organization. Within Judaism there are a variety of movements, most of which emerged from Rabbinic Judaism, which holds that God revealed his laws and commandments to Moses on Mount Sinai in the form of both the Written and Oral Torah; historically, this assertion was challenged by various groups. The Jewish people were scattered after the destruction of the Temple in Jerusalem in 70 CE. Today there are about 13 million Jews, about 40 per cent living in Israel and 40 per cent in the United States. The largest Jewish religious movements are Orthodox Judaism (Haredi Judaism and Modern Orthodox Judaism), Conservative Judaism and Reform Judaism. Christianity Christianity is based on the life and teachings of Jesus of Nazareth (1st century) as presented in the New Testament.
The Christian faith is essentially faith in Jesus as the Christ, the Son of God, and as Savior and Lord. Almost all Christians believe in the Trinity, which teaches the unity of Father, Son (Jesus Christ), and Holy Spirit as three persons in one Godhead. Most Christians can describe their faith with the Nicene Creed. As the religion of the Byzantine Empire in the first millennium and of Western Europe during the time of colonization, Christianity has been propagated throughout the world via missionary work. It is the world's largest religion, with about 2.3 billion followers as of 2015. The main divisions of Christianity are, according to the number of adherents: The Catholic Church, led by the Bishop of Rome and the bishops worldwide in communion with him, is a communion of 24 Churches sui iuris, including the Latin Church and 23 Eastern Catholic churches, such as the Maronite Catholic Church. Eastern Christianity, which includes Eastern Orthodoxy, Oriental Orthodoxy, and the Church of the East. Protestantism, which separated from the Catholic Church in the 16th-century Protestant Reformation and is split into thousands of denominations. Major branches of Protestantism include Anglicanism, Baptists, Calvinism, Lutheranism, and Methodism, though each of these contains many different denominations or groups. There are also smaller groups, including: Restorationism, the belief that Christianity should be restored (as opposed to reformed) along the lines of what is known about the apostolic early church. Latter-day Saint movement, founded by Joseph Smith in the late 1820s. Jehovah's Witnesses, founded in the late 1870s by Charles Taze Russell. Islam Islam is a monotheistic religion based on the Quran, one of the holy books considered by Muslims to be revealed by God, and on the teachings (hadith) of the Islamic prophet Muhammad, a major political and religious figure of the 7th century CE. Islam accepts all of the Abrahamic prophets of Judaism, Christianity and other Abrahamic traditions before Muhammad. It is the most widely practiced religion of Southeast Asia, North Africa, Western Asia, and Central Asia, while Muslim-majority countries also exist in parts of South Asia, Sub-Saharan Africa, and Southeast Europe. There are also several Islamic republics, including Iran, Pakistan, Mauritania, and Afghanistan. Sunni Islam is the largest denomination within Islam and follows the Qur'an and the ahadith (plural of hadith), which record the sunnah, whilst placing emphasis on the sahabah. Shia Islam is the second-largest denomination of Islam; its adherents believe that Ali succeeded Muhammad, and the tradition places further emphasis on Muhammad's family. There are also Muslim revivalist movements such as Muwahhidism and Salafism. Other denominations of Islam include Nation of Islam, Ibadi, Sufism, Quranism, Mahdavia, and non-denominational Muslims. Wahhabism is the dominant Muslim school of thought in the Kingdom of Saudi Arabia. Other Whilst Judaism, Christianity and Islam are commonly seen as the only three Abrahamic faiths, there are smaller and newer traditions which lay claim to the designation as well. For example, the Baháʼí Faith is a new religious movement that has links to the major Abrahamic religions as well as other religions (e.g. of Eastern philosophy).
Founded in 19th-century Iran, it teaches the unity of all religious philosophies and accepts all of the prophets of Judaism, Christianity, and Islam as well as additional prophets (the Buddha, Mahavira), including its founder, Bahá'u'lláh. It is an offshoot of Bábism. One of its divisions is the Orthodox Baháʼí Faith. Even smaller regional Abrahamic groups also exist, including Samaritanism (primarily in Israel and the West Bank), the Rastafari movement (primarily in Jamaica), and Druze (primarily in Syria, Lebanon, and Israel). The Druze faith originally developed out of Isma'ilism, and it has sometimes been considered an Islamic school by some Islamic authorities, but Druze themselves do not identify as Muslims. Mandaeism, also known as Sabianism, is a Gnostic, monotheistic and ethnic religion. Its adherents, the Mandaeans, consider John the Baptist to be their chief prophet. Mandaeans are the last surviving Gnostics from antiquity. East Asian East Asian religions (also known as Far Eastern religions or Taoic religions) consist of several religions of East Asia which make use of the concept of Tao (in Chinese), Dō (in Japanese or Korean) or Đạo (in Vietnamese). They include: Taoism and Confucianism Taoism and Confucianism, as well as Korean, Vietnamese, and Japanese religion influenced by Chinese thought. Folk religions Chinese folk religion: the indigenous religions of the Han Chinese, or, by metonymy, of all the populations of the Chinese cultural sphere. It includes the syncretism of Confucianism, Taoism and Buddhism, Wuism, as well as many new religious movements such as Chen Tao, Falun Gong and Yiguandao. Other folk and new religions of East Asia and Southeast Asia, such as Korean shamanism, Chondogyo, and Jeung San Do in Korea; indigenous Philippine folk religions in the Philippines; Shinto, Shugendo, Ryukyuan religion, and Japanese new religions in Japan; Satsana Phi in Laos; Cao Đài, Hòa Hảo, and Vietnamese folk religion in Vietnam. Indian religions Indian religions are practiced or were founded in the Indian subcontinent. They are sometimes classified as the dharmic religions, as they all feature dharma, the specific law of reality and duties expected according to the religion. Hinduism Hinduism is also called Vaidika Dharma, the dharma of the Vedas. It is a synecdoche describing the similar philosophies of Vaishnavism, Shaivism, and related groups practiced or founded in the Indian subcontinent. Concepts most of them share include karma, caste, reincarnation, mantras, yantras, and darśana. Hinduism is one of the most ancient still-active religions, with origins perhaps as far back as prehistoric times. Hinduism is not a monolithic religion but a religious category containing dozens of separate philosophies amalgamated as Sanātana Dharma, which is the name by which Hinduism has been known throughout history by its followers. Jainism Jainism, taught primarily by Rishabhanatha (the founder of ahimsa), is an ancient Indian religion that prescribes a path of non-violence, truth and anekantavada for all forms of living beings in this universe, which helps them to eliminate all the karmas, and hence to attain freedom from the cycle of birth and death (saṃsāra), that is, to achieve nirvana. Jains are found mostly in India. According to Dundas, outside of the Jain tradition, historians date Mahavira as roughly contemporaneous with the Buddha in the 5th century BCE, and accordingly the historical Parshvanatha, based on the c. 250-year gap, is placed in the 8th or 7th century BCE.
Digambara Jainism (or sky-clad) is mainly practiced in South India. Their holy books are the Pravachanasara and the Samayasara, written by their prophets Kundakunda and Amritchandra, as their original canon is lost. Shwetambara Jainism (or white-clad) is mainly practiced in Western India. Their holy books are the Jain Agamas, written by their prophet Sthulibhadra. Buddhism Buddhism was founded by Siddhartha Gautama in the 5th century BCE. Buddhists generally agree that Gautama aimed to help sentient beings end their suffering (dukkha) by understanding the true nature of phenomena, thereby escaping the cycle of suffering and rebirth (saṃsāra), that is, achieving nirvana. Theravada Buddhism, which is practiced mainly in Sri Lanka and Southeast Asia alongside folk religion, shares some characteristics of Indian religions. It is based on a large collection of texts called the Pali Canon. Mahayana Buddhism (or the Great Vehicle), under which are a multitude of doctrines that became prominent in China and are still relevant in Vietnam, Korea, Japan and, to a lesser extent, in Europe and the United States. Mahayana Buddhism includes such disparate teachings as Zen, Pure Land, and Soka Gakkai. Vajrayana Buddhism first appeared in India in the 3rd century CE. It is currently most prominent in the Himalaya regions and extends across all of Asia (cf. Mikkyō). Two notable new Buddhist sects are Hòa Hảo and the Navayana (Dalit Buddhist movement), which were developed separately in the 20th century. Sikhism Sikhism is a panentheistic religion founded on the teachings of Guru Nanak and ten successive Sikh gurus in 15th-century Punjab. It is the fifth-largest organized religion in the world, with approximately 30 million Sikhs. Sikhs are expected to embody the qualities of a Sant-Sipāhī (a saint-soldier), to have control over their internal vices and to be constantly immersed in virtues clarified in the Guru Granth Sahib. The principal beliefs of Sikhi are faith in Waheguru – represented by the phrase ik ōaṅkār, meaning one God, who prevails in everything – along with a praxis in which the Sikh is enjoined to engage in social reform through the pursuit of justice for all human beings. Indigenous and folk Indigenous religions or folk religions refer to a broad category of traditional religions that can be characterised by shamanism, animism and ancestor worship, where traditional means "indigenous, that which is aboriginal or foundational, handed down from generation to generation…". These are religions that are closely associated with a particular group of people, ethnicity or tribe; they often have no formal creeds or sacred texts. Some faiths are syncretic, fusing diverse religious beliefs and practices. Australian Aboriginal religions. Folk religions of the Americas: Native American religions. Folk religions are often omitted as a category in surveys even in countries where they are widely practiced, e.g. in China. Traditional African African traditional religion encompasses the traditional religious beliefs of people in Africa. In West Africa, these religions include the Akan religion, Dahomey (Fon) mythology, Efik mythology, Odinani, Serer religion (A ƭat Roog), and Yoruba religion, while Bushongo mythology, Mbuti (Pygmy) mythology, Lugbara mythology, Dinka religion, and Lotuko mythology come from central Africa. Southern African traditions include Akamba mythology, Masai mythology, Malagasy mythology, San religion, Lozi mythology, Tumbuka mythology, and Zulu mythology.
Bantu mythology is found throughout central, southeast, and southern Africa. In North Africa, these traditions include Berber and ancient Egyptian religion. There are also notable African diasporic religions practiced in the Americas, such as Santeria, Candomble, Vodun, Lucumi, Umbanda, and Macumba. Iranian Iranian religions are ancient religions whose roots predate the Islamization of Greater Iran. Nowadays these religions are practiced only by minorities. Zoroastrianism is based on the teachings of the prophet Zoroaster in the 6th century BCE. Zoroastrians worship the creator Ahura Mazda. In Zoroastrianism, good and evil have distinct sources, with evil trying to destroy the creation of Mazda, and good trying to sustain it. Kurdish religions include the traditional beliefs of the Yazidi, Alevi, and Ahl-e Haqq. Sometimes these are labeled Yazdânism. New religious movements The Baháʼí Faith teaches the unity of all religious philosophies. Cao Đài is a syncretistic, monotheistic religion, established in Vietnam in 1926. Eckankar is a pantheistic religion with the purpose of making God an everyday reality in one's life. Epicureanism is a Hellenistic philosophy that is considered by many of its practitioners as a type of (sometimes non-theistic) religious identity. It has its own scriptures, a monthly "feast of reason" on the Twentieth, and considers friendship to be holy. Hindu reform movements, such as Ayyavazhi, Swaminarayan Faith and Ananda Marga, are examples of new religious movements within Indian religions. Japanese new religions (shinshukyo) is a general category for a wide variety of religious movements founded in Japan since the 19th century. These
frequently used in the writings of Josephus in the first century CE. It was used in mundane contexts and could mean multiple things, from respectful fear to the excessive or harmfully distracting practices of others to cultic practices. It was often contrasted with the Greek word deisidaimonia, which meant too much fear. Religion and religions The modern concept of religion, as an abstraction that entails distinct sets of beliefs or doctrines, is a recent invention in the English language. Such usage began with texts from the 17th century due to events such as the splitting of Christendom during the Protestant Reformation and globalization in the age of exploration, which involved contact with numerous foreign cultures with non-European languages. Some argue that regardless of its definition, it is not appropriate to apply the term religion to non-Western cultures. Others argue that using religion on non-Western cultures distorts what people do and believe. The concept of religion was formed in the 16th and 17th centuries, despite the fact that ancient sacred texts like the Bible, the Quran, and others did not have a word or even a concept of religion in the original languages, and neither did the people or the cultures in which these sacred texts were written. For example, there is no precise equivalent of religion in Hebrew, and Judaism does not distinguish clearly between religious, national, racial, or ethnic identities. One of its central concepts is halakha, meaning "the walk" or "path", sometimes translated as law, which guides religious practice and belief and many aspects of daily life. Even though the beliefs and traditions of Judaism are found in the ancient world, ancient Jews saw Jewish identity as an ethnic or national identity that did not entail a compulsory belief system or regulated rituals. In the 1st century CE, Josephus used the Greek term ioudaismos (Judaism) as an ethnic term, and it was not linked to modern abstract concepts of religion or a set of beliefs. The very concept of "Judaism" was invented by the Christian Church, and it was in the 19th century that Jews began to see their ancestral culture as a religion analogous to Christianity. The Greek word threskeia, which was used by Greek writers such as Herodotus and Josephus, is found in the New Testament. Threskeia is sometimes translated as "religion" in today's translations; however, the term was understood as generic "worship" well into the medieval period. In the Quran, the Arabic word din is often translated as religion in modern translations, but up to the mid-1600s translators expressed din as "law". The Sanskrit word dharma, sometimes translated as religion, also means law. Throughout classical South Asia, the study of law consisted of concepts such as penance through piety and ceremonial as well as practical traditions. Medieval Japan at first had a similar union between imperial law and universal or Buddha law, but these later became independent sources of power. Though traditions, sacred texts, and practices have existed throughout time, most cultures did not align with Western conceptions of religion since they did not separate everyday life from the sacred. In the 18th and 19th centuries, the terms Buddhism, Hinduism, Taoism, Confucianism, and world religions first entered the English language. Native Americans were also thought of as not having religions, and they had no word for religion in their languages. No one self-identified as a Hindu or Buddhist or by other similar terms before the 1800s.
"Hindu" has historically been used as a geographical, cultural, and later religious identifier for people indigenous to the Indian subcontinent. Throughout its long history, Japan had no concept of religion since there was no corresponding Japanese word, nor anything close to its meaning, but when American warships appeared off the coast of Japan in 1853 and forced the Japanese government to sign treaties demanding, among other things, freedom of religion, the country had to contend with this idea. According to the philologist Max Müller in the 19th century, the root of the English word religion, the Latin religio, was originally used to mean only reverence for God or the gods, careful pondering of divine things, piety (which Cicero further derived to mean diligence). Max Müller characterized many other cultures around the world, including Egypt, Persia, and India, as having a similar power structure at this point in history. What is called ancient religion today, they would have only called law. Definition Scholars have failed to agree on a definition of religion. There are, however, two general definition systems: the sociological/functional and the phenomenological/philosophical. Modern Western The concept of religion originated in the modern era in the West. Parallel concepts are not found in many current and past cultures; there is no equivalent term for religion in many languages. Scholars have found it difficult to develop a consistent definition, with some giving up on the possibility of a definition. Others argue that regardless of its definition, it is not appropriate to apply it to non-Western cultures. An increasing number of scholars have expressed reservations about ever defining the essence of religion. They observe that the way the concept today is used is a particularly modern construct that would not have been understood through much of history and in many cultures outside the West (or even in the West until after the Peace of Westphalia). The MacMillan Encyclopedia of Religions states: The anthropologist Clifford Geertz defined religion as a Alluding perhaps to Tylor's "deeper motive", Geertz remarked that The theologian Antoine Vergote took the term supernatural simply to mean whatever transcends the powers of nature or human agency. He also emphasized the cultural reality of religion, which he defined as Peter Mandaville and Paul James intended to get away from the modernist dualisms or dichotomous understandings of immanence/transcendence, spirituality/materialism, and sacredness/secularity. They define religion as According to the MacMillan Encyclopedia of Religions, there is an experiential aspect to religion which can be found in almost every culture: Classical Friedrich Schleiermacher in the late 18th century defined religion as das schlechthinnige Abhängigkeitsgefühl, commonly translated as "the feeling of absolute dependence". His contemporary Georg Wilhelm Friedrich Hegel disagreed thoroughly, defining religion as "the Divine Spirit becoming conscious of Himself through the finite spirit." Edward Burnett Tylor defined religion in 1871 as "the belief in spiritual beings". He argued that narrowing the definition to mean the belief in a supreme deity or judgment after death or idolatry and so on, would exclude many peoples from the category of religious, and thus "has the fault of identifying religion rather with particular developments than with the deeper motive which underlies them". 
He also argued that the belief in spiritual beings exists in all known societies. In his book The Varieties of Religious Experience, the psychologist William James defined religion as "the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine". By the term divine James meant "any object that is godlike, whether it be a concrete deity or not" to which the individual feels impelled to respond with solemnity and gravity. The sociologist Émile Durkheim, in his seminal book The Elementary Forms of the Religious Life, defined religion as a "unified system of beliefs and practices relative to sacred things". By sacred things he meant things "set apart and forbidden—beliefs and practices which unite into one single moral community called a Church, all those who adhere to them". Sacred things are not, however, limited to gods or spirits. On the contrary, a sacred thing can be "a rock, a tree, a spring, a pebble, a piece of wood, a house, in a word, anything can be sacred". Religious beliefs, myths, dogmas and legends are the representations that express the nature of these sacred things, and the virtues and powers which are attributed to them. Echoes of James' and Durkheim's definitions are to be found in the writings of, for example, Frederick Ferré, who defined religion as "one's way of valuing most comprehensively and intensively". Similarly, for the theologian Paul Tillich, faith is "the state of being ultimately concerned", which "is itself religion. Religion is the substance, the ground, and the depth of man's spiritual life." When religion is seen in terms of sacred, divine, intensive valuing, or ultimate concern, then it is possible to understand why scientific findings and philosophical criticisms (e.g., those made by Richard Dawkins) do not necessarily disturb its adherents. Aspects Beliefs Traditionally, faith, in addition to reason, has been considered a source of religious beliefs. The interplay between faith and reason, and their use as perceived support for religious beliefs, have been a subject of interest to philosophers and theologians. The origin of religious belief as such is an open question, with possible explanations including awareness of individual death, a sense of community, and dreams. Mythology The word myth has several meanings: a traditional story of ostensibly historical events that serves to unfold part of the world view of a people or explain a practice, belief, or natural phenomenon; a person or thing having only an imaginary or unverifiable existence; or a metaphor for the spiritual potentiality in the human being. Ancient polytheistic religions, such as those of Greece, Rome, and Scandinavia, are usually categorized under the heading of mythology. Religions of pre-industrial peoples, or cultures in development, are similarly called myths in the anthropology of religion. The term myth can be used pejoratively by both religious and non-religious people. By defining another person's religious stories and beliefs as mythology, one implies that they are less real or true than one's own religious stories and beliefs. Joseph Campbell remarked, "Mythology is often thought of as other people's religions, and religion can be defined as mis-interpreted mythology." In sociology, however, the term myth has a non-pejorative meaning. There, myth is defined as a story that is important for the group whether or not it is objectively or provably true.
An example is the resurrection of Christianity's real-life founder, Jesus, which, to Christians, explains the means by which they are freed from sin, is symbolic of the power of life over death, and is also said to be a historical event. But from a mythological outlook, whether or not the event actually occurred is unimportant. Instead, the symbolism of the death of an old life and the start of a new life is what is most significant. Religious believers may or may not accept such symbolic interpretations. Practices The practices of a religion may include rituals, sermons, commemoration or veneration of a deity (god or goddess), sacrifices, festivals, feasts, trances, initiations, funerary services, matrimonial services, meditation, prayer, religious music, religious art, sacred dance, public service, or other aspects of human culture. Social organisation Religions have a societal basis, either as a living tradition which is carried by lay participants, or with an organized clergy, and a definition of what constitutes adherence or membership. Academic study A number of disciplines study the phenomenon of religion: theology, comparative religion, history of religion, evolutionary origin of religions, anthropology of religion, psychology of religion (including neuroscience of religion and evolutionary psychology of religion), law and religion, and sociology of religion. Daniel L. Pals mentions eight classical theories of religion, focusing on various aspects of religion: animism and magic, by E.B. Tylor and J.G. Frazer; the psycho-analytic approach of Sigmund Freud; and the approaches of Émile Durkheim, Karl Marx, Max Weber, Mircea Eliade, E.E. Evans-Pritchard, and Clifford Geertz. Michael Stausberg gives an overview of contemporary theories of religion, including cognitive and biological approaches. Theories Sociological and anthropological theories of religion generally attempt to explain the origin and function of religion. These theories define what they present as universal characteristics of religious belief and practice. Origins and development The origin of religion is uncertain. There are a number of theories regarding the subsequent origins of religious practices. According to anthropologists John Monaghan and Peter Just, "Many of the great world religions appear to have begun as revitalization movements of some sort, as the vision of a charismatic prophet fires the imaginations of people seeking a more comprehensive answer to their problems than they feel is provided by everyday beliefs. Charismatic individuals have emerged at many times and places in the world. It seems that the key to long-term success—and many movements come and go with little long-term effect—has relatively little to do with the prophets, who appear with surprising regularity, but more to do with the development of a group of supporters who are able to institutionalize the movement." The development of religion has taken different forms in different cultures. Some religions place an emphasis on belief, while others emphasize practice. Some religions focus on the subjective experience of the religious individual, while others consider the activities of the religious community to be most important. Some religions claim to be universal, believing their laws and cosmology to be binding for everyone, while others are intended to be practiced only by a closely defined or localized group.
In many places, religion has been associated with public institutions such as education, hospitals, the family, government, and political hierarchies. Anthropologists John Monaghan and Peter Just state that "it seems apparent that one thing religion or belief helps us do is deal with problems of human life that are significant, persistent, and intolerable. One important way in which religious beliefs accomplish this is by providing a set of ideas about how and why the world is put together that allows people to accommodate anxieties and deal with misfortune." Cultural system While religion is difficult to define, one standard model of religion, used in religious studies courses, was proposed by Clifford Geertz, who simply called it a "cultural system". A critique of Geertz's model by Talal Asad categorized religion as "an anthropological category". Richard Niebuhr's (1894–1962) five-fold classification of the relationship between Christ and culture, however, indicates that religion and culture can be seen as two separate systems, though not without some interplay.
done by a Reed professor of statistics and her students to investigate the mechanics of the ranking algorithm, attempting to see if Reed's ranking had been purposefully devalued because the school refused to submit its information to U.S. News. Admissions For Fall 2016, the freshman class had 357 students. 10% were valedictorians of their high school classes and another 2% were salutatorians. 32% ranked in the top 5% of their class. The median scores on their SAT tests were 680 math, 710 verbal, and 680 writing, which put them at the 96th percentile. The class was drawn from the largest pool ever—5,705 applicants—and was the most selective in Reed's history, with an admittance rate of 31%. To increase student enrollment from historically underrepresented minorities, Reed encourages such students to apply for the college's "Discover Reed Fly-In Program", an all-inclusive, all-expenses-paid, multi-day campus tour that is open to all high school seniors who are US citizens or permanent residents, regardless of their race or ethnicity. Tuition and finances The total direct cost for the 2018–19 academic year, including tuition, fees and room-and-board, is $70,550. Indirect costs (books, supplies, transportation, personal expenses) can add another $3,950. For the 2017–18 academic year, the average financial aid package – including grants, loans, and work opportunities – was approximately $45,325. In 2017–18 about half of students received financial aid from the college. In 2004 (the most recent data available), 1.4% of Reed graduates defaulted on their student loans – below the national Cohort Default Rate average of 5.1%. Reed's endowment as of June 30, 2021, was $779 million. In the economic downturn that began in late 2007, Reed's total endowment declined from $455 million in June 2007 to $311 million in June 2009. By the end of 2013, however, the endowment surpassed the $500 million mark. Academic honors Reed has produced the second-highest number of Rhodes scholars of any liberal arts college—32—as well as over fifty Fulbright Scholars, over sixty Watson Fellows, and two MacArthur ("Genius") Award winners. A very high proportion of Reed graduates go on to earn PhDs, particularly in the natural sciences, history, political science, and philosophy. Reed is ranked third in the percentage of graduates who go on to earn PhDs in all disciplines, after only Caltech and Harvey Mudd. In 1961, Scientific American declared that, second only to Caltech, "This small college in Oregon has been far and away more productive of future scientists than any other institution in the U.S." Reed is ranked first in producing PhDs in biology, second in chemistry and humanities, third in history, foreign languages, and political science, fourth in science and mathematics, fifth in physics and social sciences, sixth in anthropology, seventh in area and ethnic studies and linguistics, and eighth in English literature and medicine. Reed's debating team, which had existed for only two years at the time, was awarded the first place sweepstakes trophy for Division II schools at the final tournament of the Northwest Forensics Conference in February 2004. Loren Pope, former education editor for The New York Times, writes about Reed in Colleges That Change Lives, saying, "If you're a genuine intellectual, live the life of the mind, and want to learn for the sake of learning, the place most likely to empower you is not Harvard, Yale, Princeton, Chicago, or Stanford.
Academic honors
Reed has produced 32 Rhodes scholars, the second-highest total for any liberal arts college, as well as over fifty Fulbright Scholars, over sixty Watson Fellows, and two MacArthur ("Genius") Award winners. A very high proportion of Reed graduates go on to earn PhDs, particularly in the natural sciences, history, political science, and philosophy. Reed ranks third in the percentage of graduates who go on to earn PhDs across all disciplines, after only Caltech and Harvey Mudd. In 1961, Scientific American declared that, second only to Caltech, "This small college in Oregon has been far and away more productive of future scientists than any other institution in the U.S." Reed is ranked first in producing PhDs in biology, second in chemistry and humanities, third in history, foreign languages, and political science, fourth in science and mathematics, fifth in physics and social sciences, sixth in anthropology, seventh in area and ethnic studies and linguistics, and eighth in English literature and medicine. Reed's debating team, which had existed for only two years at the time, was awarded the first-place sweepstakes trophy for Division II schools at the final tournament of the Northwest Forensics Conference in February 2004. Loren Pope, former education editor for The New York Times, writes about Reed in Colleges That Change Lives: "If you're a genuine intellectual, live the life of the mind, and want to learn for the sake of learning, the place most likely to empower you is not Harvard, Yale, Princeton, Chicago, or Stanford. It is the most intellectual college in the country—Reed in Portland, Oregon."

Drug use
Since the 1960s, Reed has had a reputation for tolerating open drug use among its students. The Insider's Guide to the Colleges, written by the staff of the Yale Daily News, notes an impression among students of institutional permissiveness: "According to students, the school does not bust students for drug or alcohol use unless they cause harm or embarrassment to another student." In April 2008, student Alex Lluch died of a heroin overdose in his on-campus dorm room. His death prompted revelations of several previous incidents, including the near-fatal heroin overdose of another student only months earlier. College President Colin Diver said "I don't honestly know" whether the drug death was an isolated incident or part of a larger problem. "When you say Reed," Diver said, "two words often come to mind. One is brains. One is drugs." Local reporter James Pitkin of the newspaper Willamette Week editorialized that "Reed College, a private school with one of the most prestigious academic programs in the U.S., is one of the last schools in the country where students enjoy almost unlimited freedom to experiment openly with drugs, with little or no hassles from authorities," though the following week Willamette Week noted of Pitkin's piece: "As of press time, almost 500 responses, many expressing harsh criticism of Willamette Week, had been posted on our website." In March 2010, another student died of drug-related causes in his off-campus residence. This led The New York Times to conclude that "Reed…has long been known almost as much for its unusually permissive atmosphere as for its impressively rigorous academics." Law enforcement authorities promised to take action, including sending undercover agents to Reed's annual Renn Fayre celebration. In February 2012, the Reed administration chose to call the police following the discovery of "two to three pounds of marijuana and a small amount of ecstasy and LSD in the on-campus apartment of two juniors." Following campus debate, Reed's president at the time, Colin Diver, issued a letter to students and staff saying the college would not tolerate illegal drug use on campus: "Such behavior endangers the health and welfare of the entire community, attracts potentially dangerous criminal activity on campus, undermines the academic mission of the college, and violates the college's obligations under state and federal law."

Political and social activism
Reed has a reputation for being politically left-of-center. During the McCarthy era of the 1950s, then-President Duncan Ballantine fired Marxist philosopher Stanley Moore, a tenured professor, for his failure to cooperate with the House Un-American Activities Committee (HUAC) investigation.
A statement of "regret" by the Reed administration and Board of Trustees was published in 1981, formally revising the judgment of the 1954 trustees. In 1993, then-President Steve Koblik invited Moore to visit the college, and in 1995 the last surviving member of the Board that fired Moore expressed his regret and apologized to him. Reedies Against Racism On September 26, 2016, students organized a boycott of all college operations in participation with the National Day of Boycott, a national day of protest which was proposed by actor Isaiah Washington on Twitter in response to the issue of police brutality against African-Americans. Following the boycott, students created an activist group called Reedies Against Racism (RAR) and presented a list of demands for the college purportedly on behalf of students from marginalized backgrounds. The primary demand concerned Reed's mandatory freshman Humanities course, proposing that the course either be changed to be more inclusive of world literature and classics or to be made not mandatory. One element of the class deemed racist by the protestors was the use of the 1978 Steve Martin song "King Tut" in a discussion about cultural appropriation. Students began a protest campaign against the curriculum by sitting in during lectures with signs with quotations from various African-American and non-white academics. Other protests separate from the Humanities course also included efforts to shout down speakers, including Kimberly Peirce after she was accused of profiting from transphobia while making the film Boys Don't Cry. The group eventually focused on Reed's banking relationship with Wells Fargo, based on allegations that the bank had invested in the Dakota Access Pipeline project and the private prison industry, and staged an occupation of Reed's Eliot Hall. There was some opposition to the lecture protests, notably by Reed professor of English Lucía Martínez Valdivia, who stated that a protest during her lecture on Sappho would amplify her pre-existing case of PTSD. In November 2017, Chris Bodenner of The Atlantic wrote about growing student resentment toward the tactics of RAR. In response to protests the faculty decided to undergo the decennial review process a year early, as well as to complete the process in three months instead of the usual year. In January 2018, Humanities 110 Chair professor Libby Drumm announced in a campus-wide email that the course curriculum would be restructured after years of faculty discussion and in response to student feedback as well as input from an external review committee composed of humanities faculty from other institutes, adopting a "four-module structure" that would include texts from the Americas and allow greater flexibility in the curriculum which would be integrated beginning fall 2018. The external review had not in fact been completed nor reviewed at the time of the announcement. Following "a contentious year of protests, including an anti-racism sit-in in Kroger’s office," college president John Kroger resigned, effective June 2018. Campus The Reed College campus was established on a tract of land in southeast Portland known in 1910 as Crystal Springs Farm, a part of the Ladd Estate, formed in the 1870s from original land claims. The college's grounds include of contiguous land, including a wooded wetland known as Reed Canyon. Portland architect A. E. Doyle developed a plan, never implemented in full, modeled on the University of Oxford's St. John's College. 
The original campus buildings (including the Library, the Old Dorm Block, and what is now the primary administration building, Eliot Hall) are brick Tudor Gothic buildings in a style similar to that of Ivy League campuses. In contrast, the science section of campus, including the physics, biology, and psychology (originally chemistry) buildings, was designed in the Modernist style. The Psychology Building, completed in 1949, was designed by Modernist architect Pietro Belluschi at the same time as his celebrated Equitable Building in downtown Portland. The campus and buildings have undergone several phases of growth, and there are now 21 academic and administrative buildings and 18 residence halls. Since 2004, Reed's campus has expanded to include adjacent properties beyond its historic boundaries, such as the Birchwood Apartments complex and former medical administrative offices on either side of SE 28th Avenue, and the Parker House, across SE Woodstock from Prexy. At the same time, the Willard House (donated to Reed in 1964), across from the college's main entrance at SE Woodstock and SE Reed College Place, was converted from faculty housing to administrative use. Reed announced on July 13, 2007, that it had purchased the Rivelli farm, a tract of land south of the Garden House and west of Botsford Drive. Reed's "immediate plans for the acquired property include housing a small number of students in the former Rivelli home during the 2007–08 academic year. Longer term, the college anticipates that it may seek to develop the northern portion of the property for additional student housing".

Residence halls
Reed houses 945 students in 18 residence halls on campus and in several college-owned houses and apartment buildings on or adjacent to campus. The residence halls range from the traditional (the Gothic Old Dorm Block, referred to as "ODB") and the eclectic (Anna Mann, a Tudor-style cottage built in the 1920s by Reed's founding architect A. E. Doyle and originally used as a women's hall) to language houses (Spanish, Russian, French, German, and Chinese), "temporary" housing built in the 1960s (Cross Canyon: Chittick, Woodbridge, McKinley, and Griffin), and more recently built dorms (Bragdon, Naito, Sullivan). There are also theme residence halls, ranging from substance-free living to Japanese culture to music to a dorm for students interested in outdoor activities (hiking, climbing, bicycling, kayaking, skiing, etc.). The college's least-loved complex (as measured by applications in the college's housing lottery), MacNaughton and Foster-Scholz, is known on campus as "Asylum Block" because of its post-World War II modernist architecture and interior spaces dominated by long, straight corridors lined with identical doors, said by students to resemble those of an insane asylum. Until 2006, it was thought that these residence halls had been designed by architect Pietro Belluschi. Under the 10-year Campus Master Plan adopted in 2006, Foster-Scholz is scheduled to be demolished and replaced and MacNaughton to be remodeled. According to the master plan, "The College's goal is to provide housing on or adjacent to the campus that accommodates 75% of the [full-time] student population. At present, the College provides on-campus housing for 838 students". In Spring 2007, the college broke ground on a new quadrangle called the Grove, with four new LEED-certified residence halls (Aspen, Sequoia, Sitka, and Bidwell) on the northwest side of the campus, which opened in Fall 2008.
A new Spanish House residence was also completed; together, the five new residences added 142 beds. Reed also has off-campus housing: many houses in the Woodstock and Eastmoreland neighborhoods of Portland are traditionally rented to Reed students. On February 21, 2018, Reed announced the construction of the "largest residence hall in its history." Set to be complete by Fall 2019, it will house an additional 180 students, boosting Reed's housing capacity to nearly 80% of the student body, up from 68%. This will guarantee housing for both freshmen and sophomores, who were formerly subject to a housing lottery after freshman year.
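As a quick plausibility check on those figures, the announcement implies a student body of roughly 1,500. A back-of-the-envelope sketch in Python, under the assumption (not stated in the source) that "nearly 80%" means exactly 80%:

    # Implied enrollment behind the 2018 housing announcement (illustrative only;
    # assumes "nearly 80%" is exactly 80%, which the source does not state).
    added_beds = 180
    old_share, new_share = 0.68, 0.80   # housing capacity before and after

    implied_enrollment = added_beds / (new_share - old_share)
    print(f"Implied student body: ~{implied_enrollment:.0f} students")  # ~1500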
The new building is also designed to meet "LEED Platinum standards," and Reed is currently evaluating proposals to put solar panels on its roof.

Reed Canyon
The Reed College Canyon, a natural area and national wildlife preserve, bisects the campus, separating the academic buildings from many of the residence halls (the so-called cross-canyon halls). The canyon is fed by Crystal Springs, natural springs that drain into Johnson Creek. Canyon Day, a tradition dating back to 1915, is held twice a year; on Canyon Day, students and Reed neighbors join canyon crew workers to spend a day helping with restoration efforts. A landmark of the campus, the Blue Bridge, spans the canyon. It replaced the unique cantilevered bridge that served in that spot between 1959 and 1991, a straight bridge which "featured stressed plywood girders – the first time this construction had been used on a span of this size" and which "attracted great architectural interest during its lifetime". A new pedestrian and bicycle bridge spanning the canyon opened in Fall 2008. This bridge, dubbed the "Bouncy Bridge", "Orange Bridge", and in some cases the "Amber Bridge" by students, is about a third longer than the Blue Bridge and "connect[s] the new north campus quad to Gray Campus